proofpile-arXiv_065-241
\section{Motivation} Chromium is considered the archetypical itinerant antiferromagnet~\cite{1988_Fawcett_RevModPhys, 1994_Fawcett_RevModPhys}. Interestingly, it shares its body-centered cubic crystal structure $Im\overline{3}m$ with the archetypical itinerant ferromagnet $\alpha$-iron and, close to the melting temperature, with all compositions Fe$_{x}$Cr$_{1-x}$~\cite{2010_Okamoto_Book}. As a result, the Cr--Fe system offers the possibility to study the interplay of two fundamental forms of magnetic order in the same crystallographic environment. Chromium exhibits transverse spin-density wave order below a N\'{e}el temperature $T_{\mathrm{N}} = 311$~K and longitudinal spin-density wave order below $T_{\mathrm{SF}} = 123$~K~\cite{1988_Fawcett_RevModPhys}. Under substitutional doping with iron, the longitudinal spin-density wave order becomes commensurate at $x = 0.02$. For $0.04 < x$, only commensurate antiferromagnetic order is observed~\cite{1967_Ishikawa_JPhysSocJpn, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. The N\'{e}el temperature at first decreases linearly with increasing $x$ and vanishes around $x \approx 0.15$~\cite{1967_Ishikawa_JPhysSocJpn, 1976_Suzuki_JPhysSocJpn, 1978_Burke_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. Upon increasing $x$ further, a regime with a putative lack of long-range magnetic order~\cite{1978_Burke_JPhysFMetPhys} is followed by the onset of ferromagnetic order at $x \approx 0.18$ with a monotonic increase of the Curie temperature up to $T_{\mathrm{C}} = 1041$~K in pure $\alpha$-iron~\cite{1963_Nevitt_JApplPhys, 1975_Loegel_JPhysFMetPhys, 1980_Fincher_PhysRevLett, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. The suppression of magnetic order is reminiscent of quantum critical systems under pressure~\cite{2001_Stewart_RevModPhys, 2007_Lohneysen_RevModPhys, 2008_Broun_NatPhys}, as substitutional doping of chromium with iron decreases the unit cell volume. In comparison to stoichiometric systems tuned by hydrostatic pressure, however, disorder and local strain are expected to play a crucial role in Fe$_{x}$Cr$_{1-x}$. This conjecture is consistent with reports on superparamagnetic behavior for $0.20 \leq x \leq 0.29$~\cite{1975_Loegel_JPhysFMetPhys}, mictomagnetic behavior~\footnote{In mictomagnetic materials, the virgin magnetic curves recorded in magnetization measurements as a function of field lie outside of the hysteresis loops recorded when starting from high field~\cite{1976_Shull_SolidStateCommunications}.} gradually evolving towards ferromagnetism for $0.09 \leq x \leq 0.23$~\cite{1975_Shull_AIPConferenceProceedings}, and spin-glass behavior for $0.14 \leq x \leq 0.19$~\cite{1979_Strom-Olsen_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. Despite this rather unique combination of properties, notably a metallic spin glass emerging at the border of both itinerant antiferromagnetic and ferromagnetic order, comprehensive studies addressing the magnetic properties of Fe$_{x}$Cr$_{1-x}$ in the concentration range of putative quantum criticality are lacking. In particular, a classification of the spin-glass regime has, to the best of our knowledge, not been addressed before.
Here, we report a study of polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ covering the concentration range $0.05 \leq x \leq 0.30$, i.e., from antiferromagnetic doped chromium well into the ferromagnetically ordered state of doped iron. The compositional phase diagram inferred from magnetization and ac susceptibility measurements is in agreement with previous reports~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. As perhaps the most notable new observation, we identify in the imaginary part of the ac susceptibility a precursor phenomenon preceding the onset of spin-glass behavior. For the spin-glass state, analysis of ac susceptibility data recorded at different excitation frequencies by means of the Mydosh parameter, power-law fits, and a Vogel--Fulcher ansatz establishes a crossover from cluster-glass to superparamagnetic behavior as a function of increasing $x$. Microscopic evidence for this evolution is provided by neutron depolarization, indicating an increase of the size of ferromagnetic clusters with $x$. Our paper is organized as follows. In Sec.~\ref{sec:methods}, the preparation of the samples and their metallurgical characterization by means of x-ray powder diffraction are reported. In addition, experimental details are briefly described. Providing a first point of reference, the presentation of the experimental results starts in Sec.~\ref{sec:results} with the compositional phase diagram as inferred in our study, before turning to a detailed description of the ac susceptibility and magnetization data. Next, neutron depolarization data are presented, from which the size of ferromagnetically ordered clusters is extracted by means of exponential fits. Exemplary data on the specific heat, electrical resistivity, and high-field magnetization for $x = 0.15$ complete this section. In Sec.~\ref{sec:discussion}, information on the nature of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and its evolution under increasing $x$ is inferred from an analysis of ac susceptibility data recorded at different excitation frequencies. Finally, in Sec.~\ref{sec:conclusion} the central findings of this study are summarized. \section{Experimental methods} \label{sec:methods} Polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ for $0.05 \leq x \leq 0.30$ ($x = 0.05$, 0.10, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.25, 0.30) were prepared from iron (4N) and chromium (5N) pieces by means of radio-frequency induction melting in a bespoke high-purity furnace~\cite{2016_Bauer_RevSciInstrum}. No losses in weight or signatures of evaporation were observed. In turn, the composition is denoted in terms of the weighed-in amounts of starting material. Prior to the synthesis, the furnace was pumped to ultra-high vacuum and subsequently flooded with 1.4~bar of argon (6N) treated by a point-of-use gas purifier yielding a nominal purity of 9N. For each sample, the starting elements were melted in a water-cooled Hukin crucible and the resulting specimen was kept molten for about 10~min to promote homogenization. Finally, the sample was quenched to room temperature. With this approach, the exsolution of the compound into two phases upon slow cooling, as suggested by the binary phase diagram of the Fe--Cr system reported in Ref.~\cite{2010_Okamoto_Book}, was prevented. From the resulting ingots, samples were cut with a diamond wire saw. \begin{figure} \includegraphics[width=1.0\linewidth]{figure1} \caption{\label{fig:1}X-ray powder diffraction data of Fe$_{x}$Cr$_{1-x}$.
(a)~Diffraction pattern for $x = 0.15$. The Rietveld refinement (red curve) is in excellent agreement with the experimental data and confirms the $Im\overline{3}m$ structure. (b)~Diffraction pattern around the (011) peak for all concentrations studied. For clarity, the intensities are normalized and curves are offset by 0.1. Inset: Linear decrease of the lattice constant $a$ with increasing $x$. The solid gray line represents a guide to the eye.} \end{figure} Powder was prepared from a small piece of each ingot using an agate mortar. X-ray powder diffraction at room temperature was carried out on a Huber G670 diffractometer using a Guinier geometry. Figure~\ref{fig:1}(a) shows the diffraction pattern for $x = 0.15$, representing typical data. A Rietveld refinement based on the $Im\overline{3}m$ structure yields a lattice constant $a = 2.883$~\AA. Refinement and experimental data are in excellent agreement, indicating a high structural quality and homogeneity of the polycrystalline samples. With increasing $x$, the diffraction peaks shift to larger angles, as shown for the (011) peak in Fig.~\ref{fig:1}(b), consistent with a linear decrease of the lattice constant in accordance with Vegard's law. Measurements of the magnetic properties and neutron depolarization were carried out on thin discs with a thickness of ${\sim}0.5$~mm and a diameter of ${\sim}10$~mm. Specific heat and electrical transport for $x = 0.15$ were measured on a cube of 2~mm edge length and a platelet of dimensions $5\times2\times0.5~\textrm{mm}^{3}$, respectively. The magnetic properties, the specific heat, and the electrical resistivity were measured in a Quantum Design physical properties measurement system. The magnetization was measured by means of an extraction technique. If not stated otherwise, the ac susceptibility was measured at an excitation amplitude of 0.1~mT and an excitation frequency of 1~kHz. Additional ac susceptibility data for the analysis of the spin-glass behavior were recorded at frequencies ranging from 10~Hz to 10~kHz. The specific heat was measured using a quasi-adiabatic large-pulse technique with heat pulses of about 30\% of the current temperature~\cite{2013_Bauer_PhysRevLett}. For the measurements of the electrical resistivity, the samples were contacted in a four-terminal configuration and a bespoke setup was used based on a lock-in technique at an excitation amplitude of 1~mA and an excitation frequency of 22.08~Hz. Magnetic field and current were applied perpendicular to each other, corresponding to the transverse magnetoresistance. Neutron depolarization measurements were carried out at the instrument ANTARES~\cite{2015_Schulz_JLarge-ScaleResFacilJLSRF} at the Heinz Maier-Leibnitz Zentrum~(MLZ). The incoming neutron beam had a wavelength $\lambda = 4.13$~\AA\ and a wavelength spread $\Delta\lambda / \lambda = 10\%$. It was polarized using V-cavity supermirrors. The beam was transmitted through the sample and its polarization was analyzed using a second polarizing V-cavity. While nonmagnetic samples do not affect the polarization of the neutron beam, the presence of ferromagnetic domains in general results in a precession of the neutron spins. In turn, the transmitted polarization with respect to the polarization axis of the incoming beam is reduced. This effect is referred to as neutron depolarization. Low temperatures and magnetic fields for this experiment were provided by a closed-cycle refrigerator and water-cooled Helmholtz coils, respectively.
A small guide field of 0.5~mT was generated by means of permanent magnets. For further information on the neutron depolarization setup, we refer to Refs.~\cite{2015_Schmakat_PhD, 2017_Seifert_JPhysConfSer, 2019_Jorba_JMagnMagnMater}. All data shown as a function of temperature in this paper were recorded at a fixed magnetic field under increasing temperature. Depending on how the sample was cooled to 2~K prior to the measurement, three temperature versus field histories are distinguished. The sample was either cooled (i)~in zero magnetic field (zero-field cooling, zfc), (ii)~with the field at the value applied during the measurement (field cooling, fc), or (iii)~in a field of 250~mT (high-field cooling, hfc). For the magnetization data as a function of field, the sample was cooled in zero field. Subsequently, data were recorded during the initial increase of the field to $+250$~mT corresponding to a magnetic virgin curve, followed by a decrease to $-250$~mT, and a final increase back to $+250$~mT. \section{Experimental results} \label{sec:results} \subsection{Phase diagram and bulk magnetic properties} \begin{figure} \includegraphics[width=1.0\linewidth]{figure2} \caption{\label{fig:2}Zero-field composition--temperature phase diagram of Fe$_{x}$Cr$_{1-x}$. Data inferred from ac susceptibility, $\chi_{\mathrm{ac}}$, and neutron depolarization are combined with data reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. Paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), and spin-glass~(SG) regimes are distinguished. A precursor phenomenon is observed above the dome of spin-glass behavior (purple line). (a)~Overview. (b) Close-up view of the regime of spin-glass behavior as marked by the dashed box in panel (a).} \end{figure} The presentation of the experimental results starts with the compositional phase diagram of Fe$_{x}$Cr$_{1-x}$, illustrating central results of our study. An overview of the entire concentration range studied, $0.05 \leq x \leq 0.30$, and a close-up view around the dome of spin-glass behavior are shown in Figs.~\ref{fig:2}(a) and \ref{fig:2}(b), respectively. Characteristic temperatures inferred in this study are complemented by values reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}, in good agreement with our results. Comparing the different physical properties in our study, we find that the imaginary part of the ac susceptibility displays the most pronounced signatures at the various phase transitions and crossovers. Therefore, the imaginary part was used to define the characteristic temperatures as discussed in the following. The same values are then marked in the different physical properties to highlight the consistency with alternative definitions of the characteristic temperatures based on these properties. Four regimes may be distinguished in the phase diagram, namely paramagnetism at high temperatures (PM, no shading), antiferromagnetic order for small values of $x$ (AFM, green shading), ferromagnetic order for larger values of $x$ (FM, blue shading), and spin-glass behavior at low temperatures (SG, orange shading). We note that faint signatures reminiscent of those attributed to the onset of ferromagnetic order are observed in the susceptibility and neutron depolarization for $0.15 \leq x \leq 0.18$ (light blue shading). 
In addition, a distinct precursor phenomenon preceding the spin-glass behavior is observed at the temperature $T_{\mathrm{X}}$ (purple line) across a wide concentration range. Before elaborating on the underlying experimental data, we briefly summarize the key characteristics of the different regimes. We identify the onset of antiferromagnetic order below the N\'{e}el temperature $T_{\mathrm{N}}$ for $x = 0.05$ and $x = 0.10$ from a sharp kink in the imaginary part of the ac susceptibility, where values of $T_{\mathrm{N}}$ are consistent with previous reports~\cite{1978_Burke_JPhysFMetPhys, 1983_Burke_JPhysFMetPhys_I}. As may be expected, the transition is not sensitive to changes of the magnetic field, excitation frequency, or cooling history. The absolute value of the magnetization is small, and it increases essentially linearly as a function of field in the parameter range studied. We identify the emergence of ferromagnetic order below the Curie temperature $T_{\mathrm{C}}$ for $0.18 \leq x$ from a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields of a few millitesla. This interpretation is corroborated by the onset of neutron depolarization. The transition is not sensitive to changes of the excitation frequency or cooling history. The magnetic field dependence of the magnetization exhibits a characteristic S-shape with almost vanishing hysteresis, reaching quasi-saturation at small fields. Both characteristics are expected for a soft ferromagnetic material such as iron. For $0.15 \leq x \leq 0.18$, faint signatures reminiscent of those observed for $0.18 \leq x$, such as a small shoulder instead of a maximum in the imaginary part of the ac susceptibility, are interpreted in terms of an incipient onset of ferromagnetic order. We identify reentrant spin-glass behavior below a freezing temperature $T_{\mathrm{g}}$ for $0.10 \leq x \leq 0.25$ from a pronounced maximum in the imaginary part of the ac susceptibility that is suppressed at intermediate magnetic fields of the order of 50~mT. The transition shifts to higher temperatures with increasing excitation frequency, representing a hallmark of spin glasses. Further key indications for spin-glass behavior below $T_{\mathrm{g}}$ are a branching between different cooling histories in the temperature dependence of the magnetization and neutron depolarization, as well as mictomagnetic behavior in the field dependence of the magnetization, i.e., the virgin magnetic curve lies outside the hysteresis loop obtained when starting from high magnetic field. In addition, we identify a precursor phenomenon preceding the onset of spin-glass behavior at a temperature $T_{\mathrm{X}}$ based on a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields, reminiscent of the ferromagnetic transition. With increasing excitation frequency, the maximum shifts to higher temperatures, albeit at a smaller rate than the freezing temperature $T_{\mathrm{g}}$. Interestingly, the magnetization and neutron depolarization exhibit no signatures at $T_{\mathrm{X}}$. \subsection{Zero-field ac susceptibility} \begin{figure} \includegraphics[width=1.0\linewidth]{figure3} \caption{\label{fig:3}Zero-field ac susceptibility as a function of temperature for all samples studied. For each concentration, real part (Re\,$\chi_{\mathrm{ac}}$, left column) and imaginary part (Im\,$\chi_{\mathrm{ac}}$, right column) of the susceptibility are shown.
Note the logarithmic temperature scale and the increasing scale on the ordinate with increasing $x$. Triangles mark temperatures associated with the onset of antiferromagnetic order at $T_{\mathrm{N}}$ (green), spin-glass behavior at $T_{\mathrm{g}}$ (red), ferromagnetic order at $T_{\mathrm{C}}$ (blue), and the precursor phenomenon at $T_{\mathrm{X}}$ (purple). The corresponding values are inferred from Im\,$\chi_{\mathrm{ac}}$, see text for details.} \end{figure} The real and imaginary parts of the zero-field ac susceptibility on a logarithmic temperature scale are shown in Fig.~\ref{fig:3} for each sample studied. Characteristic temperatures are inferred from the imaginary part and marked by colored triangles in both quantities. While the identification of the underlying transitions and crossovers will be justified further in terms of the dependence of the signatures on magnetic field, excitation frequency, and history, as elaborated below, the corresponding temperatures are already referred to as $T_{\mathrm{N}}$, $T_{\mathrm{C}}$, $T_{\mathrm{g}}$, and $T_{\mathrm{X}}$ in the following. For small iron concentrations, such as $x = 0.05$ shown in Fig.~\ref{fig:3}(a), the real part is small and essentially featureless, with the exception of an increase at low temperatures that may be attributed to the presence of ferromagnetic impurities, i.e., a so-called Curie tail~\cite{1972_DiSalvo_PhysRevB, 2014_Bauer_PhysRevB}. The imaginary part is also small but displays a kink at the N\'{e}el temperature $T_{\mathrm{N}}$. In metallic specimens, such as Fe$_{x}$Cr$_{1-x}$, part of the dissipation detected via the imaginary part of the ac susceptibility arises from the excitation of eddy currents at the surface of the sample. Eddy current losses scale with the resistivity~\cite{1998_Jackson_Book, 1992_Samarappuli_PhysicaCSuperconductivity} and in turn the kink at $T_{\mathrm{N}}$ reflects the distinct change of the electrical resistivity at the onset of long-range antiferromagnetic order. When increasing the iron concentration to $x = 0.10$, as shown in Fig.~\ref{fig:3}(b), both the real and imaginary parts increase by one order of magnitude. Starting at $x = 0.10$, a broad maximum may be observed in the real part that indicates an onset of magnetic correlations; the lack of further fine structure, however, renders the extraction of more detailed information impossible. In contrast, the imaginary part exhibits several distinct signatures that, in combination with data presented below, allow the phase diagram shown in Fig.~\ref{fig:2} to be inferred. For $x = 0.10$, in addition to the kink at $T_{\mathrm{N}}$ a maximum may be observed at 3~K, which we attribute to the spin freezing at $T_{\mathrm{g}}$. Further increasing the iron concentration to $x = 0.15$, as shown in Fig.~\ref{fig:3}(c), results again in an increase of both the real and imaginary parts by one order of magnitude. The broad maximum in the real part shifts to slightly larger temperatures. In the imaginary part, two distinct maxima are resolved, accompanied by a shoulder at their high-temperature side. From low to high temperatures, these signatures may be attributed to $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and a potential onset of ferromagnetism at $T_{\mathrm{C}}$. No signatures related to antiferromagnetism may be discerned. For $x = 0.16$ and 0.17, shown in Figs.~\ref{fig:3}(d) and \ref{fig:3}(e), both the real and imaginary parts remain qualitatively unchanged while their absolute values increase further.
The characteristic temperatures shift slightly to larger values. For $x = 0.18$, 0.19, 0.20, 0.21, and 0.22, shown in Figs.~\ref{fig:3}(f)--\ref{fig:3}(j), the size of the real and imaginary parts of the susceptibility remains essentially unchanged. The real part is best described in terms of a broad maximum that becomes increasingly asymmetric as the low-temperature extrapolation of the susceptibility increases with $x$. In the imaginary part, the signature ascribed to the onset of ferromagnetic order at $T_{\mathrm{C}}$ develops into a clear maximum at larger concentrations, overlapping with the maximum at $T_{\mathrm{X}}$ up to $x = 0.20$. For $x = 0.21$ and $x = 0.22$, three well-separated maxima may be attributed to the characteristic temperatures $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$. While both $T_{\mathrm{g}}$ and $T_{\mathrm{X}}$ stay almost constant with increasing $x$, $T_{\mathrm{C}}$ distinctly shifts to higher temperatures. For $x = 0.25$, shown in Fig.~\ref{fig:3}(k), the signature attributed to $T_{\mathrm{X}}$ has vanished while $T_{\mathrm{g}}$ is suppressed to about 5~K. For $x = 0.30$, shown in Fig.~\ref{fig:3}(l), only the ferromagnetic transition at $T_{\mathrm{C}}$ remains and the susceptibility is essentially constant below $T_{\mathrm{C}}$. Note that the suppression of spin-glass behavior around $x = 0.25$ coincides with the percolation limit of 24.3\% in the crystal structure $Im\overline{3}m$, i.e., the limit above which long-range magnetic order is expected in spin-glass systems~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}. Table~\ref{tab:1} summarizes the characteristic temperatures for all samples studied, including an estimate of the associated errors. \subsection{Magnetization and ac susceptibility under applied magnetic fields} \begin{figure*} \includegraphics[width=1.0\linewidth]{figure4} \caption{\label{fig:4}Magnetization and ac susceptibility in magnetic fields up to 250~mT for selected concentrations (increasing from top to bottom). Triangles mark the temperatures $T_{\mathrm{N}}$ (green), $T_{\mathrm{g}}$ (red), $T_{\mathrm{C}}$ (blue), and $T_{\mathrm{X}}$ (purple). The values shown in all panels correspond to those inferred from Im\,$\chi_{\mathrm{ac}}$ in zero field. \mbox{(a1)--(f1)}~Real part of the ac susceptibility, Re\,$\chi_{\mathrm{ac}}$, as a function of temperature on a logarithmic scale for different magnetic fields. \mbox{(a2)--(f2)}~Imaginary part of the ac susceptibility, Im\,$\chi_{\mathrm{ac}}$. \mbox{(a3)--(f3)}~Magnetization for three different field histories, namely high-field cooling~(hfc), field cooling (fc), and zero-field cooling (zfc). \mbox{(a4)--(f4)}~Magnetization as a function of field at a temperature of 2~K after initial zero-field cooling. Arrows indicate the sweep directions. The scales of the ordinates for all quantities increase from top to bottom.} \end{figure*} In order to further substantiate the assignment of the signatures in the ac susceptibility to the different phases, their evolution under increasing magnetic field up to 250~mT and their dependence on the cooling history are illustrated in Fig.~\ref{fig:4}. For selected values of $x$, the temperature dependences of the real part of the ac susceptibility, the imaginary part of the ac susceptibility, and the magnetization, shown in the first three columns, are complemented by the magnetic field dependence of the magnetization at low temperature, $T = 2$~K, shown in the fourth column.
For small iron concentrations, such as $x = 0.05$ shown in Figs.~\ref{fig:4}(a1)--\ref{fig:4}(a4), both Re\,$\chi_{\mathrm{ac}}$ and Im\,$\chi_{\mathrm{ac}}$ remain qualitatively unchanged up to the highest fields studied. The associated stability of the transition at $T_{\mathrm{N}}$ under magnetic field represents a key characteristic of itinerant antiferromagnetism, which is also observed in pure chromium. Consistent with this behavior, the magnetization is small and increases essentially linearly in the field range studied. No dependence on the cooling history is observed. For intermediate iron concentrations, such as $x = 0.15$, $x = 0.17$, and $x = 0.18$ shown in Figs.~\ref{fig:4}(b1) to \ref{fig:4}(d4), the broad maximum in Re\,$\chi_{\mathrm{ac}}$ is suppressed under increasing field. Akin to the situation in zero field, the evolution of the different characteristic temperatures is tracked in Im\,$\chi_{\mathrm{ac}}$. Here, the signatures associated with $T_{\mathrm{X}}$ and $T_{\mathrm{C}}$ prove to be highly sensitive to magnetic fields and are suppressed already above about 2~mT. The maximum associated with the spin freezing at $T_{\mathrm{g}}$ is suppressed at higher field values. In the magnetization as a function of temperature, shown in Figs.~\ref{fig:4}(b3) to \ref{fig:4}(d3), a branching between different cooling histories may be observed below $T_{\mathrm{g}}$. Compared to data recorded after field cooling (fc), for which the temperature dependence of the magnetization is essentially featureless at $T_{\mathrm{g}}$, the magnetization at low temperatures is reduced for data recorded after zero-field cooling (zfc) and enhanced for data recorded after high-field cooling (hfc). Such a history dependence is typical for spin glasses~\cite{2015_Mydosh_RepProgPhys}, but is also observed in materials where the orientation and population of domains with a net magnetic moment play a role, such as conventional ferromagnets. The spin-glass character below $T_{\mathrm{g}}$ is corroborated by the field dependence of the magnetization shown in Figs.~\ref{fig:4}(b4) to \ref{fig:4}(d4), which is perfectly consistent with the temperature dependence. Most notably, in the spin-glass regime at low temperatures, mictomagnetic behavior is observed, i.e., the magnetization of the magnetic virgin state obtained after initial zero-field cooling (red curve) lies partly outside the hysteresis loop obtained when starting from the field-polarized state at large fields (blue curves)~\cite{1976_Shull_SolidStateCommunications}. This peculiar behavior is not observed in ferromagnets and represents a hallmark of spin glasses~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}. For slightly larger iron concentrations, such as $x = 0.22$ shown in Figs.~\ref{fig:4}(e1) to \ref{fig:4}(e4), three maxima at $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$ are clearly separated. With increasing field, first the high-temperature maximum associated with $T_{\mathrm{C}}$ is suppressed, followed by the maxima at $T_{\mathrm{X}}$ and $T_{\mathrm{g}}$. The hysteresis loop at low temperatures is narrower, becoming akin to that of a conventional soft ferromagnet. For large iron concentrations, such as $x = 0.30$ shown in Figs.~\ref{fig:4}(f1) to \ref{fig:4}(f4), the evolution of Re\,$\chi_{\mathrm{ac}}$, Im\,$\chi_{\mathrm{ac}}$, and the magnetization as a function of magnetic field consistently corresponds to that of a conventional soft ferromagnet with a Curie temperature $T_{\mathrm{C}}$ of more than 200~K.
For the ferromagnetic state observed here, all domains are aligned in fields exceeding ${\sim}50$~mT. \begin{table} \caption{\label{tab:1}Summary of the characteristic temperatures in Fe$_{x}$Cr$_{1-x}$ as inferred from the imaginary part of the ac susceptibility and neutron depolarization data. We distinguish the N\'{e}el temperature $T_{\mathrm{N}}$, the Curie temperature $T_{\mathrm{C}}$, the spin freezing temperature $T_{\mathrm{g}}$, and the precursor phenomenon at $T_{\mathrm{X}}$. Temperatures inferred from neutron depolarization data are denoted with the superscript `D'. For $T_{\mathrm{C}}^{\mathrm{D}}$, the errors were extracted from the fitting procedure (see below), while all other errors correspond to estimates of read-out errors.} \begin{ruledtabular} \begin{tabular}{ccccccc} $x$ & $T_{\mathrm{N}}$ (K) & $T_{\mathrm{g}}$ (K) & $T_{\mathrm{X}}$ (K) & $T_{\mathrm{C}}$ (K) & $T_{\mathrm{g}}^{\mathrm{D}}$ (K) & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) \\ \hline 0.05 & $240 \pm 5$ & - & - & - &- & - \\ 0.10 & $190 \pm 5$ & $3 \pm 5$ & - & - & - & - \\ 0.15 & - & $11 \pm 2$ & $23 \pm 3$ & $30 \pm 10$ & - & - \\ 0.16 & - & $15 \pm 2$ & $34 \pm 3$ & $42 \pm 10$ & $18 \pm 5$ & $61 \pm 10$ \\ 0.17 & - & $20 \pm 2$ & $36 \pm 3$ & $42 \pm 10$ & $23 \pm 5$ & $47 \pm 2$ \\ 0.18 & - & $22 \pm 2$ & $35 \pm 3$ & $42 \pm 10$ & $22 \pm 5$ & $73 \pm 1$ \\ 0.19 & - & $19 \pm 2$ & $37 \pm 5$ & $56 \pm 10$ & $25 \pm 5$ & $93 \pm 1$ \\ 0.20 & - & $19 \pm 2$ & $35 \pm 5$ & $50 \pm 10$ & $24 \pm 5$ & $84 \pm 1$ \\ 0.21 & - & $14 \pm 2$ & $35 \pm 5$ & $108 \pm 5$ & $25 \pm 5$ & $101 \pm 1$ \\ 0.22 & - & $13 \pm 2$ & $32 \pm 5$ & $106 \pm 5$ & $21 \pm 5$ & $100 \pm 1$ \\ 0.25 & - & $5 \pm 5$ & - & $200 \pm 5$ & - & - \\ 0.30 & - & - & - & $290 \pm 5$ & - & - \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Neutron depolarization} \begin{figure} \includegraphics{figure5} \caption{\label{fig:5}Remaining neutron polarization after transmission through 0.5~mm of Fe$_{x}$Cr$_{1-x}$ as a function of temperature for $0.15 \leq x \leq 0.22$ (increasing from top to bottom). Data were measured in zero magnetic field under increasing temperature following initial zero-field cooling (zfc) or high-field cooling (hfc). Colored triangles mark the Curie transition $T_{\mathrm{C}}$ and the freezing temperature $T_{\mathrm{g}}$. Orange solid lines are fits to the experimental data, see text for details.} \end{figure} The neutron depolarization of samples in the central composition range $0.15 \leq x \leq 0.22$ was studied to gain further insights on the microscopic nature of the different magnetic states. Figure~\ref{fig:5} shows the polarization, $P$, of the transmitted neutron beam with respect to the polarization axis of the incoming neutron beam as a function of temperature. In the presence of ferromagnetically ordered domains or clusters that are large enough to induce a Larmor precession of the neutron spin during its transit, adjacent neutron trajectories pick up different Larmor phases due to the domain distribution in the sample. When averaged over the pixel size of the detector, this process results in polarization values below 1, also referred to as neutron depolarization. For a pedagogical introduction to the time and space resolution of this technique, we refer to Refs.~\cite{2008_Kardjilov_NatPhys, 2010_Schulz_PhD, 2015_Schmakat_PhD, _Seifert_tobepublished}. For $x = 0.15$, shown in Fig.~\ref{fig:5}(a), no depolarization is observed. 
For $x = 0.16$, shown in Fig.~\ref{fig:5}(b), a weak decrease of polarization emerges below a point of inflection at $T_{\mathrm{C}} \approx 60$~K (blue triangle). The value of $T_{\mathrm{C}}$ may be inferred from a fit to the experimental data as described below and is in reasonable agreement with the value inferred from the susceptibility. The partial character of the depolarization, $P \approx 0.96$ in the low-temperature limit, indicates that ferromagnetically ordered domains of sufficient size occupy only a fraction of the sample volume. At lower temperatures, a weak additional change of slope may be attributed to the spin freezing at $T_{\mathrm{g}}$ (red triangle). For $x = 0.17$, shown in Fig.~\ref{fig:5}(c), both signatures become more pronounced. In particular, data recorded after zero-field cooling (zfc) and high-field cooling (hfc) branch below $T_{\mathrm{g}}$, akin to the branching observed in the magnetization. The underlying dependence of the microscopic magnetic texture on the cooling history is typical for a spin glass. Note that the amount of branching varies from sample to sample. Such pronounced sample dependence is not uncommon in spin-glass systems, though the microscopic origin of these irregularities in Fe$_{x}$Cr$_{1-x}$ remains to be resolved. When further increasing $x$, as shown in Figs.~\ref{fig:5}(c)--\ref{fig:5}(h), the transition temperature $T_{\mathrm{C}}$ shifts to larger values and the depolarization becomes more pronounced until essentially reaching $P = 0$ at low temperatures for $x = 0.22$. No qualitative changes are observed around $x = 0.19$, i.e., the composition for which the onset of long-range ferromagnetic order was reported previously~\cite{1983_Burke_JPhysFMetPhys_II}. Instead, the gradual evolution as a function of $x$ suggests that ferromagnetically ordered domains start to emerge already for $x \approx 0.15$ and continuously increase in size and/or number with $x$. This conjecture is also consistent with the appearance of faint signatures in the susceptibility. Note that there are no signatures related to $T_{\mathrm{X}}$. In order to infer quantitative information, the neutron depolarization data were fitted using the formalism of Halpern and Holstein~\cite{1941_Halpern_PhysRev}. Here, spin-polarized neutrons are considered as they travel through a sample with randomly oriented ferromagnetic domains. When the rotation of the neutron spin is small for each domain, i.e., when $\omega_{\mathrm{L}}t \ll 2\pi$, where $\omega_{\mathrm{L}}$ is the Larmor frequency and $t$ the time required to transit a domain, the temperature dependence of the polarization of the transmitted neutrons may be approximated as \begin{equation}\label{equ1} P(T) = \mathrm{exp}\left[-\frac{1}{3}\gamma^{2}B^{2}_{\mathrm{0}}(T)\frac{d\delta}{v^{2}}\right]. \end{equation} Here, $\gamma$ is the gyromagnetic ratio of the neutron, $B_{\mathrm{0}}(T)$ is the temperature-dependent average magnetic flux per domain, $d$ is the sample thickness along the flight direction, $\delta$ is the mean magnetic domain size, and $v$ is the speed of the neutrons. In the mean-field approximation, the temperature dependence of the magnetic flux per domain is given by \begin{equation}\label{equ2} B_{\mathrm{0}}(T) = \mu_{0} M_{0} \left(1 - \frac{T}{T_{\mathrm{C}}}\right)^{\beta} \end{equation} where $\mu_{0}$ is the vacuum permeability, $M_{0}$ is the spontaneous magnetization in each domain, and $\beta$ is the critical exponent.
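To make the use of Eqs.~\eqref{equ1} and \eqref{equ2} concrete, a minimal numerical sketch is given below; all parameter values are assumptions of a plausible order of magnitude (thickness, domain size, magnetization, and Curie temperature are placeholders rather than fitted values from this work).
\begin{verbatim}
import numpy as np

GAMMA_N = 1.832e8        # neutron gyromagnetic ratio (rad s^-1 T^-1)
MU_0 = 4.0e-7 * np.pi    # vacuum permeability (T m / A)
H_PLANCK = 6.626e-34     # Planck constant (J s)
M_NEUTRON = 1.675e-27    # neutron mass (kg)

def polarization(T, Tc, M0, d, delta, lam, beta=0.5):
    """Halpern-Holstein polarization, Eqs. (1)-(2), for T < Tc."""
    v = H_PLANCK / (M_NEUTRON * lam)          # neutron speed from de Broglie wavelength
    B0 = MU_0 * M0 * (1.0 - T / Tc) ** beta   # mean-field flux per domain, Eq. (2)
    return np.exp(-GAMMA_N**2 * B0**2 * d * delta / (3.0 * v**2))

# Hypothetical parameters: 0.5 mm sample, 1 micrometer domains, M0 = 1e5 A/m,
# Tc = 60 K, lambda = 4.13 Angstrom (not fitted values from this work).
T = np.array([10.0, 30.0, 50.0])
print(polarization(T, Tc=60.0, M0=1.0e5, d=0.5e-3, delta=1.0e-6, lam=4.13e-10))
\end{verbatim}
For these assumed numbers the polarization stays close to unity, comparable in scale to the partial depolarization discussed above for the smaller iron concentrations; fitting the measured curves, as described in the following, amounts to adjusting $T_{\mathrm{C}}$ and $\delta$ in this expression.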
In the following, we use the magnetization value measured at 2~K in a magnetic field of 250~mT as an approximation for $M_{0}$ and set $\beta = 0.5$, i.e., the textbook value for a mean-field ferromagnet. Note that $M_{0}$ more than triples when increasing the iron concentration from $x = 0.15$ to $x = 0.22$, as shown in Tab.~\ref{tab:2}, suggesting that correlations become increasingly important. Fitting the temperature dependence of the polarization for temperatures above $T_{\mathrm{g}}$ according to Eq.~\eqref{equ1} yields mean values for the Curie temperature $T_{\mathrm{C}}$ and the domain size $\delta$, cf.\ solid orange lines in Fig.~\ref{fig:5} tracking the experimental data. The results of the fitting are summarized in Tab.~\ref{tab:2}. The values of $T_{\mathrm{C}}$ inferred this way are typically slightly higher than those inferred from the ac susceptibility, cf.\ Tab.~\ref{tab:1}. This shift could be related to depolarization caused by slow ferromagnetic fluctuations prevailing at temperatures just above the onset of static magnetic order. Yet, both values of $T_{\mathrm{C}}$ are in reasonable agreement. The mean size of ferromagnetically aligned domains or clusters, $\delta$, increases with increasing $x$, reflecting the increased density of iron atoms. As will be shown below, this general trend is corroborated also by an analysis of the Mydosh parameter indicating that Fe$_{x}$Cr$_{1-x}$ transforms from a cluster glass for small $x$ to a superparamagnet for larger $x$. \begin{table} \caption{\label{tab:2}Summary of the Curie temperature, $T_{\mathrm{C}}$, and the mean domain size, $\delta$, in Fe$_{x}$Cr$_{1-x}$ as inferred from neutron depolarization studies. Also shown is the magnetization measured at a temperature of 2~K in a magnetic field of 250~mT, ${M_{0}}$.} \begin{ruledtabular} \begin{tabular}{cccc} $x$ & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) & $\delta$ ($\upmu$m) & $M_{0}$ ($10^{5}$A/m) \\ \hline 0.15 & - & - & 0.70 \\ 0.16 & $61 \pm 10$ & $0.61 \pm 0.10$ & 0.84 \\ 0.17 & $47 \pm 2$ & $2.12 \pm 0.15$ & 0.96 \\ 0.18 & $73 \pm 1$ & $3.17 \pm 0.07$ & 1.24 \\ 0.19 & $93 \pm 1$ & $3.47 \pm 0.02$ & 1.64 \\ 0.20 & $84 \pm 1$ & $4.67 \pm 0.03$ & 1.67 \\ 0.21 & $101 \pm 1$ & $3.52 \pm 0.03$ & 2.18 \\ 0.22 & $100 \pm 1$ & $5.76 \pm 0.13$ & 2.27\\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Specific heat, high-field magnetometry, and electrical resistivity} \begin{figure} \includegraphics{figure6} \caption{\label{fig:6}Low-temperature properties of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Specific heat as a function of temperature. Zero-field data (black curve) and an estimate for the phonon contribution using the Debye model (gray curve) are shown. Inset: Specific heat at high temperatures approaching the Dulong--Petit limit. (b)~Specific heat divided by temperature. After subtraction of the phonon contribution, magnetic contributions at low temperatures are observed (green curve). (c)~Magnetic contribution to the entropy obtained by numerical integration. (d)~Magnetization as a function of field up to $\pm9$~T for different temperatures. (e)~Electrical resistivity as a function of temperature for different applied field values.} \end{figure} To obtain a complete picture of the low-temperature properties of Fe$_{x}$Cr$_{1-x}$, the magnetic properties at low fields presented so far are complemented by measurements of the specific heat, high-field magnetization, and electrical resistivity on the example of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. 
The specific heat as a function of temperature measured in zero magnetic field is shown in Fig.~\ref{fig:6}(a). At high temperatures, the specific heat approaches the Dulong--Petit limit of $C_{\mathrm{DP}} = 3R = 24.9~\mathrm{J}\,\mathrm{mol}^{-1}\mathrm{K}^{-1}$, as illustrated in the inset. With decreasing temperature, the specific heat monotonically decreases, lacking pronounced anomalies at the different characteristic temperatures. The specific heat at high temperatures is dominated by the phonon contribution that is described well by a Debye model with a Debye temperature $\mathit{\Theta}_{\mathrm{D}} = 460$~K, which is smaller than the values reported for $\alpha$-iron (477~K) and chromium (606~K)~\cite{2003_Tari_Book}. As shown in terms of the specific heat divided by temperature, $C/T$, in Fig.~\ref{fig:6}(b), the subtraction of this phonon contribution from the measured data highlights the presence of magnetic contributions to the specific heat below ${\sim}$30~K (green curve). As is typical for spin-glass systems, no sharp signatures are observed and the total magnetic contribution to the specific heat is rather small~\cite{2015_Mydosh_RepProgPhys}. This finding is substantiated by the entropy $S$ as calculated by means of extrapolating $C/T$ to zero temperature and numerically integrating \begin{equation} S(T) = \int_{0}^{T}\frac{C(T')}{T'}\,\mathrm{d}T'. \end{equation} As shown in Fig.~\ref{fig:6}(c), the magnetic contribution to the entropy released up to 30~K amounts to about $0.04~R\ln2$, which corresponds to only a small fraction of the total magnetic moment (a numerical sketch of this integration is given at the end of this subsection). Insights into the evolution of the magnetic properties under high magnetic fields may be inferred from the magnetization as measured up to $\pm9$~T, shown in Fig.~\ref{fig:6}(d). The magnetization is unsaturated up to the highest fields studied and qualitatively unchanged under increasing temperature, only moderately decreasing in absolute value. The value of 0.22~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ obtained at 2~K and 9~T corresponds to a moment of 1.46~$\mu_{\mathrm{B}}/\mathrm{Fe}$, i.e., the moment per iron atom in Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ stays below the value of 2.2~$\mu_{\mathrm{B}}/\mathrm{Fe}$ observed in $\alpha$-iron~\cite{2001_Blundell_Book}. Finally, the electrical resistivity as a function of temperature is shown in Fig.~\ref{fig:6}(e). As is typical for a metal, the resistivity is of the order of several tens of $\upmu\Omega\,\mathrm{cm}$ and, starting from room temperature, decreases essentially linearly with temperature. However, around 60~K, i.e., well above the onset of magnetic order, a minimum is observed before the resistivity increases towards low temperatures. Such an incipient divergence of the resistivity with decreasing temperature due to magnetic impurities is reminiscent of single-ion Kondo systems~\cite{1934_deHaas_Physica, 1964_Kondo_ProgTheorPhys, 1987_Lin_PhysRevLett, 2012_Pikul_PhysRevLett}. When a magnetic field is applied perpendicular to the current direction, this low-temperature increase is suppressed and a point of inflection emerges around 100~K. This sensitivity with respect to magnetic fields clearly indicates that the additional scattering at low temperatures is of magnetic origin. Qualitatively, the present transport data are in agreement with earlier reports on Fe$_{x}$Cr$_{1-x}$ for $0 \leq x \leq 0.112$~\cite{1966_Arajs_JApplPhys}.
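The entropy shown in Fig.~\ref{fig:6}(c) follows from a straightforward cumulative integration of $C/T$ after the phonon subtraction; a minimal sketch using a hypothetical toy curve (not the measured data of this work) illustrates the procedure and the comparison with $R\ln2$.
\begin{verbatim}
import numpy as np

# Hypothetical magnetic specific heat divided by temperature, C/T (J mol^-1 K^-2),
# on a low-temperature grid; a toy stand-in for the green curve of Fig. 6(b).
T = np.linspace(0.0, 30.0, 301)
c_over_t = 0.02 * np.exp(-T / 10.0)

# S(T) = int_0^T (C/T') dT', evaluated cumulatively with the trapezoidal rule.
segments = 0.5 * (c_over_t[1:] + c_over_t[:-1]) * np.diff(T)
S = np.concatenate(([0.0], np.cumsum(segments)))

R_LN2 = 8.314 * np.log(2.0)   # entropy of a two-level system per mole
print(f"S(30 K) = {S[-1]:.3f} J mol^-1 K^-1 = {S[-1] / R_LN2:.2f} R ln 2")
\end{verbatim}
For the toy curve chosen here the released entropy amounts to a few percent of $R\ln2$, of the same order as the experimental value quoted above.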
\section{Characterization of the spin-glass behavior} \label{sec:discussion} In spin glasses, random site occupancy of magnetic moments, competing interactions, and geometric frustration lead to a collective freezing of the magnetic moments below a freezing temperature $T_{\mathrm{g}}$. The resulting irreversible metastable magnetic state shares many analogies with structural glasses. Depending on the density of magnetic moments, different types of spin glasses may be distinguished. For small densities, the magnetic properties may be described in terms of single magnetic impurities diluted in a nonmagnetic host, referred to as canonical spin-glass behavior. These systems are characterized by strong interactions, and the cooperative spin freezing represents a phase transition. For larger densities, clusters with local magnetic order form and frustration arises between neighboring clusters, referred to as cluster-glass behavior, which develops superparamagnetic characteristics as the cluster size increases. In these systems, the inter-cluster interactions are rather weak and the spin freezing takes place in the form of a gradual blocking. When the density of magnetic moments surpasses the percolation limit, long-range magnetic order may be expected. For compositions close to the percolation limit, so-called reentrant spin-glass behavior may be observed. In such cases, as a function of decreasing temperature, first a transition from a paramagnetic to a magnetically ordered state occurs before a spin-glass state emerges at lower temperatures. As both the paramagnetic and the spin-glass state lack long-range magnetic order, the expression `reentrant' alludes to the disappearance of long-range magnetic order after a finite temperature interval and consequently the re-emergence of a state without long-range order~\cite{1993_Mydosh_Book}. The metastable nature of spin glasses manifests itself in terms of a pronounced history dependence of both the microscopic spin arrangement and the macroscopic magnetic properties, translating into four key experimental observations: (i) a frequency-dependent shift of the maximum at $T_{\mathrm{g}}$ in the ac susceptibility, (ii) a broad maximum in the specific heat located 20\% to 40\% above $T_{\mathrm{g}}$, (iii) a splitting of the magnetization for different cooling histories, and (iv) a time-dependent creep of the magnetization~\cite{2015_Mydosh_RepProgPhys}. The splitting of the magnetization and the broad signature in the specific heat were addressed in Figs.~\ref{fig:5} and \ref{fig:6}. In the following, the frequency dependence of the ac susceptibility will be analyzed in three different ways, namely by means of the Mydosh parameter, power-law fits, and the Vogel--Fulcher law, permitting a classification of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and of its change as a function of composition. \begin{figure} \includegraphics[width=0.97\linewidth]{figure7} \caption{\label{fig:7}Imaginary part of the zero-field ac susceptibility as a function of temperature for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ measured at different excitation frequencies $f$. Analysis of the frequency-dependent shift of the spin freezing temperature $T_{\mathrm{g}}$ provides insights into the microscopic nature of the spin-glass state.} \end{figure} In the present study, the freezing temperature $T_{\mathrm{g}}$ was inferred from a maximum in the imaginary part of the ac susceptibility as measured at an excitation frequency of 1~kHz.
However, in a spin glass the temperature below which spin freezing is observed depends on the excitation frequency $f$, as illustrated in Fig.~\ref{fig:7} for the example of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. Under increasing frequency, the imaginary part remains qualitatively unchanged but increases in absolute size and the maximum indicating $T_{\mathrm{g}}$ shifts to higher temperatures. Analyzing this shift in turn provides information on the microscopic nature of the spin-glass behavior. The first and perhaps most straightforward approach utilizes the empirical Mydosh parameter $\phi$, defined as \begin{equation} \phi = \left[\frac{T_{\mathrm{g}}(f_{\mathrm{high}})}{T_{\mathrm{g}}(f_{\mathrm{low}})} - 1\right] \left[\ln\left(\frac{f_{\mathrm{high}}}{f_{\mathrm{low}}}\right)\right]^{-1} \end{equation} where $T_{\mathrm{g}}(f_{\mathrm{high}})$ and $T_{\mathrm{g}}(f_{\mathrm{low}})$ are the freezing temperatures as experimentally observed at high and low excitation frequencies, $f_{\mathrm{high}}$ and $f_{\mathrm{low}}$, respectively~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys}. Small shifts associated with Mydosh parameters below 0.01 are typical for canonical spin glasses such as Mn$_{x}$Cu$_{1-x}$, while cluster glasses exhibit intermediate values up to 0.1. Values exceeding 0.1 suggest superparamagnetic behavior~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys, 1980_Tholence_SolidStateCommun, 1986_Binder_RevModPhys}. \begin{figure} \includegraphics[width=1.0\linewidth]{figure8} \caption{\label{fig:8}Evolution of the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$. (a)~Schematic depiction of the five different sequences of magnetic regimes observed as a function of temperature for different $x$. The following regimes are distinguished: paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), spin-glass~(SG). A precursor phenomenon~(PC) may be observed between FM and SG. (b)~Mydosh parameter $\phi$ as a function of the iron concentration $x$, allowing the spin-glass behavior to be classified as canonical ($\phi \leq 0.01$, gray shading), cluster-glass ($0.01 \leq \phi \leq 0.1$, yellow shading), or superparamagnetic ($\phi \geq 0.1$, brown shading). } \end{figure} \begin{table*} \caption{\label{tab:3}Parameters inferred from the analysis of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$, namely the Mydosh parameter $\phi$, the zero-frequency extrapolation of the spin freezing temperature $T_\mathrm{g}(0)$, the characteristic relaxation time $\tau_{0}$, the critical exponent $z\nu$, the Vogel--Fulcher temperature $T_{0}$, and the cluster activation energy $E_{a}$.
The errors were determined by means of Gaussian error propagation ($\phi$), the distance of neighboring data points ($T_\mathrm{g}(0)$), and statistical deviations of the linear fits ($\tau_{0}$, $z\nu$, $T_{0}$, and $E_{a}$).} \begin{ruledtabular} \begin{tabular}{ccccccc} $x$ & $\phi$ & $T_\mathrm{g}(0)$ (K) & $\tau_{0}$ ($10^{-6}$~s) & $z\nu$ & $T_{0}$ (K) & $E_{a}$ (K) \\ \hline 0.05 & - & - & - & - & - & - \\ 0.10 & $0.064 \pm 0.011$ & - & - & - & - & - \\ 0.15 & $0.080 \pm 0.020$ & $9.1 \pm 0.1$ & $0.16 \pm 0.03$ & $5.0 \pm 0.1$ & $8.5 \pm 0.1$ & $19.9 \pm 0.8$ \\ 0.16 & $0.100 \pm 0.034$ & $13.4 \pm 0.1$ & $1.73 \pm 0.15$ & $2.2 \pm 0.0$ & $11.9 \pm 0.1$ & $14.4 \pm 0.3$ \\ 0.17 & $0.107 \pm 0.068$ & $18.3 \pm 0.1$ & $6.13 \pm 1.52$ & $1.5 \pm 0.1$ & $16.3 \pm 0.3$ & $12.8 \pm 0.9$ \\ 0.18 & $0.108 \pm 0.081$ & $14.5 \pm 0.1$ & $1.18 \pm 0.46$ & $7.0 \pm 0.5$ & $16.9 \pm 0.5$ & $24.2 \pm 2.3$ \\ 0.19 & $0.120 \pm 0.042$ & $14.2 \pm 0.1$ & $0.47 \pm 0.15$ & $4.5 \pm 0.2$ & $14.6 \pm 0.4$ & $16.3 \pm 1.4$ \\ 0.20 & $0.125 \pm 0.043$ & $13.5 \pm 0.1$ & $1.29 \pm 0.34$ & $4.1 \pm 0.2$ & $13.6 \pm 0.3$ & $18.8 \pm 1.3$ \\ 0.21 & $0.138 \pm 0.048$ & $9.5 \pm 0.1$ & $1.67 \pm 0.21$ & $4.7 \pm 0.1$ & $10.3 \pm 0.4$ & $12.0 \pm 1.3$ \\ 0.22 & $0.204 \pm 0.071$ & $11.7 \pm 0.1$ & $2.95 \pm 0.80$ & $2.6 \pm 0.1$ & $11.3 \pm 0.4$ & $11.3 \pm 1.2$ \\ 0.25 & $0.517 \pm 0.180$ & $2.8 \pm 0.1$ & $75.3 \pm 5.34$ & $1.8 \pm 0.1$ & - & - \\ 0.30 & - & - & - & - & - & - \\ \end{tabular} \end{ruledtabular} \end{table*} As summarized in Tab.~\ref{tab:3} and illustrated in Fig.~\ref{fig:8}, the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$ monotonically increases as a function of increasing iron concentration. For small $x$, the values are characteristic of cluster-glass behavior, while for large $x$ they lie well within the regime of superparamagnetic behavior. This evolution reflects the increase of the mean size of ferromagnetic clusters as inferred from the analysis of the neutron depolarization data. \begin{figure} \includegraphics[width=1.0\linewidth]{figure9} \caption{\label{fig:9}Analysis of spin-glass behavior using power-law fits and the Vogel--Fulcher law for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Logarithm of the relaxation time as a function of the logarithm of the normalized shift of the freezing temperature. The red solid line is a power-law fit from which the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred. Inset: Goodness of fit for different estimated zero-frequency extrapolations of the freezing temperature, $T_{\mathrm{g}}^{\mathrm{est}}(0)$. The value $T_{\mathrm{g}}(0)$ used in the main panel is defined as the temperature of highest $R^{2}$. (b)~Spin freezing temperature as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency. The red solid line is a fit according to the Vogel--Fulcher law from which the cluster activation energy $E_{a}$ and the Vogel--Fulcher temperature $T_{0}$ are inferred.} \end{figure}
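As an aside, the Mydosh parameter quoted above follows directly from the freezing temperatures measured at the lowest and highest excitation frequencies; a minimal sketch is given below, with placeholder numbers chosen for illustration only (they are not data from this work).
\begin{verbatim}
import numpy as np

def mydosh_parameter(tg_low, tg_high, f_low, f_high):
    """Empirical Mydosh parameter from T_g at two excitation frequencies."""
    return (tg_high / tg_low - 1.0) / np.log(f_high / f_low)

# Hypothetical example: T_g shifts from 11.0 K at 10 Hz to 13.2 K at 10 kHz,
# i.e., the frequency window used in this study.
phi = mydosh_parameter(tg_low=11.0, tg_high=13.2, f_low=10.0, f_high=1.0e4)
print(f"phi = {phi:.3f}")   # ~0.03, i.e., within the cluster-glass range
\end{verbatim}
The second approach employs the standard theory for dynamical scaling near phase transitions to $T_{\mathrm{g}}$~\cite{1977_Hohenberg_RevModPhys, 1993_Mydosh_Book}.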
The relaxation time $\tau = \frac{1}{2\pi f}$ is expressed in terms of the power law \begin{equation} \tau = \tau_{0} \left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]^{-z\nu} \end{equation} where $\tau_{0}$ is the characteristic relaxation time of a single moment or cluster, $T_{\mathrm{g}}(0)$ is the zero-frequency limit of the spin freezing temperature, and $z\nu$ is the critical exponent. In the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$, one obtains values such as $\tau_{0} = 10^{-13}~\mathrm{s}$, $T_{\mathrm{g}}(0) = 27.5~\mathrm{K}$, and $z\nu = 5$~\cite{1985_Souletie_PhysRevB}. The corresponding analysis is illustrated in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. First, the logarithm of the ratio of relaxation time and characteristic relaxation time, $\ln(\frac{\tau}{\tau_{0}})$, is plotted as a function of the logarithm of the normalized shift of the freezing temperature, $\ln\left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]$, for a series of estimated values of the zero-frequency extrapolation $T_{\mathrm{g}}^{\mathrm{est}}(0)$. For each value of $T_{\mathrm{g}}^{\mathrm{est}}(0)$ the data are fitted linearly and the goodness of fit is compared by means of the $R^{2}$ coefficient, cf.\ inset of Fig.~\ref{fig:9}(a). The best approximation for the zero-frequency freezing temperature, $T_{\mathrm{g}}(0)$, is defined as the temperature of highest $R^{2}$. Finally, the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred from a linear fit to the experimental data using this value $T_{\mathrm{g}}(0)$, as shown in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. The same analysis was carried out for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior, yielding the parameters summarized in Tab.~\ref{tab:3}. Characteristic relaxation times of the order of $10^{-6}~\mathrm{s}$ are inferred, i.e., several orders of magnitude larger than those observed in canonical spin glasses and consistent with the presence of comparably large magnetic clusters, as may be expected for the large values of $x$. Note that these characteristic times are also distinctly larger than the $10^{-12}~\mathrm{s}$ to $10^{-8}~\mathrm{s}$ that neutrons require to traverse the magnetic clusters in the depolarization experiments. Consequently, the clusters appear quasi-static to the neutrons, which in turn is a prerequisite for the observation of net depolarization across a macroscopic sample. The critical exponents range from 1.5 to 7.0, i.e., within the range expected for glassy systems~\cite{1980_Tholence_SolidStateCommun, 1985_Souletie_PhysRevB}. The lack of systematic evolution of both $\tau_{0}$ and $z\nu$ as a function of iron concentration $x$ suggests that these parameters in fact may be rather sensitive to details of the microscopic structure, potentially varying substantially between individual samples.
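A compact sketch of the fitting procedure described above, using hypothetical $(f, T_{\mathrm{g}})$ pairs rather than the measured values, is given below; with the exponent convention of the power law above, the slope of the linear fit equals $-z\nu$ and the intercept gives $\ln\tau_{0}$.
\begin{verbatim}
import numpy as np

# Hypothetical freezing temperatures T_g(f) in K at excitation frequencies f in Hz
# (placeholders for illustration, not the measured data of this work).
f = np.array([10.0, 100.0, 1000.0, 10000.0])
tg = np.array([9.6, 10.0, 10.5, 11.1])
tau = 1.0 / (2.0 * np.pi * f)     # relaxation time associated with each frequency

best = None
for tg0 in np.arange(8.0, 9.55, 0.05):   # trial zero-frequency freezing temperatures
    x = np.log(tg / tg0 - 1.0)
    y = np.log(tau)
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2    # goodness of the linear fit
    if best is None or r2 > best[0]:
        best = (r2, tg0, slope, intercept)

r2, tg0, slope, intercept = best
znu = -slope                  # critical exponent z*nu (slope of the fit is -z*nu)
tau0 = np.exp(intercept)      # characteristic relaxation time in seconds
print(f"T_g(0) = {tg0:.2f} K, z*nu = {znu:.1f}, tau0 = {tau0:.1e} s, R^2 = {r2:.4f}")
\end{verbatim}
The third approach uses the Vogel--Fulcher law, developed to describe the viscosity of supercooled liquids and glasses, to interpret the properties around the spin freezing temperature $T_{\mathrm{g}}$~\cite{1993_Mydosh_Book, 1925_Fulcher_JAmCeramSoc, 1980_Tholence_SolidStateCommun, 2013_Svanidze_PhysRevB}.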
Calculating the characteristic frequency $f_{0} = \frac{1}{2\pi\tau_{0}}$ from the characteristic relaxation time $\tau_{0}$ as determined above, the Vogel--Fulcher law for the excitation frequency $f$ reads \begin{equation} f = f_{0} \exp\left\lbrace-\frac{E_{a}}{k_{\mathrm{B}}[T_{\mathrm{g}}(f)-T_{0}]}\right\rbrace \end{equation} where $k_{\mathrm{B}}$ is the Boltzmann constant, $E_{a}$ is the activation energy for aligning a magnetic cluster by the applied field, and $T_{0}$ is the Vogel--Fulcher temperature providing a measure of the strength of the cluster interactions. As a point of reference, it is interesting to note that values such as $E_{a}/k_{\mathrm{B}} = 11.8~\mathrm{K}$ and $T_{0} = 26.9~\mathrm{K}$ are observed in the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$~\cite{1985_Souletie_PhysRevB}. For each composition Fe$_{x}$Cr$_{1-x}$, the spin freezing temperature $T_{\mathrm{g}}(f)$ is plotted as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency, $\frac{1}{\ln(f/f_{0})}$, as shown in Fig.~\ref{fig:9}(b) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. A linear fit to the experimental data allows $E_{a}$ and $T_{0}$ to be inferred from the slope and the intercept. The corresponding values for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior are summarized in Tab.~\ref{tab:3}. All values of $T_{0}$ and $E_{a}$ are of the order of 10~K and positive, indicating the presence of strongly correlated clusters~\cite{2012_Anand_PhysRevB, 2011_Li_ChinesePhysB, 2013_Svanidze_PhysRevB}. Both $T_{0}$ and $E_{a}$ roughly follow the evolution of the spin freezing temperature $T_{\mathrm{g}}$, reaching their maximum values around $x = 0.17$ or $x = 0.18$. \section{Conclusions} \label{sec:conclusion} In summary, a comprehensive study of the magnetic properties of polycrystalline Fe$_{x}$Cr$_{1-x}$ in the composition range $0.05 \leq x \leq 0.30$ was carried out by means of x-ray powder diffraction as well as measurements of the magnetization, ac susceptibility, and neutron depolarization, complemented by specific heat and electrical resistivity data for $x = 0.15$. As our central result, we present a detailed composition--temperature phase diagram based on the combination of a large number of quantities. Under increasing iron concentration $x$, antiferromagnetic order akin to pure Cr is suppressed above $x = 0.15$, followed by the emergence of weak magnetic order developing distinct ferromagnetic character above $x = 0.18$. At low temperatures, a wide dome of reentrant spin-glass behavior is observed for $0.10 \leq x \leq 0.25$, preceded by a precursor phenomenon. Analysis of the neutron depolarization data and the frequency-dependent shift in the ac susceptibility indicates that with increasing $x$ the size of ferromagnetically ordered clusters increases and that the character of the spin-glass behavior changes from a cluster glass to a superparamagnet. \acknowledgments We wish to thank P.~B\"{o}ni and S.~Mayr for fruitful discussions and assistance with the experiments. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under TRR80 (From Electronic Correlations to Functionality, Project No.\ 107745057, Project E1) and the excellence cluster MCQST under Germany's Excellence Strategy EXC-2111 (Project No.\ 390814868).
Financial support by the Bundesministerium f\"{u}r Bildung und Forschung (BMBF) through Project No.\ 05K16WO6 as well as by the European Research Council (ERC) through Advanced Grants No.\ 291079 (TOPFIT) and No.\ 788031 (ExQuiSid) is gratefully acknowledged. G.B., P.S., S.S., M.S., and P.J.\ acknowledge financial support through the TUM Graduate School.
proofpile-arXiv_065-242
\section{Meta-Learning for Optimal Agent} The trade-off described above begs the following question: How can it be managed in an environment with unknown properties? That is, how does an agent decide whether to pursue single-task or multitask training in a new environment so that it can learn the tasks most efficiently while maximizing the rewards it receives? Suppose an agent must learn how to optimize its performance on a given set of tasks in an environment over $\tau$ trials. At trial $t$, for each task, the agent receives a set of inputs $\mathbf{X} = \{\mathbf{x}_k\}_{k=1}^{K}$ and is expected to produce the correct labels $\mathbf{Y} = \{\mathbf{y}_k\}_{k=1}^{K}$ corresponding to the task. Assuming that each is a classification task, the agent's reward is its accuracy for the inputs, i.e. $R_t = \frac{1}{K} \sum_{k=1}^{K} \mathbbm{1}_{\hat{\mathbf{y}}_k = \mathbf{y}_k}$ where $\hat{\mathbf{y}}_k$ is the predicted label by the agent for each input $\mathbf{x}_k$. On each trial, the agent must perform all the tasks, and it can choose to do so either serially (i.e. by single-tasking) or simultaneously (i.e. by multitasking). After completion of the task and observation of the rewards, the agent also receives the correct labels $\mathbf{Y}$ for the tasks in order to train itself to improve task performance. Finally, assume that the agent's performance is measured across these trials through the entire course of learning and its goal is to maximize the sum of these rewards across all tasks. To encode the time cost of single-tasking execution, we assume the environment has some unknown \emph{serialization} cost $c$ that determines the cost of performing tasks serially, i.e. one at a time. We assume that the reward for each task when done serially is $\frac{R_t}{1 + c}$ where $R_t$ is the reward as defined before. The serialization cost therefore discounts the reward in a multiplicative fashion for single-tasking. We assume that $0 \le c \le 1$ so that $c=0$ indicates there is no cost enforced for serial performance whereas $c=1$ indicates that the agent receives half the reward for all the tasks by performing them in sequence. Note that the training strategy the agent picks not only affects the immediate rewards it receives but also the future rewards, as it influences how effectively the agent learns the tasks to improve its performance in the future. Thus, depending on the serialization cost, the agent may receive lower reward for picking single-tasking but gains a benefit in learning speed that may or may not make up for it over the course of the entire learning episode. This question is at the heart of the trade-off the agent must navigate to make the optimal decision. We note that this is one simple but intuitive way to encode the cost of doing tasks serially but other mechanisms are possible. \subsection{Approximate Bayesian Agent} We assume that, on each trial, the agent has the choice between two training strategies to execute and learn the given tasks - by single-tasking or multitasking. The method we describe involves, on each trial, the agent modeling the reward dynamics under each training strategy for each task and picking the strategy that is predicted to give the highest discounted total future reward across all tasks. To model the reward progress under each strategy, we first define the reward function for each strategy, $f_{A,i}(t)$, which gives the reward for a task $i$ under strategy $A$ assuming strategy $A$ has been selected $t$ times. 
The reward function captures the effects of both the strategy's learning dynamics and the unknown serialization cost (if it exists for the strategy). Here, $A \in \{S, M\}$ where $S$ represents the single-tasking strategy and $M$ represents the multitasking strategy. We can use the reward function to get the reward for a task $i$ at trial $t'$ when selecting strategy $A$. Let $a_1, a_2, \ldots, a_{t' - 1}$ be the strategies picked at each trial until trial $t'$. Then, \[ R^{(A,i)}_{t'} = f_{A,i}\left( \sum_{t=1}^{t'-1} \mathbbm{1}_{a_t = A} \right) \] is the reward for task $i$ at trial $t'$ assuming we pick strategy $A$. Given the reward for each task, the agent can get the total discounted future reward for a strategy $A$ from trial $t'$ onward assuming we repeatedly select strategy $A$: \[ R^{(A)}_{\ge t'} = \sum_{t=t'}^{\tau} \mu(t) \left( \sum_{i=1}^{N} R^{(A,i)}_{t} \right), \] where $\mu(t)$ is the temporal discounting function, $N$ is the total number of tasks, and $\tau$ is the total number of trials the agent has to maximize its reward on the tasks. We now discuss how the agent maintains its estimate of each strategy's reward function for each task. The reward function is modeled as a sigmoidal function, the parameters of which are updated on each trial. Specifically, for a strategy $A$ and task $i$, using parameters $\theta_{A,i} = \{w_1, b_1, w_2, b_2 \}$, we model the reward function as $f_{A,i}(t) = \sigma(w_2 \cdot \sigma(w_1 \cdot t + b_1) + b_2)$. We place a prior over the parameters $p(\theta_{A,i})$ and compute the posterior at each trial $t'$ over the parameters $p(\theta_{A,i} | D_{t'})$, where $D_{t'}$ is the set of rewards observed until trial $t'$ using strategy $A$ on task $i$. Because the exact posterior is difficult to compute, we calculate an approximate posterior $q(\theta_{A,i} | D_{t'})$ using variational inference \cite{wainwright2008graphical}. Specifically, we use Stein variational gradient descent (SVGD) \cite{liu2016stein}, a deterministic variational inference method that approximates the posterior using a set of particles that represent samples from the approximate posterior. The benefit of using SVGD is that it allows the number of particles to be selected so as to increase the complexity of the approximate posterior, while ensuring that the time it takes to compute this approximation remains practical. Because the posterior needs to be calculated repeatedly during training of the network, we found SVGD to offer the best properties: the approximate posterior is much quicker to compute than with MCMC techniques, while allowing for a more complex approximation than a simple Gaussian variational approximation to the posterior. At each trial $t'$, the agent uses its estimate of the total discounted future reward for single-tasking and multitasking ($R^{(S)}_{\ge t'}$ and $R^{(M)}_{\ge t'}$, respectively) to decide which strategy to use. This can be thought of as a two-armed bandit problem, in which the agent needs to adequately explore and exploit to decide which arm, or strategy, is better. Choosing the single-tasking training regimen may give high initial reward (because of the learning speed benefit), but choosing multitasking may be the better long-term strategy because it does not suffer from any serialization cost.
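As a rough illustration of this bookkeeping, the snippet below sketches the parametric reward function and the total discounted future reward of one strategy; the particle set standing in for the SVGD posterior, the exponential discount, and all numerical values are placeholder assumptions rather than the settings used in the experiments.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reward_model(t, theta):
    # f_{A,i}(t) = sigma(w2 * sigma(w1*t + b1) + b2)
    w1, b1, w2, b2 = theta
    return sigmoid(w2 * sigmoid(w1 * t + b1) + b2)

def future_reward(thetas, n_selected, t_now, t_total, gamma=0.999):
    # Discounted future reward if this strategy is picked on every remaining
    # trial; thetas holds one parameter sample per task (e.g. one SVGD
    # particle each), n_selected counts how often the strategy was used so far.
    total = 0.0
    for step in range(t_total - t_now):
        mu = gamma ** step                 # placeholder discount mu(t)
        total += mu * sum(reward_model(n_selected + step, th)
                          for th in thetas)
    return total

# Toy usage with hypothetical particles for two tasks
particles = [np.array([0.002, -2.0, 10.0, -5.0]),
             np.array([0.001, -1.5,  9.0, -4.5])]
print(future_reward(particles, n_selected=120, t_now=500, t_total=5000))
\end{verbatim}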
Thompson sampling \cite{thompson1933likelihood,chapelle2011empirical,gershman2018deconstructing} is an elegant solution to the explore-exploit problem, involving sampling from the posterior over the parameters and making decisions greedily with respect to the sample. It provides initial exploration, as the posterior variance is large at the start because of the lack of data, and then turns to exploitation once the posterior is more confident after seeing enough data. On each trial, we use Thompson sampling to pick between the training strategies by sampling from the approximate posterior over parameters for each strategy, calculating the total discounted future reward for each strategy according to the sampled parameters, and picking the strategy corresponding to the higher reward. Note that in practice we do not re-estimate the posterior on each trial (as one new reward will not change the posterior much) but instead do it periodically when enough new rewards have been observed. \begin{comment} We tackle this trade-off using the classic Boltzmann exploration method, where an action is sampled according to the distribution computed using the softmax over the future reward estimates for each strategy. The temperature parameter for the softmax is annealed over time to $0$ so that there is a time-period of initial exploration after which the decisions become greedy. \end{comment} \section{Supplementary Material} \subsection{Experimental Details} Our data consists of $29,250$ data-points generated in AirSim. Each data-point consists of examples for each input (GPS and image) and labels for all four tasks. The GPS-input is two-dimensional whereas the image-input is $84 \times 84$. The specific network architecture we use involves processing the GPS-input using a single-layer neural network with $50$ hidden units and the image-input using a $4$-layer convolutional network with $32$ feature maps in each layer. The final hidden-layer representations for each type of network are then mapped to the different outputs using a fully-connected layer. All networks are trained using SGD with a learning rate of $0.1$ that is decayed across the trials. At each trial, the network receives $160$ items for which it is trained on all $4$ tasks either via single-tasking or multitasking. For single-tasking, the network is trained on each task one after another, where the data for each task is treated as one mini-batch. For multitasking, the network is trained on performing Tasks $1$ and $4$ concurrently and then on performing Tasks $2$ and $3$ concurrently, where again the data for each multitasking execution is treated as one mini-batch. We measure the learning speed for each task by measuring the accuracy on the data for a task before the network is updated to be trained on that data. For the results shown in Figures $4$ and $5$ in the main text, the networks are trained for $20,000$ trials. The error bars represent $95 \%$ confidence intervals computed using $10$ different network initializations, where each initialization involves using a different random seed when sampling weights according to the Xavier initialization scheme for both convolutional and fully-connected layers. Lastly, all models were trained on an Nvidia Titan X GPU. The multitasking error for a network is computed by measuring how much worse the average performance over all the data for a task is when the task is executed in multitasking fashion vs when it is executed as a single task.
Thus, for example, to get the multitasking error for Task $1$, we would measure the error in average performance when executing the task in multitasking fashion (where we execute Tasks $1$ and $4$ concurrently) vs executing the task just by itself. The amount of sharing of representations between two tasks is computed by taking the average representations for the two tasks (when executed in single-tasking fashion) across all the data and measuring the correlation between these two average representations. We can compute this layer-wise by only considering the average representation at a certain layer. \subsection{Meta-Learning Experimental Details} Information about hyper-parameters used for meta-learning are shown in Table~\ref{table:hyperparam}. The hyper-parameters for number of particles and posterior re-computation were primarily picked to have a feasible running-time whereas the prior parameters were picked to be representative of a reward function that was non-decreasing over time and converging to perfect performance by the end of the total number of trials. The error bars in Figure $6$a and $6$b represent $95 \%$ confidence intervals computed using $15$ different runs of the meta-learner. \begin{table}[t] \centering \begin{tabular}{|c|c|} \hline Hyper-parameter Description & Value \\ \hline \hline Number of Particles for SVGD & $5$ \\ \hline \shortstack{Amount of new trial data \\ to re-compute posterior} & $50$ \\ \hline Prior distribution for $w_1$ & $\mathcal{N}(0.001, 0.2)$ \\ \hline Prior distribution for $w_2$ & $\mathcal{N}(10, 1)$ \\ \hline Prior distribution for $b_1$ & $\mathcal{N}(-2, 1)$ \\ \hline Prior distribution for $b_2$ & $\mathcal{N}(-5, 1)$ \\ \hline \end{tabular} \caption{Hyper-parameters for meta-learning.} \label{table:hyperparam} \end{table} \subsection{Visualization of Multitasking Error} Figure~\ref{fig:multierrorvis} visualizes the outputs of a single-tasking trained and a multitasking trained network asked to perform Task $1$ (GPS-localization) and Task $4$ (Image-classification) concurrently. The examples show mis-classifications by the single-tasking trained network on Task $1$ when multitasking, as the single-tasking trained network seems to err towards the output of Task $3$ (Image-localization). Executing Task $1$ and $4$ requires activation of representations for both tasks in the hidden layers by the task-input layer. This leads to an implicit engagement of Task $3$ which shares a representation with Task $4$, leading to cross-talk with Task $1$ at the location output layer. In these examples, we see that the prediction for the GPS location is biased toward the location of the object which would correspond to the correct label for the implicitly activated Task $3$. The multitasking trained network, on the other hand, does not suffer from this cross-talk and is able to execute Tasks $1$ and $4$ concurrently with no error. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/multi_error_vis.png} \caption{Visualization of predictions from concurrent execution of Tasks 1 and 4 in a single-tasking trained (left) and multitasking trained (right) network. 
For a correct output for Task 1 (GPS-localization), the predicted output (green box) should contain the GPS-input (green point).} \label{fig:multierrorvis} \end{figure} \subsection{Effect of Sharing Representations on Learning Speed and Multitasking Ability with Different Initialization} In Figure~\ref{fig:multitask_diff_init}, we show results comparing single-task vs multitask training when the network isn't as biased towards using shared representations because of initialization using smaller task-associated weights. We see the same conclusions as the previous related experiment in the main text; however, the learning speed benefit of the single-task trained network seems even larger in this case. \begingroup \makeatletter \renewcommand{\p@subfigure}{ \begin{figure} \centering \begin{subfigure}[b]{.38\textwidth} \centering \includegraphics[width=1.15\linewidth]{plots/multitask_learning_speed_eb_init.png} \caption{} \label{fig:multitask_learning_speed} \end{subfigure} \begin{subfigure}[b]{0.34\textwidth} \centering \begin{minipage}[b]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{plots/multitask_error_eb_init.png} \caption{} \label{fig:multitask_error} \end{minipage} \begin{minipage}[b]{0.55\textwidth} \centering \includegraphics[width=\textwidth]{plots/multitask_layer_corr_eb_init.png} \caption{} \label{fig:multitask_corr} \end{minipage} \end{subfigure} \vspace{-5pt} \caption{Effect of single-task vs multitask training. (\subref{fig:multitask_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:multitask_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking (the lack of a bar indicates no error). (\subref{fig:multitask_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We again show results for the tasks involving the convolutional network.} \label{fig:multitask_diff_init} \vspace{-5pt} \end{figure} \endgroup \subsection{Visualization of Meta-Learner} In Figure~\ref{fig:posterior}, we visualize the predictive distribution of rewards at various trials when varying amount of data has been observed. We see that the predictive distribution is initially uncertain when observing a small amount of rewards for each strategy (which is useful for exploration) and grows certain as more data is observed (which is utilized to be greedy). \begin{figure} \begin{subfigure}[t]{\textwidth} \captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm} \includegraphics[width=0.5\linewidth]{plots/posterior1.png} \caption{} \label{fig:posterior1} \end{subfigure} \begin{subfigure}[t]{\textwidth} \captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm} \includegraphics[width=0.5\linewidth]{plots/posterior2.png} \caption{} \label{fig:posterior2} \end{subfigure} \begin{subfigure}[t]{\textwidth} \captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm} \includegraphics[width=0.5\linewidth]{plots/posterior3.png} \caption{} \label{fig:posterior3} \end{subfigure} \caption{Visualization of actual rewards and predictive distribution of rewards for a specific task. Shaded areas correspond to $\pm 3$ standard deviations around mean. 
For each of (\subref{fig:posterior1}), (\subref{fig:posterior2}), and (\subref{fig:posterior3}), we show the actual rewards accumulated over trials for each strategy (on top) and the predictive distribution over reward data computed using samples from the posterior distribution over parameters for each strategy given the reward data (on bottom).} \label{fig:posterior} \end{figure} \section{Background} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/example_network_s.eps} \caption{Neural network architecture from \citet{musslick2016controlled}.} \label{fig:example_NN} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/basis_vs_tensor_s.eps} \caption{Network structure for minimal basis set (left) and tensor product (right) representations and the effects of multitasking in each. Red cross indicates error in execution of task because of interference whereas green check-mark indicates successful execution.} \label{fig:network_structure} \end{figure} \subsection{Definition of Tasks and Multitasking} Consider an environment in which there are multiple stimulus input dimensions (e.g. corresponding to different sensory modalities) and multiple output dimensions (corresponding to different response modalities). Given an input dimension $I$ (e.g. an image) and an output dimension $O$ (e.g. object category) of responses, a task $T: I \to O$ represents a mapping between the two (e.g. mapping a set of images to a set of object categories), such that the mapping is independent of any other. Thus, given $N$ different input dimensions and $K$ possible output dimensions, there is a total of $NK$ possible independent tasks that the network can learn to perform. Finally, multitasking refers to the simultaneous execution of multiple tasks, i.e. within one forward-pass from the inputs to the outputs of a network. Note that such multitasking differs from multi-task learning in that multitasking requires tasks to map different input dimensions to different output dimensions \cite{pashler1994dual} in a way that each is independent of the other, whereas typically in multi-task learning all tasks map the same input dimension to different output dimensions \cite{caruana1997multitask}. \subsection{Processing Single and Multiple Tasks Based on Task Projections} \label{task-projection} We focus on a network architecture that has been used extensively in previous work \cite{cohen1990control,botvinick2001conflict,musslick2017multitasking} (shown in Figure~\ref{fig:example_NN}). Here, in addition to the set of stimulus inputs, there is also an input dimension to indicate which task the network should perform. This task vector is projected to the hidden units and output units using learned weights. The hidden unit task projection biases the hidden layer to calculate a specific representation required for each task, whereas the output unit projection biases the outputs to only allow the output that is relevant for the task. The functional role of the task layer is inspired by the notion of cognitive control and attention in psychology and neuroscience, that is, the ability to flexibly guide information processing according to current task goals \cite{shiffrin1977controlled,posnerr,cohen1990control}. Assuming that the task representations used to specify different tasks are orthogonal to one another (e.g., using a one hot code for each), then multitasking can be specified by a superposition (sum) of the representations for the desired tasks in the task input layer. 
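As a small illustration of this superposition, assuming one-hot task codes (the number of tasks and the chosen task indices below are arbitrary):
\begin{verbatim}
import numpy as np

n_tasks = 4
task_vectors = np.eye(n_tasks)     # orthogonal one-hot code, one row per task

# Single-task specification: activate exactly one task unit
x_task_single = task_vectors[0]                    # Task 1 only -> [1, 0, 0, 0]

# Multitasking specification: superpose (sum) the desired task codes
x_task_multi = task_vectors[0] + task_vectors[3]   # Tasks 1 and 4 -> [1, 0, 0, 1]
\end{verbatim}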
The weights learned for the projections from the task input units to units in the hidden layers, together with those learned within the rest of the network, co-determine what type of representation (shared or separate) the network uses. \subsection{Minimal Basis Set vs Tensor Product Representations} Previous work \cite{feng2014multitasking,musslick2016controlled,musslick2017multitasking} has established that, in the extreme, there are two ways that different tasks can be represented in the hidden layer of a two-layer network. The first representational scheme is the \emph{minimal basis set} (shown on the left in Figure~\ref{fig:network_structure}), in which all tasks that rely on the same input encode the input in the same set of hidden representations. The second scheme is the \emph{tensor product} (shown on the right in Figure~\ref{fig:network_structure}), in which the input for each task is separately encoded in its own set of hidden representations. Thus, the minimal basis set maximally shares representations across tasks whereas the tensor product uses separate representations for each task. These two representational schemes pose a fundamental trade-off. The minimal basis set provides a more efficient encoding of the inputs, and allows for faster learning of the tasks because of the sharing of information across tasks. However, it prohibits executing more than one task at a time (i.e. any multitasking). This is because, with the minimal basis set, attempting to execute two tasks concurrently causes the implicit execution of other tasks due to the representational sharing between tasks. In contrast, while the tensor product network scheme is less compact, multitasking is possible since each task is encoded separately in the network, so that cross-talk does not arise among them (see Figure \ref{fig:network_structure} for an example of multitasking and its effects in both types of networks). However, learning the tensor product representation takes longer since it cannot exploit the sharing of representations across tasks. The type of representation learned by the network can be determined by the type of task-processing on which it is trained. Single-task training (referred to in the literature as multi-task training) involves training on tasks one at a time and generally induces shared representations. In contrast, multitask training involves training on multiple tasks concurrently and produces separate representations. This occurs because using shared representations when multitasking causes interference and thus error in task execution. In order to minimize this error and the cross-talk that is responsible for it, the network learns task projection weights that lead to separate representations for the tasks. In single-task training, there is no such pressure, as there is no potential for interference when executing one task at a time, and so the network can use shared representations. These effects have been established both theoretically and experimentally for shallow networks with one hidden layer trained to perform simple synthetic tasks \cite{musslick2016controlled,musslick2017multitasking}. Below, we report results suggesting that they generalize to deep neural networks trained on more complex tasks. \section{Discussion} In this work we study the trade-off between using shared vs separated representations in deep neural networks.
We experimentally show that using shared representations leads to faster learning but at the cost of degraded multitasking performance\footnote{Note that limitations in multitasking due to shared representations may be bypassed by executing different single tasks across multiple copies of the trained network. However, this strategy appears inefficient as it requires an amount of memory and computation that scales with the number of tasks to be executed.}. We additionally propose and evaluate a meta-learning algorithm to decide which training strategy is best to use in an environment with unknown serialization cost. We believe simultaneous task execution as considered here could be important for real-world applications as it minimizes the number of forward passes needed to execute a set of tasks. The cost of a forward pass is an important factor in embedded devices (in terms of both time and energy required) and scales badly as we consider larger task spaces and more complex networks. Thus, optimally managing the trade-off between learning speed and multitasking could be crucial for maximizing efficiency in such situations. A promising direction for future studies involves application of this meta-learner to more complex tasks. As we add more tasks, the potential for interference across tasks increases; however, as tasks become more difficult, the minimal basis set becomes more desirable, as there is an even bigger benefit to sharing representations. Furthermore, in this more complicated setting, we would also like to expand our meta-learning algorithm to decide explicitly which set of tasks should be learned so that they can be executed in multitasking fashion and which set of tasks should only be executed one at a time. This requires a more complicated model, as we have to keep track of many possible strategies in order to see what will give the most reward in the future. \section{Related Work} The most relevant work from the multi-task learning literature focuses on maximizing positive transfer across tasks while minimizing negative transfer. This includes work on minimizing learning interference when doing multi-task training \cite{teh2017distral,rosenbaum2017routing} and reducing catastrophic interference when learning tasks one after another \cite{rusu2016progressive,kirkpatrick2017overcoming}. However, to our knowledge, none of these works explicitly deal with the issue of how the type of representations a network uses affects whether it can execute tasks serially or in parallel. Additionally, as mentioned previously, we build on previous work studying the trade-off of learning speed vs multitasking ability in artificial neural networks \cite{feng2014multitasking,musslick2016controlled,musslick2017multitasking}. Our meta-learning algorithm is similar to the one proposed by \citet{sagivefficiency}. However, we explicitly use the model's estimate of future rewards under each strategy to also decide how to train the network, whereas the meta-learner in \cite{sagivefficiency} was not applied to a neural network's learning dynamics. Instead, the actual learning curve for each strategy $A$ was defined according to a pre-defined synthetic function.
Our algorithm is thus applied in a much more complex setting in which the estimation of each strategy's future rewards directly affects how the network chooses to be trained. Furthermore, our method is fully Bayesian in the sense that we utilize uncertainty in the parameter posterior distribution to control exploration vs exploitation via Thompson sampling. In \cite{sagivefficiency}, logistic regression was combined with the $\epsilon$-greedy method to perform this trade-off, which requires hyper-parameters to control the degree of exploration. Lastly, we assume that the serialization cost is unknown and model its effects on the future reward of each strategy, whereas \cite{sagivefficiency} makes the simplifying assumption that the cost is known. Modeling the effects of an unknown serialization cost on the reward makes the problem more difficult but is a necessary assumption when deploying agents that need to make such decisions in a new environment with unknown properties. Finally, previous work on bounded optimality \cite{russell1994provably,lieder2017strategy} is also relevant, as it is closely related to the idea of optimizing a series of computations given a processing cost, as our proposed meta-learner does. \section{Introduction} Many recent advances in machine learning can be attributed to the ability of neural networks to learn and to process complex representations by simultaneously taking into account a large number of interrelated and interacting constraints - a property often referred to as parallel distributed processing \cite{mcclelland1986appeal}. Here, we refer to this sort of parallel processing as interactive parallelism. This type of parallelism stands in contrast to the ability of a network architecture to carry out multiple processes independently at the same time. We refer to this as independent parallelism; it is heavily used, for example, in computing clusters to distribute independent units of computation in order to minimize compute time. Most applications of neural networks have exploited the benefits of interactive parallelism \cite{bengio2013representation}. For instance, in the multi-task learning paradigm, learning of a task is facilitated by training a network on various related tasks \cite{caruana1997multitask,collobert2008unified,kaiser2017one,kendall2018multi}. This learning benefit has been hypothesized to arise due to the development of shared representations between tasks \cite{baxter1995learning,caruana1997multitask}. However, the capacity of such networks to execute multiple tasks simultaneously\footnote{Here we refer to the simultaneous execution of multiple tasks in a single feed-forward pass.} (what we call multitasking) has been less explored. Recent work \cite{musslick2016controlled,musslick2017multitasking,PetriInPrep} has hypothesized that the trade-off between these two types of computation is critical to certain aspects of human cognition.
Specifically, though interactive parallelism allows for quicker learning and greater generalization via the use of shared representations, it poses the risk of cross-talk, thus limiting the number of tasks that can be executed at the same time (i.e. multitasking). Navigation of this trade-off by the human brain may explain why we are able to multitask some tasks in daily life (such as talking while walking) but not others (for example, doing two mental arithmetic problems at the same time). \citet{musslick2017multitasking} have shown that this trade-off is also faced by artificial neural networks when trained to perform simple synthetic tasks. This previous work demonstrates both computationally and analytically that the improvement in learning speed through the use of shared representation comes at the cost of limitations in concurrent multitasking \cite{musslick2020rationalizing}. While these studies were informative, they were limited to shallow networks and simple tasks.\footnote{See \citet{Alon2017,musslick2020rational} for a graph-theoretic analysis of multitasking capability as a function of network depth.} Moreover, this work raises an important, but as yet unanswered question: how can an agent optimally trade-off the efficiency of multi-task learning against multitasking capability? In this work, we: (a) show that this trade-off also arises in deep convolutional networks used to learn more complex tasks; (b) demonstrate that this trade-off can be managed by using single-task vs multitask training to control whether or not representations are shared; (c) propose and evaluate a meta-learning algorithm that can be used by a network to regulate its training and optimally manage the trade-off between multi-task learning and multitasking in an environment with unknown serialization costs. \section{Experiments} In this section, we evaluate experimentally the aforementioned trade-off and proposed meta-learner model to resolve it. We first start by describing the task environment we use and the set of tasks we consider. We then describe the neural network architecture used, including the specific form of the task projection layer mentioned in section \ref{task-projection} that we use and how training occurs for single-tasking and multitasking. In sections \ref{tradeoff1} and \ref{tradeoff2}, we show through experiments explicitly how the trade-off arises in the task environment. Lastly, in section \ref{meta-learning}, we evaluate our proposed meta-learner's ability to navigate this trade-off in the environment given that there is an unknown serialization cost. \subsection{Experimental Setup} \begin{figure} \centering \includegraphics[width=3.8cm]{figures/conv_net_example.eps} \caption{Neural network architecture used.} \label{fig:example_convNN} \end{figure} We create a synthetic task environment using AirSim \cite} \newcommand{\citealp}[1]{\citeauthor{#1} \citeyear{#1}{shah2018airsim}, an open-source simulator for autonomous vehicles built on Unreal Engine\footnote{Code and data will be released in final version of paper.}. We assume a drone-agent that has two stimulus-inputs: (1) a GPS-input through which it can be given location-relevant information; (2) an image-input providing it visual information (e.g. from a camera). The agent also has two outputs: (1) a location-output designating a location in the input image; (2) an object-output designating the object the agent believes is present in the input. 
Based on the definition of a task as a mapping from one input to one output, this gives us the following four tasks that the agent can perform: \begin{enumerate}[itemsep=0mm, leftmargin=3\parindent] \item[Task 1] (GPS-localization): given a GPS location, output the position in the image of that location. \item[Task 2] (GPS-classification): given a GPS location, output the type of object the agent expects to be in that area based on its experience. \item[Task 3] (Image-localization): given a visual image, output the location of the object in the image. \item[Task 4] (Image-classification): given a visual image, output the type of object in the image. \end{enumerate} Using AirSim, we simulate an ocean-based environment with a set of different possible objects (such as whales, dolphins, orcas, and boats). We create training examples for the agent by randomizing the location of the agent within the environment, the type of object present in the visual input, the location and rotation of the object, and the GPS location provided to the agent. Thus, each training instance contains a set of randomized inputs and a label for each of the tasks with respect to the specific inputs. The agent can execute each task using either single-tasking (one after another) or multitasking (in which it can execute Tasks $1$ and $4$ together or Tasks $2$ and $3$ together). Note that in this setup, only $2$ tasks at most can be performed simultaneously as we will have conflicting outputs if we attempt to multitask more than $2$ tasks. \subsection{Neural Network Architecture} The GPS-input is processed using a single-layer neural network, whereas the image-input is processed using a multi-layer convolutional neural network. The encoded inputs are then mapped via fully-connected layers to each output. We allow the task input to modify each hidden, or convolutional, layer using a learned projection of the task input specific to each layer. This is related to the idea of cognitive control in psychology \cite{cohen1990control} but also to attention mechanisms used in machine learning \cite{hochreiter1997long}. More formally, the task-specific projection for the $i^{\text{th}}$ layer $\mathbf{c}_i$ is computed using a matrix multiplication with the learned task projection matrix $\mathbf{W}_{t,i}$ and the task-input $\mathbf{x}_t$, followed by a sigmoid: \[ \mathbf{c}_i = \sigma(\mathbf{W}_{t,i} \mathbf{x}_t - \beta), \] where $\beta$ is a positive constant. The subtraction of $\beta > 0$ means that task projections are by default ``off'', i.e. close to $0$. For a fully-connected layer, the task projection $\mathbf{c}_i$ modifies the hidden units of the $i^{\text{th}}$ layer $\mathbf{h}_i$ through multiplicative gating to compute the hidden units $\mathbf{h}_{i+1}$: \[ \mathbf{h}_{i+1} = g\left( \left( \mathbf{W}_{h,i} \mathbf{h}_{i} + \mathbf{b}_i \right) \odot \mathbf{c}_i \right), \] where $\mathbf{W}_{h,i}$ and $\mathbf{b}_i$ are the typical weight matrix and bias for the fully-connected layer, and $g$ is the non-linearity. For the hidden units, we let $g$ be the rectified linear activation function (ReLU) whereas for the output units it is the identity function.
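A minimal PyTorch-style sketch of the fully-connected variant of this gating is given below; the layer sizes, the value of $\beta$, and the module name are illustrative assumptions rather than the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class TaskGatedLinear(nn.Module):
    """Fully-connected layer modulated by a learned task projection."""

    def __init__(self, in_dim, out_dim, n_tasks, beta=2.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)                      # W_{h,i}, b_i
        self.task_proj = nn.Linear(n_tasks, out_dim, bias=False)  # W_{t,i}
        self.beta = beta            # positive constant: gates default to "off"

    def forward(self, h, x_task):
        c = torch.sigmoid(self.task_proj(x_task) - self.beta)  # c_i
        return torch.relu(self.fc(h) * c)     # elementwise gating, g = ReLU

# Hypothetical usage: batch of 8, 512 hidden units, 4 tasks
layer = TaskGatedLinear(in_dim=512, out_dim=512, n_tasks=4)
h_next = layer(torch.randn(8, 512),
               torch.tensor([[1., 0., 0., 1.]] * 8))   # Tasks 1 and 4 together
\end{verbatim}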
Similarly, for a convolutional layer the feature maps $\mathbf{h}_{i+1}$ are computed from $\mathbf{h}_{i}$ as: \[ \mathbf{h}_{i+1} = g\left( \left(\mathbf{h}_{i} * \mathbf{W}_{h,i} + \mathbf{b}_i \right) \odot \mathbf{c}_i \right), \] where $\mathbf{W}_{h,i}$ is now the convolutional kernel. Note that we use multiplicative biasing via the task projection whereas previous work \cite} \newcommand{\citealp}[1]{\citeauthor{#1} \citeyear{#1}{musslick2016controlled,musslick2017multitasking} used additive biasing. We found multiplicative biasing to work better for settings in which the task projection matrix needs to be learned. A visual example of the network architecture is shown in Figure~\ref{fig:example_convNN}. Training in this network occurs in the typical supervised way with some modifications. To train for a specific task, we feed in the stimulus-input and associated task-input, and train the network to produce the correct label at the output associated with the task. For outputs not associated with the task, we train the network to output some default value. In this work, we focus on classification-based tasks for simplicity, and so the network is trained via cross-entropy loss computed using the softmax over the network output logits and the true class label. To train the network on multitasking, we feed in the stimulus-input and the associated task-input (indicating which set of tasks to perform concurrently) and train the network on the sum of losses computed at the outputs associated with the set of tasks. Note that we consider the localization-based tasks as classification tasks by outputting a distribution over a set of pre-determined bounding boxes that partition the image space. \begingroup \makeatletter \renewcommand{\p@subfigure}{ \begin{figure}[t] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.82\linewidth]{plots/overlap_learning_speed_eb.eps} \caption{} \label{fig:overlap_learning_speed} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \begin{minipage}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{plots/overlap_error_eb.eps} \caption{} \label{fig:overlap_error} \end{minipage} \begin{minipage}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{plots/overlap_layer_corr_eb.eps} \caption{} \label{fig:overlap_corr} \end{minipage} \end{subfigure} \caption{Effect of varying representational overlap. (\subref{fig:overlap_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:overlap_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking. (\subref{fig:overlap_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We show results for the tasks involving the convolutional network, as those are the more complex tasks we are interested in.} \label{fig:overlap} \end{figure} \endgroup \subsection{Effect of Sharing Representations on Learning Speed and Multitasking Ability} \label{tradeoff1} First, we consider the effect of the degree of shared representations on learning speed and multitasking ability. We control the level of sharing in the representations used by the network by manipulating the task-associated weights $\mathbf{W}_{t,i}$, which implement, in effect, the task projection for each task. 
The more similar the task projections are for two tasks, the higher the level of sharing, because more of the same hidden units are used for the two tasks. We vary $\mathbf{W}_{t,i}$ to manipulate what percentage of hidden units overlaps between the tasks. Thus, $100 \%$ overlap indicates that all hidden units are used by all tasks; $50 \%$ overlap indicates that $50 \%$ of the hidden units are shared between the tasks whereas the remaining $50 \%$ are split to be used independently for each task; and $0 \%$ overlap indicates that the tasks do not share any hidden units in a layer. Note that in this experiment, the task-associated weights are frozen during training at the initialization that results in the specific overlap percentage, while the weights in the remainder of the network are free to be learned. Based on previous work \cite{musslick2016controlled,musslick2017multitasking}, we measure the degree of sharing at a certain layer between two task representations by computing the correlation between the mean representations for the tasks, where the mean is computed by averaging the activity at the layer across all training examples for a given task. The results of the experiment manipulating the level of overlap are shown in Figure~\ref{fig:overlap}. These show that as the overlap is increased, the sharing of representations across tasks increases (as evidenced by the increase in correlations), which is associated with an increase in learning speed. However, it also comes with a degradation of the multitasking ability of the network, as a result of the increased interference caused by the increased sharing of the representations. Note that the network with $0 \%$ overlap does not achieve error-free multitasking performance. This suggests that there is a residual amount of interference in the network induced by single-task training that cannot be attributed to the chosen manipulation, i.e. the overlap between task representations. \subsection{Effect of Single-task vs Multitask Training} \label{tradeoff2} Having established that there is a trade-off in using shared representations in the deep neural network architecture described, we now focus on how different training regimens - using single-tasking vs multitasking - impact the representations used by the network and the network's learning speed. Previous work indicated that single-task training promotes shared representations and learning efficiency \cite{caruana1997multitask,musslick2017multitasking} whereas training a network to execute multiple tasks in parallel yields separated representations between tasks and improvements in multitasking \cite{MusslickCohen2019}. We compare different networks that vary in how much multitasking they are trained to do, from $0 \%$, in which the network is given only single-task training, to $90 \%$, in which the network is trained most of the time to do multitasking. Here, the task-associated weights $\mathbf{W}_{t,i}$ are initialized to be uniformly high across the tasks, meaning that the network is initially biased towards using shared representations, and all the weights (including task weights) are then learned based on the training regimen encountered by the network. We also conduct an experiment in which the network isn't as biased towards using shared representations by initializing smaller task-associated weights (see supplementary material).
We note that the number of examples and the sequence of examples for each task are the same for both types of conditions (single-tasking or multitasking). The only difference is that in the case of single-task learning each task is learned independently using different forward and backward passes whereas in multitasking, multiple tasks can be processed together and thus learned together. The results of this experiment (Figure~\ref{fig:multitask}) show that as the network is trained to do more multitasking, the learning speed of the network decreases and the correlation of the task representations also decreases. Because the network is initialized to use highly shared representations, we see that a multitasking training regimen clearly forces the network to move away from this initial starting point. The effect is stronger in the later layers, possibly because these layers may contribute more directly to the interference caused when multitasking. \begingroup \makeatletter \renewcommand{\p@subfigure}{ \begin{figure}[t] \centering \begin{subfigure}[b]{.5\textwidth} \centering \includegraphics[width=0.82\linewidth]{plots/multitask_learning_speed_eb.eps} \caption{} \label{fig:multitask_learning_speed} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \begin{minipage}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{plots/multitask_error_eb.eps} \caption{} \label{fig:multitask_error} \end{minipage} \begin{minipage}[b]{0.42\textwidth} \centering \includegraphics[width=\textwidth]{plots/multitask_layer_corr_eb.eps} \caption{} \label{fig:multitask_corr} \end{minipage} \end{subfigure} \caption{Effect of single-task vs multitask training. (\subref{fig:multitask_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:multitask_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking (the lack of a bar indicates no error). (\subref{fig:multitask_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We again show results for the tasks involving the convolutional network.} \label{fig:multitask} \end{figure} \endgroup \begin{figure}[t!] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{plots/meta_learner_results1.eps} \caption{} \label{fig:meta_learner_eval1} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{plots/meta_learner_results2.eps} \caption{} \label{fig:meta_learner_eval2} \end{subfigure} \begin{subfigure}[b]{.23\textwidth} \centering \includegraphics[width=\linewidth]{plots/meta_learner_single_percent.eps} \caption{} \label{fig:single_percent} \end{subfigure} \caption{Evaluation of meta-learning algorithm. (\subref{fig:meta_learner_eval1}) Comparison of all methods on trade-off induced in original environment. (\subref{fig:meta_learner_eval2}) Comparison of all methods on trade-off induced in environment where noise is added to inputs. (\subref{fig:single_percent}) Percent of trials for which meta-learner picks to do single-tasking in both environments.} \label{fig:meta_learner_eval} \end{figure} \subsection{Meta-Learning} \label{meta-learning} Finally, having established the trade-off between single-task and multitask training, we evaluate the meta-learning algorithm to test its effectiveness in optimizing this trade-off. 
In order to test this in an environment with unknown serialization cost, we compare it with the extremes of always picking single-task or multitask training. We fix the total number of trials to be $\tau = 5000$ and evaluate each of the methods on varying serialization costs. For the meta-learner, we average the performances over $15$ different runs in order to account for the randomness involved in its sampling choices and measure its confidence interval. We fix the order in which data is presented for the tasks for all options when comparing them. Note that the meta-learner does not know the serialization cost and so has to model its effects as part of the received reward. We create two different environments to induce different trade-offs for rewards between single-tasking and multitasking. The first is a deterministic environment whereas in the second we add noise to the inputs. Adding noise to the inputs makes the tasks harder and seems to give bigger benefit to the minimal basis set (and single-task training). We hypothesize that this is the case because sharing information across tasks becomes more valuable when noisy information is provided for each task. Figures \ref{fig:meta_learner_eval1} and \ref{fig:meta_learner_eval2} show that the meta-learning algorithm achieves a reward rate that closely approximates the one achieved by the strategy that yields the greatest reward for a given serialization cost. Additionally, note that in the extremes of the serialization cost, the meta-learner seems better at converging to the correct training strategy, while it achieves a lower reward when the optimal strategy is harder to assess. This difference is even clearer when we study the average percent of trials for which the meta-learner picks single-task training as a function of the serialization cost in Figure \ref{fig:single_percent}. We see that the meta-learning algorithm is well-behaved, in that as the serialization cost increases, the percent of trials in which it selects to do single-tasking smoothly decreases. Additionally, at the points at which the optimal strategy is harder to determine, the meta-learner achieves reward closer to the worst strategy because it needs more time to sample each strategy before settling on one.
proofpile-arXiv_065-251
\section{Introduction} In recent years, advances in GPU technology and machine learning libraries enabled the trend towards deeper neural networks in Automatic Speech Recognition (ASR) systems. End-to-end ASR systems transcribe speech features to letters or tokens without any intermediate representations. There are two major techniques: \begin{inparaenum}[1)] \item Connectionist Temporal Classification (CTC~\cite{graves2006connectionist}) carries the concept of hidden Markov states over to end-to-end neural networks as training loss for sequence classification networks. Neural networks trained with CTC loss calculate the posterior probability of each letter at a given time step in the input sequence. \item Attention-based encoder-decoder architectures such as~\cite{chan2016listen}, are trained as auto-regressive sequence-generative models. The encoder transforms the input sequence into a latent representation; from this, the decoder generates the sentence transcription. \end{inparaenum} The hybrid CTC/attention architecture combines these two approaches in one single neural network~\cite{watanabe2017hybrid}. Our work is motivated by the observation that adding a small amount of specially crafted noise to a sample given to a neural network can cause the neural network to wrongly classify its input~\cite{szegedy2013intriguing}. From the standpoint of system security, those algorithms have implications on possible attack scenarios. A news program or sound that was augmented with a barely noticeable noise can give hidden voice commands, e.g. to open the door, to the ASR system of a personal assistant~\cite{carlini2016hidden,carlini2018audio}. From the perspective of ASR research, a network should be robust against such small perturbations that can change the transcription of an utterance; its speech recognition capability shall relate more closely to what humans understand. In speech recognition domain, working Audio Adversarial Examples (AAEs) were already demonstrated for CTC-based~\cite{carlini2018audio}, as well as for attention-based ASR systems~\cite{sun2019adversarial}. The contribution of this work is a method for generation of untargeted adversarial examples in feature domain for the hybrid CTC/attention ASR system. For this, we propose two novel algorithms that can be used to generate AAE for attention-based encoder-decoder architectures. We then combine these with CTC-based AAEs to introduce an algorithm for joint CTC/attention AAE generation. To further evaluate our methods and exploit the information within AAEs, the ASR network training is then augmented with generated AAEs. Results indicate improved robustness of the model against adversarial examples, as well as a generally improved speech recognition performance by a moderate $10\%$ relative to the baseline model. \section{Related Work} \paragraph{Automatic Speech Recognition (ASR) Architecture.} Our work builds on the hybrid CTC/attention ASR architecture as proposed and described in~\cite{watanabe2018espnet,watanabe2017hybrid}, using the location-aware attention mechanism~\cite{ChorowskiEtAl15}. This framework combines the most two popular techniques in end-to-end ASR: Connectionist Temporal Classification (CTC), as proposed in~\cite{graves2006connectionist}, and attention-based encoder-decoder architectures. Attention-based sequence transcription was proposed in the field of machine language translation in~\cite{BahdanauEtAl14} and later applied to speech recognition in Listen-Attend-Spell~\cite{chan2016listen}. 
Sentence transcription is performed with the help of an RNN language model (RNNLM) integrated into the decoding process using {shallow fusion}~\cite{GulcehreEtAl15}. \paragraph{Audio Adversarial Examples (AAEs).} Adversarial examples were originally proposed and developed in the image recognition field and have since been investigated extensively~\cite{szegedy2013intriguing,kurakin2016adversarialphysical,kurakin2016adversarial}. The most widely known method for their generation is the Fast Gradient Sign Method (FGSM)~\cite{goodfellow2014explaining}. Adversarial examples can be prone to label leaking~\cite{kurakin2016adversarial}, that is, when the model does not have difficulties finding the original class of the disguised sample, as the transformation from the original is ``simple and predictable''. The implementation of AAEs in ASR systems has proven to be more difficult than in image processing~\cite{cisse2017houdini}. Some of them work irrespective of the architecture \cite{neekhara2019universal,vadillo2019universal,abdoli2019universal}. However, these examples are crafted and tested using simplified architectures, either RNN or CNN. They lack an attention mechanism, which is a relevant component of the framework used in our work. Other works focus on making AAEs remain undetected by human subjects, e.g., by {psychoacoustic hiding}~\cite{schonherr2018adversarial,qin2019imperceptible}. Carlini et al.~\cite{carlini2016hidden} demonstrated how to extract AAEs for the CTC-based DeepSpeech architecture~\cite{Hannun2014DeepSS} by applying the FGSM to the CTC loss. Hu et al. give a general overview of adversarial attacks on ASR systems and possible defense mechanisms in~\cite{hu2019adversarial}. In it, they observe that by treating the feature matrix of the audio input as the AAE seed, it is possible to generate AAEs with algorithms developed in the image processing field. However, such AAEs cannot be transformed back to the audio domain, as the extraction of log-mel f-bank features is lossy. Some have proposed ways to overcome this problem~\cite{Andronic2020}. AAEs on the sequence-to-sequence attention-based LAS model~\cite{chan2016listen}, obtained by extending the FGSM to attention, are presented in~\cite{sun2019adversarial}. In the same work, Sun et al. also propose adversarial regularization to improve model robustness by feeding back AAEs into the training loop. \section{Audio Adversarial Example (AAE) Generation} The following paragraphs describe the proposed algorithms to generate AAEs (a) from two attention-based gradient methods, either using a static or a moving window adversarial loss; (b) from a CTC-based FGSM; and (c) combining both previous approaches in a joint CTC/attention approach. In general, those methods apply the single-step FGSM~\cite{goodfellow2014explaining} to audio data and generate an additive adversarial noise $\bm{\delta}(\bm{x}_t)$ from a given audio feature sequence $\bm{X}=\bm{x}_{1:T}$, i.e., \begin{equation} \hat{\bm{x}}_t = \bm{x}_t + \bm{\delta}(\bm{x}_t),\hspace{0.02\textwidth} \forall t\in[1,T]. \end{equation} We assume a \textit{whitebox} model, i.e., model parameters are known, in order to perform backpropagation through the neural network. For any AAE algorithm, its reference sentence $y_{1:L}^*$ is derived from the network by decoding $\bm{x}_{1:T}$, instead of using the ground truth sequence, to avoid label leaking~\cite{kurakin2016adversarial}.
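Framed in the feature domain, the basic FGSM step underlying all of the variants described below can be sketched as follows; the model, loss function, and tensor shapes are generic placeholders, not the ESPnet implementation used in the experiments.
\begin{verbatim}
import torch

def fgsm_features(model, loss_fn, feats, y_ref, epsilon):
    # feats: feature sequence X = x_{1:T}, shape (T, n_mels)
    # y_ref: reference transcription decoded from the unmodified features
    feats = feats.clone().detach().requires_grad_(True)
    loss = loss_fn(model(feats), y_ref)     # adversarial loss J(X, y*; theta)
    loss.backward()
    delta = epsilon * feats.grad.sign()     # delta(x_t) = eps * sgn(grad_x J)
    return (feats + delta).detach()         # x_hat_t = x_t + delta(x_t)
\end{verbatim}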
\subsection{Attention-based Static Window AAEs} For attention-based AAEs, the cross-entropy loss $J(\bm{X}, y_{l}; \bm{\theta})$ w.r.t. $\bm{x}_{1:T}$ is obtained by iterating over the {sequential token posteriors $p(y^*_{l}|y^*_{1:(l-1)})$} given by the attention decoder. Sequence-to-sequence FGSM, as proposed in~\cite{sun2019adversarial}, then calculates $\bm{\delta}(\bm{x}_t)$ from the \emph{total} sequence as \begin{align}\label{eq:seq-2-seq-fsgm} \bm{\delta}(\bm{x}_t) &= \epsilon \cdot \sgn(\nabla_{\bm{x}_t} \sum_{l = 1}^{L} J(\bm{X}, y_l^*; \bm{\theta} )). \end{align} In contrast to this algorithm, our approach does not focus on the total token sequence but only on a window of consecutive sequential steps. This is motivated by the observation that attention-based decoding is auto-regressive; perturbing the attention mechanism at one single step in the sequence can change the corresponding portion of the transcription and, to a certain degree, throw off further decoding. A sum over all sequence parts as in Eq.~\ref{eq:seq-2-seq-fsgm} may dissipate such localized adversarial noise. From this, our first attention-based method is derived, which takes a single portion out of the output sequence; in the following, we term it the \emph{static window} method. Gradients are accumulated from the start token $ \gamma $ over the following $ l_w $ tokens, such that \begin{equation}\label{eq:window-static} \bm{\delta}_{\text{SW}}(\bm{x}_t) = \epsilon\cdot \sgn(\nabla_{\bm{x}_t} \sum_{l = \gamma}^{\gamma + l_w} J(\bm{X}, y_l^*; \bm{\theta} )). \end{equation} \subsection{Attention-based Moving Window AAEs} As observed in our experiments, the effectiveness of the {static window} method varies strongly depending on the segment position: the adversarial loss from some segments has a higher impact than that from others, and some perturbations only affect local parts of the transcription. Therefore, as an extension of the static window gradient derived in Eq.~\ref{eq:window-static}, multiple segments of the sequence can be selected to generate $\bm{\delta}_{\text{MW}}(\bm{x}_t)$. We term this the {\emph{moving window}} method. For this, gradients from a sliding window with a fixed length $l_w$ and {stride} $ \nu $ are accumulated into $ \nabla_{\text{MW}}(\bm{x}_t) $. The optimal values of length and stride are specific to each sentence. Similar to the iterative FGSM based on momentum~\cite{dong2018boosting}, gradient normalization is applied in order to accumulate gradient directions: \begin{align} \label{eq:window-moving} \nabla_{\text{MW}}(\bm{x}_t) &= \sum_{i = 0}^{\lceil L/\nu \rceil} \left( \frac{\nabla_{\bm{x}_t} \sum\limits_{l = i\cdot\nu}^{i\cdot\nu + l_w} J(\bm{X}, y_l^*; \bm{\theta} ) } {||\nabla_{\bm{x}_t} \sum\limits_{l = i\cdot\nu}^{i\cdot\nu + l_w} J(\bm{X}, y_l^*; \bm{\theta} )||_1} \right),\\ \bm{\delta}_{\text{MW}}(\bm{x}_t) &= \epsilon\cdot \sgn( \nabla_{\text{MW}}(\bm{x}_t) ). \end{align} \subsection{AAEs from Connectionist Temporal Classification} From the regular CTC loss $\mathcal{L}_{\text{CTC}}$ over the full reconstructed label sequence $\bm{y}^*$, the adversarial noise is derived as \begin{align} \label{eq:ctc-fsgm} \bm{\delta}_{\text{CTC}}(\bm{x}_t) &= \epsilon\cdot \sgn(\nabla_{\bm{x}_t} \mathcal{L}_{\text{CTC}}(\bm{X}, \bm{y}^*; \bm{\theta} )).
\end{align} \subsection{Hybrid CTC/Attention Adversarial Examples} A multi-objective optimization function~\cite{lu2017multitask} is then applied to combine the CTC-derived and the attention-derived adversarial noise, where $\bm{\delta}_{\text{att}}$ is generated either from $\bm{\delta}_{\text{SW}}$ or from $\bm{\delta}_{\text{MW}}$, by introducing a weighting factor $\xi \in [0;1]$: \begin{align} \label{eq:hybrid-aae} \hat{\bm{x}}_t = \bm{x}_t + (1-\xi)\cdot\bm{\delta}_{\text{att}}(\bm{x}_t) + \xi\cdot\bm{\delta}_{\text{CTC}}(\bm{x}_t), \hspace{0.02\textwidth} \forall t\in[1,T]. \end{align} The full process to generate hybrid CTC/attention AAEs is shown in Fig.~\ref{fig:advexgeneration}. \begin{figure}[!htb] \centering \includegraphics[width=0.95\linewidth]{fig/AdvEx.eps} \caption{Generation of AAEs. The unmodified speech sentence $\bm{x}_{1:T}$ and the reference sentence $\bm{y}^*_{1:L}$ are given as input. Then, using the hybrid CTC/attention model, the adversarial gradients from the CTC layer as well as from the windowed attention sequence parts are calculated. These are then combined by the {weighting parameter $\xi$} and the noise factor $\epsilon$ to obtain the adversarial example. } \label{fig:advexgeneration} \end{figure} \subsection{Adversarial Training} Similar to data augmentation, Adversarial Training (AT) augments samples of a minibatch with adversarial noise $\bm{\delta}_{\text{AAE}}(\bm{x}_t)$. The samples for which we create the AAEs are chosen randomly with a probability of $p_a$, as proposed by Sun et al.~\cite{sun2019adversarial}. Because the perturbation is obtained by backpropagation in a single step, this method is also termed \textit{adversarial regularization}; it is applied not from the beginning of the neural network training but only after the $N$th epoch. Sun et al. additionally included a weighting factor $\alpha$ for distinct sequence components, which we omit, i.e., set to $1$; instead, the gradient is calculated for the minibatch as a whole. Furthermore, our AT algorithm randomly samples the perturbation step size $\epsilon$ to avoid overfitting, as originally described in~\cite{kurakin2016adversarial}. The expanded training objective for the {sequence-based training} is then written as \begin{equation}\label{eq:adv_training_seq2seq} \hat{J}(\bm{X}, y; \bm{\theta}) = \sum_i (J(\bm{X}, y_i; \bm{\theta}) + J(\hat{\bm{X}}, y_i; \bm{\theta})). \end{equation} \section{Evaluation} Throughout the experiments, the hybrid CTC/attention architecture is used with an LSTM encoder with projection neurons and a location-aware attention mechanism, classifying from log-mel filterbank features~\cite{watanabe2017hybrid}. As we evaluate model performance, and not human perception of AAEs, we limit our investigation to the feature space. Evaluation is done on the TEDlium v2~\cite{rousseau2014enhancing} speech recognition task, consisting of over $200$h of speech. The baseline model we compare our results to is provided by the ESPnet toolkit~\cite{watanabe2018espnet}. It has four encoder layers and one attention decoder layer, with $1024$ units per layer and $50$m parameters in total. For our experiments, we use a BLSTM architecture with two layers each in the encoder and the location-aware attention decoder; the number of units in the encoder, decoder, and attention layers was reduced to $512$~\cite{Chavez2020}. This results in a model that is only about one quarter the size of the baseline model, i.e., $14$m parameters.
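As a concrete illustration of how the hybrid AAEs of Eq.~\ref{eq:hybrid-aae} enter the adversarial training described in the previous section, the following Python sketch outlines the minibatch augmentation step used during these experiments. It assumes PyTorch-style feature tensors; the callables \texttt{grad\_att\_fn} and \texttt{grad\_ctc\_fn} are hypothetical placeholders for the input-gradient computations of Eqs.~\ref{eq:window-moving} and~\ref{eq:ctc-fsgm} and do not correspond to actual toolkit functions.
\begin{verbatim}
import random

def adversarial_augment(batch, grad_att_fn, grad_ctc_fn,
                        p_a=0.05, xi=0.5, eps_max=0.3):
    """Hybrid CTC/attention adversarial augmentation of one minibatch.

    `batch` is a list of (feats, ref_tokens) pairs; the two gradient
    callables stand in for the moving-window attention gradient and
    the CTC gradient and are assumptions, not library functions.
    Returns adversarial copies to be appended to the minibatch, so
    that clean and perturbed samples both contribute to the loss.
    """
    adversarial_samples = []
    for feats, ref in batch:
        if random.random() >= p_a:
            continue                        # most samples stay untouched
        eps = random.uniform(0.0, eps_max)  # sampled step size vs. overfitting
        g_att = grad_att_fn(feats, ref)     # gradient of windowed attention loss
        g_ctc = grad_ctc_fn(feats, ref)     # gradient of CTC loss
        delta = (1.0 - xi) * eps * g_att.sign() + xi * eps * g_ctc.sign()
        adversarial_samples.append(((feats + delta).detach(), ref))
    return adversarial_samples
\end{verbatim}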
For both models, $500$ unigram units serve as text tokens, extracted from the corpus with SentencePiece~\cite{kudo2018subword}. In all experiments, the reference token sequence $\bm{y}^*$ is decoded beforehand using only the attention mechanism, as this is faster than hybrid decoding, can be done without the RNNLM, and, by serving as reference in place of the ground truth, avoids label leaking. We set $\epsilon = 0.3$, as done for AAE generation on attention-based ASR networks~\cite{sun2019adversarial}. The dataset used in the evaluation, TEDlium v2, consists of recordings of presentations in front of an audience and is therefore already noisy and reverberant. To better evaluate the impact of the adversarial noise generated by our algorithms, two noise-free sample sentences are used for evaluation. Both sample sentences are created artificially using Text-to-Speech (TTS) toolkits so that they remain noise-free. \subsection{Generation of AAEs: Two Case Studies} The first noise-free sentence, \emph{Peter}, is generated with Tacotron 2~\cite{shen2018natural}, the TTS algorithm developed by Google, using its pre-trained model\footnote{https://google.github.io/tacotron/publications/tacotron2/index.html}, and reads ``\emph{Peter Piper picked a peck of pickled peppers. How many pickled peppers did Peter Piper pick?}'' The second sentence, \emph{Anie}, was generated with the ESPnet TTS\footnote{The \texttt{ljspeech.tacotron2.v2} model.} and reads ``\emph{Anie gave Christina a present and it was beautiful.}'' We first test the CTC-based algorithm. For \emph{Peter}, it outputs an AAE with a character error rate (CER) of $41.3\%$ w.r.t. the ground truth; for \emph{Anie}, the error rate is $36.4\%$. In our experiments with the static window algorithm, we observe that it intrinsically runs the risk of changing only local tokens. Take, for example, the sentence \emph{Anie} with the parameters $l_w = 3$ and $\gamma = 4$. This gives us the following segment, marked in bold font, of the previously decoded sequence $\bm{y}^*$: \vspace{0.1cm} \centerline{\emph{any gave ch{\textbf{ristin}}a a present and it was beautiful.}} \vspace{0.1cm} After we compute the AAE, the ASR system transcribes \vspace{0.1cm} \centerline{\emph{any game christian out priasant and it was beautiful}} \vspace{0.1cm} as the decoded sentence. We obtain a sequence that strongly resembles the original: most of the words remain intact, while some of them change slightly. Translated to CER and word error rate (WER) w.r.t. the original sequence, this corresponds to $50.0\%$ and $55.6\%$, respectively. We also test its hybrid version with $\xi = 0.5$, which is analogous to the decoding configuration of the baseline model; it outputs a sequence with a CER of $31.8\%$, lower than that of its non-hybrid version. The moving window method overcomes this locality problem, as it calculates a non-localized AAE. For example, a configuration with the parameters $\nu = 4 $ and $l_w = 2$ applied to \emph{Peter} generates the pattern \vspace{0.1cm} \centerline{\emph{\textbf{pe}ter pip\textbf{er p}icked a p\textbf{eck} of pickle\textbf{ pe}ppers.}} \centerline{\emph{\textbf{ many p}ickle pe\textbf{pp}ers did pe\textbf{ter p}iper pa\textbf{ck}}} \vspace{0.1cm} for which we obtain the decoded sentence \vspace{0.1cm} \centerline{\emph{huter reperber picked a pick of piggle pebpers.
}} \centerline{\emph{how many tickle taper state plea piper pick.}} \vspace{0.1cm} This transcribed sentence exhibits a CER of $54.3\%$ w.r.t. the ground truth. The same parameter configuration applied in the hybrid version with $\xi = 0.5$ achieves an error rate of $34.8\%$ CER. Throughout the experiments, higher error rates were observed for the moving window method than for the static window or CTC-based AAE generation. \subsection{Evaluation of Adversarial Training}\label{subsec:at} Throughout the experiments, we configured the {moving window method} with $\nu = 2$ and $l_w = 4 $ as fixed parameters, motivated by the observation that these parameters performed well on both sentences \emph{Peter} and \emph{Anie}. By inspection, this configuration is also suitable for sentences of the TEDlium v2 dataset; especially for its shorter utterances, a small window size and overlapping segments are effective. Each model is trained for $10$ epochs, of which $N=5$ epochs are done in a regular fashion on regular training data; afterwards, each regular minibatch is augmented with its adversarial counterpart with a probability of $p_a=0.05$. Adversarial training examples are either attention-only, i.e., $\xi=0$, or hybrid, i.e., $\xi=0.5$. Finally, the noise factor $\epsilon$ is sampled uniformly from the range $[0.0;0.3]$ to cover a wide range of possible perturbations. The trained models are compared with the baseline model as reported in~\cite{watanabe2018espnet}. We use the moving window method and its hybrid version in the AT algorithm, because we hypothesize that both can benefit the training process of the hybrid model. The RNNLM that we use is provided by the ESPnet toolkit~\cite{watanabe2018espnet}; it has $2$ layers with $650$ units each, and its weight in decoding was set to $\beta=1.0$ in all experiments. We did \emph{not} use data augmentation, such as SpecAugment, or language model rescoring; both are known to improve ASR results, but we omit them for better comparability of the effects of adversarial training. Results are collected by decoding four datasets: (1) the regular test set of the TEDlium v2 corpus; (2) AAEs generated from the test set with the {attention}-based {moving window} algorithm; (3) the test set augmented with regular white noise at 30 dB SNR; and (4) the test set with clearly noticeable 5 dB white noise. \begin{table}[tb!] \centering \setlength{\tabcolsep}{4pt} \caption{Decoding results for all models. The first value in each cell corresponds to the CER and the second to the WER. The parameter $\lambda$ determines the weight of the CTC model during decoding. Models adversarially trained with attention-only AAEs are marked with $\xi=0$; with hybrid AAEs, with $\xi=0.5$.} \label{tab:decode_results_at} \begin{tabular}{c c c c c c c c} & & & & \multicolumn{4}{c}{Dataset} \\ \cmidrule(lr){5-8} \textbf{CER/WER}& $\xi$ & $\lambda$ & LM & test & \begin{tabular}{@{}c@{}}test \\ AAE\end{tabular} & \begin{tabular}{@{}c@{}}noise \\ 30dB \end{tabular} & \begin{tabular}{@{}c@{}}noise \\ 5dB \end{tabular} \\ \cmidrule(lr){2-8} \multirow{3}{*}[-1pt]{baseline~\cite{watanabe2018espnet}} & - & 0.0 & \textbf{-} & 20.7/22.8 & 90.7/89.1 & 23.6/25.8 & 78.8/78.8 \\ & - & 0.5 & \textbf{-} & 15.7/18.6 & 86.1/89.9 & 18.1/21.3 & 66.1/68.3 \\ & -& 0.5 & \checkmark & 16.3/18.3 & \textbf{98.5/92.2} & 19.2/20.8 & 73.2/72.7\\ \midrule \multirow{3}{*}[-1pt]{\parbox{2.5cm}{adv.
trained with att.-only AAE}} & 0.0 & 0.0 & \textbf{-} & 17.7/19.6 & 63.6/63.3 & 21.0/22.8 & 74.7/74.4 \\ & 0.0 & 0.5 & \textbf{-} & 14.3/16.9 & \textbf{53.5/56.8} & 16.5/18.9 & 62.6/65.0 \\ & 0.0 & 0.5 & \checkmark & 15.1/16.9 & 60.3/58.3 & 17.5/18.9 & 69.0/68.0\\ \midrule \multirow{3}{*}[-1pt]{\parbox{2.5cm}{adv. trained with hybrid AAE}} & 0.5 & 0.0 & \textbf{-} & 17.9/19.8 & 65.2/65.0 & 20.4/22.3 & 74.9/75.0 \\ & 0.5 & 0.5 & \textbf{-} & \textbf{14.0/16.5} & \textbf{54.8/58.6} & \textbf{16.2/18.7} & \textbf{63.5/65.8} \\ & 0.5 & 0.5 & \checkmark & {14.8/16.6} & 61.8/59.9 & 17.0/18.5 & 70.0/69.2\\ \bottomrule \end{tabular} \end{table} \paragraph{General trend.} Some general trends of the evaluation manifest themselves in Tab.~\ref{tab:decode_results_at}. Comparing decoding performance on the regular test set with that on the AAE test set, all models perform worse on the latter. In other words, the moving window technique used for creating the AAEs performs well against different model configurations. Using a non-zero CTC weight $\lambda$ during decoding lowers error rates in general. The higher error rates in combination with an LM are explained by its relatively high weight $\beta=1.0$; rescoring would lead to improved performance, but the listed results are more comparable to each other when the weight is kept constant in all decoding runs. \paragraph{Successful AAEs.} Notably, the baseline model performed worst on the AAE test set, with almost $100\%$ error rates, even higher than when decoding noisy data. This manifests in wrong transcriptions of around the same length as the ground truth, with on average $90\%$ substitution errors but only $20\%$ insertion or deletion errors. Word loops or dropped sentence parts, two architectural behaviors that occur when the attention decoder loses its alignment, were observed only rarely. We report CERs as well as WERs, as a relative mismatch between the two indicates certain error patterns for CTC and attention decoding~\cite{kurzinger2019exploring}; however, the ratio of CER to WER of the transcribed sentences was observed to stay at roughly the same level in the experiments with the AAE test set. \paragraph{Adv. trained models are more robust.} Both models obtained from adversarial training perform better in general, and especially in the presence of adversarial noise, than the baseline model; the model trained with hybrid AAEs achieves a WER of $16.5\%$ on the test set, an improvement of $1.8\%$ absolute over the baseline model. At the same time, the robustness to regular noise and especially to adversarial noise was improved; for the latter, we observe an improvement of $24$--$33\%$ absolute WER. The most notable difference is in decoding in combination with CTC and the LM, where the regularly trained model had a WER of $92.2\%$, while the corresponding adversarially trained model had roughly $60\%$ WER. The att.-only adversarially trained model with $\xi=0$ seems to be slightly more robust. On the one hand, that might be a side effect of the test AAEs being generated in an attention-only manner; on the other hand, this model also performed slightly better on regular noise. \section{Conclusion} In this work, we demonstrated audio adversarial examples against hybrid CTC/attention speech recognition networks. The first method we introduced selects a \emph{static window} over a segment of the attention-decoded output sequence to calculate the adversarial example. This method was then extended to a \emph{moving window} that slides over the output sequence to better distribute perturbations over the transcription.
In a third step, we applied the fast gradient sign method to the CTC loss of the network. AAEs constructed with these methods induced word error rates of up to $90\%$ on a regularly trained speech recognition model. We then employed such AAEs for adversarial training of a hybrid CTC/attention ASR network. This process improved its robustness against audio adversarial examples, reducing the word error rate on AAEs to around $55\%$, and also slightly improved robustness against regular white noise. Most notably, the speech recognition performance on regular data improved by $1.8\%$ absolute, from the baseline result of $18.3\%$ to $16.5\%$ WER. \bibliographystyle{splncs04}
2024-02-18T23:39:40.833Z
2020-07-22T02:14:15.000Z
algebraic_stack_train_0000
48
3,987