robench-2024b Collection (48 items)
| context | A | B | C | D | label |
|---|---|---|---|---|---|
| stringlengths 119–2.44k | stringlengths 103–2.34k | stringlengths 103–2.9k | stringlengths 100–2.02k | stringlengths 113–2.01k | stringclasses 4 values |
$\lambda_{e}^{+}=\lambda_{e}^{++}$, $|\lambda_{c}^{-}|=\lambda_{e}^{+}$, $|\lambda_{c}^{-}|=\lambda_{e}^{++}$, SN
|
In 1D, with $\sigma=3.2$ and $\zeta<\frac{\sigma}{2}=1.6$, we are able to
|
$=\gamma w-c(1-\rho-(\sigma+\zeta)a+\zeta b).$
|
ODEs (3), in $(\gamma,\zeta)$ parameter space, with $\sigma=3.2$.
|
and red curve ($4\zeta=\gamma^{2}$) are tangent at $(\gamma,\zeta)=(\sqrt{2\sigma},\sigma/2)$ and divide the parameter space into four
|
C
|
For each network architecture tested in this study, the same procedure is used: the model is trained on the training set for 15 epochs, with an evaluation on the validation set after each epoch. Whenever the validation accuracy improves, the weights are saved, so that the best model is kept; this best model is then evaluated on the test set. For some models, I used the KerasTuner framework with the Hyperband algorithm to optimize some hyperparameters [17].
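The keep-the-best checkpointing described above can be sketched framework-agnostically; `train_one_epoch` and `evaluate` are hypothetical stand-ins for the actual Keras fit/evaluate calls, not the study's code:

```python
def train_and_keep_best(train_one_epoch, evaluate, epochs=15):
    """Train for a fixed number of epochs, evaluate on the validation set
    after each epoch, and keep the weights of the best-performing epoch.

    train_one_epoch() -> weights; evaluate(weights) -> validation accuracy.
    """
    best_acc, best_weights = -1.0, None
    for _ in range(epochs):
        weights = train_one_epoch()
        acc = evaluate(weights)
        if acc > best_acc:            # checkpoint only on improvement
            best_acc, best_weights = acc, weights
    return best_weights, best_acc
```

In Keras the same behavior comes from a `ModelCheckpoint(save_best_only=True)` callback monitoring validation accuracy.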
|
Twelve different CNN models were used in this study, with different levels of depth and number of parameters (Table 1). The models B2 and B6 were coded from scratch and have a relatively simple architecture. The B2 model has only two convolutional layers while the B6 model is slightly deeper and more complex with six convolutional layers. The architecture for the B6 model was inspired by the Kaggle models of F. Marazzi (https://bit.ly/35wINGv) and H. Mello (https://bit.ly/3xwI6cl). Note that the B6 model includes batch normalization [18] and dropout steps to improve performance.
|
All the other models are CNN models that are part of the TensorFlow Keras library (Table 1). They were developed and tested by several research groups on the ImageNet Challenge, a competition with hundreds of object categories and millions of images [25]. For instance, InceptionV3 is a model created in 2015 with a very deep architecture (94 convolutional layers) that performs very well on various computer vision tasks [19]. As for most of the models available in the Keras library, it is possible to load the model pre-weighted with ImageNet training weights, thus enabling transfer learning (TL). TL is a popular approach in deep learning where pre-trained models are used as the starting point for computer vision and natural language processing tasks in order to save computing and time resources. In this study I have used models both pre-trained with ImageNet weights and fully re-trained with the two datasets. For the pre-trained version, only the last layers of the model are re-trained with the dataset (global average pooling layer, dense layer and final output). Of course, given that the number of training parameters is much greater in the case of the fully re-trained model, the computation time needed for training is also expected to be much longer. Table 1 details the architecture and parameters for each of the models used in this study. Note that some CNN models could not be used with the IDC dataset because its images are smaller than the minimum size required by these models.
|
The computational running time was analysed for the B2, B6 and the more complex InceptionV3 (IV3) models, both fully re-trained (F) and with transfer learning (TL), on the PCAM dataset. The results are shown in Table 2; note that the time corresponds to the average time observed for one epoch. We can compare the effects of model architecture and of hardware GPU acceleration. As expected, the running time increases with the complexity and depth of the model. The IV3-F model takes 4 to 10 times longer to train than the simple two-convolutional-layer B2 model, depending on the GPU card used, and the B6 model takes 1.7 to 2 times longer than the B2 model. With the InceptionV3 model, using transfer learning obviously saves a lot of training time, as a full model training takes about 3 times longer on all GPU models. In fact, even though the IV3-TL model (transfer learning) is much more complex, its running time is comparable to the B2 and B6 models. Regarding the different GPU cards tested here, more recent and powerful cards decrease the computing time quite drastically, with an acceleration factor between 5 and 12 for the most recent architecture tested here (A100) on all the CNN models compared to the oldest one tested (K80). It is worth noting that the deepest model tested here can be fully trained in about one hour with a V100 or A100 GPU card.
|
The performance of all the models on the PCAM and IDC datasets is described in Tables 3 and 4. All the indicators are measured on the test sets. Most of the models show a very good performance, with AUC scores around 0.90 or above. However, when we look at the details, there are clear differences. For instance, the AUC of the simple two-layer B2 model is 0.85, increasing to 0.91 with the slightly more complex B6 model. If we take the best performing models in terms of AUC score, for the PCAM dataset we have VGG19 (0.95), followed by MobileNet (MN, 0.93) and EfficientNet V2 B2 (ENB2, 0.93). For the IDC dataset, the best performing models are again VGG19 (0.95), together with MobileNet (MN, 0.95), followed by B6 (0.94), DenseNet 121 (DN121, 0.94) and EfficientNet V2 B2 (ENB2, 0.94). Looking at the AUC values versus depth and number of parameters of the models for the PCAM dataset (Figure 3), we can see that the VGG19 model has a high AUC value but a low depth, while the ENB0 model has a very high depth but a very low AUC. For the number of parameters, VGG19 has a very high number of parameters and the best AUC, but models with far fewer parameters, such as MN or ENB2, also have high AUC values, in fact very close to the value for VGG19. Taken together, these results show that the more complex models perform better than very simple models (such as B2), but the relationship is not entirely straightforward.
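For reference, the AUC metric reported in these tables can be computed from test-set scores via the Mann-Whitney formulation; this small stand-in (not the study's evaluation code, which presumably uses a library routine) illustrates what the number measures:

```python
def auc_score(labels, scores):
    """ROC AUC as the probability that a random positive is scored above a
    random negative (Mann-Whitney U statistic; ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```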
|
A
|
With this knowledge, let us construct the jump chain step by step. The first two jumps are determined easily, noting that
|
$\mathbf{v}^{(3)}=\{8\}.$
|
$\mathbf{v}^{(0)}=\{0\},$
|
$\mathbf{v}^{(0)}$
|
$\mathbf{v}^{(2)}=\{5\},$
|
C
|
$\left(S_{0},I_{0},R_{0}\right)=\left(\frac{\theta}{\eta},0,0\right).$
|
Further, we compute the reproduction number of model (2). Let $y=(I,S)$ and rewrite model (2) for the susceptible and infected classes in the general form
|
The subsequent sections of the paper unfold as follows. Section 2 (Model formulation): we detail the formulation of the model, providing a comprehensive overview of its deterministic aspects. Section 3 (Dynamics of the deterministic model): we discuss the reproduction number and the stability of the system. Section 4 (Formulation and description of the stochastic COVID-19 model): we explore the dynamic properties and behaviors inherent in the stochastic aspects of the model. Section 5 (Numerical experiments): we present the numerical solutions derived for the proposed model; through numerical simulations, we offer insights into its practical implications and outcomes. Section 6 (Conclusion): we present the concluding remarks of the paper, summarizing key results, implications, and potential avenues for future research.
|
By selecting a twice-differentiable function $Y_{t}$ and applying Itô's formula to it, we can derive the stochastic basic reproduction number. Let us set $Y_{t}=\ln I_{t}$, using the natural logarithm of the infectious class. This choice allows us to express the dynamics of the infectious class in a form that facilitates the application of stochastic calculus, leading to the characterization of the stochastic basic reproduction number.
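For intuition, the standard Gaussian special case shows how this choice works; the model's noise is non-Gaussian, so the following is only an illustrative sketch with an assumed linear SDE, not the paper's actual computation:

```latex
% Illustrative only: assume the infectious class follows
%   dI_t = I_t (\mu \, dt + \sigma \, dB_t), with Brownian motion B_t.
% Ito's formula for f(x) = \ln x, with f'(x) = 1/x and f''(x) = -1/x^2, gives
dY_t = \frac{1}{I_t}\,dI_t - \frac{1}{2I_t^{2}}\,(dI_t)^{2}
     = \Bigl(\mu - \frac{\sigma^{2}}{2}\Bigr)\,dt + \sigma\,dB_t .
% The sign of the drift \mu - \sigma^2/2 then separates extinction from
% persistence, which is how a stochastic reproduction number is extracted.
```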
|
Numerical solutions of systems are invaluable in the study of epidemic models. This section presents the numerical results of our model, shedding light on how the parameters of the deterministic model (2) and the intensity of non-Gaussian noise in the stochastic model (4) impact the dynamics. We conduct numerical experiments to illustrate the extinction and persistence of the novel coronavirus, COVID-19, in both the deterministic model and its corresponding stochastic system for comparison.
|
A
|
The predominant method for computing time-varying correlation in time series data, particularly in neuroimaging studies, involves Sliding Windows (SW). This technique entails computing correlations between brain regions across various time windows (Allen et al., 2014; Hutchison et al., 2013; Shakil et al., 2016; Mokhtari et al., 2019; Huang et al., 2020). However, the use of discrete windows in SW can lead to artificially high-frequency fluctuations in dynamic correlations (Oppenheim et al., 1999). While tapering methods can occasionally mitigate these effects (Allen et al., 2014), the correlation computations within these windows remain susceptible to the influence of outliers (Devlin et al., 1975).
|
The derivation follows by simply replacing all the terms with the WFS representation. Correlation (20) is the formula we used to compute the dynamic correlation in this study. Figure 7 displays the WFS-based dynamic correlation for different bandwidths. A similar weighted correlation was proposed in Pozzi et al. (2012), where time-varying exponential weights proportional to $e^{t/\theta}$, with exponential decay factor $\theta$, were used. However, our exponential weight term is related to the spectral decomposition of the heat kernel in the spectral domain and is invariant over time. The WFS-based correlation is thus not related to Pozzi et al. (2012).
|
To circumvent these limitations, we employed the Weighted Fourier Series (WFS) representation (Chung et al., 2007, 2008). This approach extends the traditional cosine Fourier transform by incorporating an additional exponential weight. This modification effectively smooths out high-frequency noise and diminishes the Gibbs phenomenon (Chung et al., 2007; Huang et al., 2019a). Crucially, WFS eliminates the need for sliding windows (SW) when computing time-correlated data. Given the necessity for robust signal denoising methods to ensure the efficacy of the persistent homology method across various subjects and time points, such an approach is needed. Consider an arbitrary noisy signal $f(t)$, $t\in[0,1]$, which will undergo denoising through the diffusion process.
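A minimal sketch of the WFS idea, assuming a cosine basis on $[0,1]$ and heat-kernel weights $e^{-k^{2}\pi^{2}\sigma}$; the discretization details below are illustrative, not the authors' implementation:

```python
import math

def wfs_smooth(signal, sigma, n_terms=None):
    """Weighted Fourier Series smoothing: expand the signal in a cosine basis
    on [0, 1] and damp the k-th coefficient by exp(-k^2 pi^2 sigma), so that
    larger sigma (bandwidth) suppresses high-frequency noise more strongly."""
    n = len(signal)
    k_max = n_terms if n_terms is not None else n
    t = [(i + 0.5) / n for i in range(n)]          # midpoint sample grid
    out = [0.0] * n
    for k in range(k_max):
        basis = [math.cos(k * math.pi * ti) for ti in t]
        norm = sum(b * b for b in basis)
        ck = sum(s * b for s, b in zip(signal, basis)) / norm
        weight = math.exp(-(k * math.pi) ** 2 * sigma)
        for i in range(n):
            out[i] += weight * ck * basis[i]
    return out
```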
|
However, such an average is an average of connectivity strength, and such an approach is usually sensitive to topological outliers (Chung et al., 2019a). We address the problem through the Wasserstein distance. A similar concept was proposed in the persistent homology literature through the Wasserstein barycenter (Agueh and Carlier, 2011; Cuturi and Doucet, 2014), which is motivated by the Fréchet mean (Le and Kume, 2000; Turner et al., 2014; Zemel and Panaretos, 2019; Dubey and Müller, 2019). However, the method has not seen many applications in modeling graphs and networks.
|
The predominant method for computing time-varying correlation in time series data, particularly in neuroimaging studies, involves Sliding Windows (SW). This technique entails computing correlations between brain regions across various time windows (Allen et al., 2014; Hutchison et al., 2013; Shakil et al., 2016; Mokhtari et al., 2019; Huang et al., 2020). However, the use of discrete windows in SW can lead to artificially high-frequency fluctuations in dynamic correlations (Oppenheim et al., 1999). While tapering methods can occasionally mitigate these effects (Allen et al., 2014), the correlation computations within these windows remain susceptible to the influence of outliers (Devlin et al., 1975).
|
B
|
The Stupp protocol has become the standard of care for the treatment of gliomas. It consists of radiotherapy and concomitant chemotherapy with temozolomide. Precisely, radiotherapy (RT) is provided at the standard dose of 60 Gy delivered in 30 daily fractions of 2 Gy (Monday to Friday) over 6 weeks. RT planning is based on the definition of specific tumor volumes, which allow assessment of the margins of the gross tumor volume and of the area to treat; these can be identified using advanced imaging techniques. The administration of temozolomide changes from 7 days per week at 75 mg per square meter of body-surface area per day during radiotherapy, to six cycles of 150-200 mg per square meter for 5 days during each 28-day cycle after the radiotherapy. This second part of the chemotherapy treatment is also named adjuvant chemotherapy. Together with these standard approaches, new treatments are emerging to target molecules involved in various tumor mechanisms. Among them, angiogenesis inhibitors have been tested as anti-migratory agents for glioblastoma (GB). For instance, bevacizumab, a neutralizing monoclonal antibody against vascular endothelial growth factors (VEGFs), demonstrated a promising response in patients with recurrent malignant gliomas in combination with other drugs. However, the role and efficacy of bevacizumab are still debated, especially when combined with the classical treatment approaches [42].
|
Firstly, we observe the tumor dynamics (first column of Figure 12), which is characterized by a strong reduction of the tumor density in the area originally occupied by the tumor mass. This shows the efficacy of the combined treatment in reducing the overall tumor volume. Moreover, we notice that, even after the resting period of 4 weeks, no evident regrowth of the tumor mass emerges. Concerning VEGFs and ECs, we observe that the production of VEGFs by tumor cells and their accumulation in the region originally affected by the tumor determines a correspondingly increasing concentration of ECs in the same area. In turn, the increasing density of ECs, together with the reduction of tumor density, leads to a noticeable decrease of VEGF concentration, as shown in the third column of Figure 12. In particular, the EC density (especially after three weeks of therapy) shows a certain level of heterogeneity, which may be due to the random distribution we chose as the initial condition for this population and which determines a corresponding heterogeneity in the VEGF concentration (first plot in the third column of Figure 12). An outer rim of VEGFs seems to remain after the 10 weeks; it may be due to a corresponding low-density outer rim of tumor cells surviving the treatment. Looking at the evolution of necrotic matter and healthy tissue, we notice the effects of having assumed a space-dependent radiotherapy dose distribution. In fact, the degradation of the healthy tissue is not homogeneous: the further the region from the tumor area, the smaller the effect of the radiation on healthy tissue, and the most distant regions (e.g. those in the right hemisphere) do not seem to be affected by the therapy. The effects of both heterogeneous tissue degradation and tumor cell death are, then, collected in the necrotic compartment.
|
The model proposed in this note is a therapy-oriented development of that considered in [7] and aligns to the approaches in [6, 8, 9, 13, 14, 15, 26]. Unlike [7, 15] we do not differentiate here between moving and proliferating cancer cells, but account for a single population of cells forming the tumor. In fact, the analysis in [7] showed (at least in our framework) that the main effect of splitting the tumor population into moving and resting (and proliferating) cells was the slower evolution of tumor mass, but qualitatively it behaved similarly to the situation with one population of moving and proliferating cancer cells. Therefore, in this note we do not differentiate between the two cell phenotypes and focus instead on therapy effects. We consider tumor cells interacting with their environment, whose main physical components are the anisotropic brain tissue as well as brain-own and tumor-generated vasculature. We also modeled explicitly the evolution of VEGF controlling angiogenesis and the necrotic component of the neoplasm, which is relevant for assessing the tumor stage and for the segmentation needed for treatment planning. The model investigates the impact of including microscopic dynamics also in the equation for endothelial cells, which has not been done before, comparing our approach with other two possible descriptions of EC dynamics. Then, we analyze via simulations various therapy approaches and their effects on tumor development and on normal tissue, thereby following treatment schedules addressed in clinical studies available in the literature. 
Although this has been done in [26] in a rather rudimentary way, we do not include tumor resection as part of the therapy; the problems have already been stated in that reference: it is hardly possible to provide a proper mathematical characterization of the tissue and neoplasm dynamics in the resected region in a continuous manner, and the solution exhibits sharp discontinuities, which are difficult to handle. Considering the real data employed in Subsection 4.3, the simulations showed qualitatively reasonable results; however, a quantitative assessment could not be performed due to missing data, as that patient's therapy was stopped.
|
From the mathematical viewpoint, several classes of models have been proposed during the last two decades with the aim of using modern biomedical visualization techniques to help in predicting the tumor volumes (CTV=clinical target volume, PTV=planning target volume) for therapy planning. Some of those models [27, 51] give purely macroscopic descriptions of glioma evolution in interaction with the tumor environment, others, more recent, have a multiscale character and obtain effective equations on the macroscale of the whole tumor from settings describing microscopic and/or mesoscale dynamics [6, 7, 8, 9, 12, 13, 14, 15, 26, 31, 39, 50, 55]. Among these models, [7, 8, 51] suggest ways to assess tumor growth and grading. Settings taking into account the interplay of glioma with vascularization and other components (e.g., acidity) of the peritumoral space have been proposed within both model classes [7, 12, 34, 51]. Models that pay attention to various therapy approaches, in a more or less detailed way, are, instead, relatively scarce in this context, see e.g. [25, 26, 43]. Therefore, there is still a need for mathematical formulations aiming at describing and comparing personalized treatment schedules, which involve state-of-the-art methods, with the purpose of identifying the most effective ones. This paper aligns with this aim.
|
In [31] we commented about the feasibility of using such multiscale models to predict tumor spread and establish CTV and PTV margins for treatment planning. Those observations still apply here - the main issue remains the relatively large number of parameters and therewith related uncertainties. However, the increasingly fast development of biomedical imaging, computing power, and technology for necessary cell biology experiments will provide a means to assess at least some of the missing quantitative information. On the other hand, such multiscale models seem to offer an adequate frame for studying the effects of various therapy ansatzes. In fact, chemo- and radiotherapy primarily act on the level of single cells and ultimately lead to the observed effects on the whole tumor and this mathematical approach allows us to account for dynamics on both levels in a reasonably detailed manner. Our numerical experiments also suggest that letting the clinical studies have a longer follow-up might provide useful information about the tumor behavior after ceasing the actual therapy. That would presumably lead to higher costs, but these might be justified by the achieved understanding. Mathematical models could help in identifying the adequate duration of such studies. Moreover, they have the potential to investigate a great variety of therapeutic scenarios (of which we showed here just a few examples) in an unprecedented complexity and accuracy - provided the necessary quantitative information becomes available. Intra- and interdisciplinary studies including such models are called upon to shed light on the intricate biological processes associated with tumor growth, expansion, and treatment response.
|
C
|
The brains of manatees and dugongs exhibit unique structural characteristics, as demonstrated in Fig. 5. For these species, we make similar measurements to facilitate comparative analyses.
|
More complex cases are illustrated in Fig. 6. For example, the American beaver’s cortex features shallow dimples [1, 20]. To manage this intricacy, one could extend the 2D method, as illustrated in Fig. 2, to a 3D measurement framework if a 3D brain image is available. The brain of the western grey kangaroo presents comparable challenges due to its sparse and irregular sulci, rendering the methods demonstrated in Fig. 1 and Fig. 2 ineffective.
|
Challenging cases: the brains of the American beaver, which has cortical dimples, and the western grey kangaroo, which features irregular sulci [1]. These examples pose challenges for measuring equivalent gyral sizes. In the case of the American beaver, it remains ambiguous whether a cortical dimple can be classified as a sulcus. For the western grey kangaroo, the gyral sizes are not uniform, rendering the method outlined in Fig. 1 unsuitable. Furthermore, the sparse number of sulci compromises the accuracy of the approach delineated in Fig. 2. Extending the methodology of Fig. 2 to a 3-dimensional framework could offer a solution to these challenges. Additional examples of such challenging cases are compiled in Table 6 for brains with dimples and in Table 7 for those with irregular sulci.
|
Extending our analysis beyond gyrencephalic brains, Fig. 12 indicates that we can categorize mammalian brains into three primary classifications: lissencephalic, quasi-gyrencephalic, and gyrencephalic. However, this classification scheme does not accommodate the challenging cases illustrated in Fig. 6. For example, the American beaver and the western grey kangaroo serve as illustrative cases that blur the lines between these three categories. It may be necessary to introduce two additional categories in between, represented by the samples listed in Table 6 and Table 7, respectively.
|
Quasi-gyrencephalic brains with underdeveloped sulci. Their drawn sizes are measured as illustrated in Fig. 4 and Fig. 5 and are plotted in Fig. 12.
|
A
|
Conventional train-test data splits may fall short of an ideal scenario, merely ensuring the exclusion of triplets (drug A, drug B, and cell line) observed in the training set from the test set. However, they do not guarantee the absence of certain drugs or cell lines in the training set. To address this limitation, we expanded our model training to incorporate various data split strategies. Specifically, we excluded all triplets containing certain cell lines or drugs from the training set. This meticulous approach ensures that the training data is devoid of any triplets involving the cell lines/drugs on which the model is being tested. As Table 5 shows, our model demonstrates its ability to predict on cell lines or drugs that are fully unseen during the training process. This indicates the robustness of our model in handling scenarios with a more controlled and stringent data split, providing a more reliable evaluation of its predictive capabilities.
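The stricter split can be sketched as follows; the function name and tuple layout are illustrative, not the actual pipeline code:

```python
def leave_out_split(triplets, held_out_drugs=(), held_out_cells=()):
    """Split (drug_a, drug_b, cell_line) triplets so that the training set
    contains no triplet touching a held-out drug or cell line; every such
    triplet goes to the test set instead."""
    drugs, cells = set(held_out_drugs), set(held_out_cells)
    train, test = [], []
    for a, b, c in triplets:
        if a in drugs or b in drugs or c in cells:
            test.append((a, b, c))
        else:
            train.append((a, b, c))
    return train, test
```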
|
Conventional train-test data splits may fall short of an ideal scenario, merely ensuring the exclusion of triplets (drug A, drug B, and cell line) observed in the training set from the test set. However, they do not guarantee the absence of certain drugs or cell lines in the training set. To address this limitation, we expanded our model training to incorporate various data split strategies. Specifically, we excluded all triplets containing certain cell lines or drugs from the training set. This meticulous approach ensures that the training data is devoid of any triplets involving the cell lines/drugs on which the model is being tested. As Table 5 shows, our model demonstrates its ability to predict on cell lines or drugs that are fully unseen during the training process. This indicates the robustness of our model in handling scenarios with a more controlled and stringent data split, providing a more reliable evaluation of its predictive capabilities.
|
Table 5: Performance evaluation of our DDoS model across various train-test split strategies: the "Cell Lines" row represents the model's performance when specific cell lines are excluded from the training set and used for testing. Similarly, the "Drug 1" row signifies the model's evaluation when certain drugs from the Drug 1 set are omitted during training.
|
In our study, we also conducted a calibration analysis to compare the predicted probabilities of synergy against the observed frequencies of actual synergistic outcomes. Figure 2 showcases the calibration curve derived from the results of our trained model, specifically for the ZIP synergy score. This model was trained using a leave-some-drugs-out approach, as detailed in Table 5. The figure illustrates five distinct calibration plots, each corresponding to one of the five models obtained through 5-fold cross-validation. These plots serve to visually assess the accuracy of our probabilistic predictions in relation to the true synergistic interactions observed. Please note that this analysis corresponds to the model that exhibited relatively lower performance, as indicated in Table 5. This outcome is primarily attributed to our “leave-drugs-out” training and testing strategy, wherein certain drugs were omitted from the training set, allowing the model to be evaluated on previously unseen drugs. Consequently, for the other models, which are detailed in Table 3, we anticipate a more robust calibration performance, as they were not subjected to this specific train-test data split approach.
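A reliability curve of this kind can be produced by binning predicted probabilities and comparing each bin's mean prediction with the observed positive rate; the equal-width binning below is an illustrative choice, not necessarily the one used for Figure 2:

```python
def calibration_points(y_true, y_prob, n_bins=5):
    """Return (mean predicted probability, observed positive fraction) per
    non-empty equal-width probability bin; a perfectly calibrated model
    yields points on the diagonal."""
    bins = [[] for _ in range(n_bins)]
    for y, p in zip(y_true, y_prob):
        i = min(int(p * n_bins), n_bins - 1)   # clamp p == 1.0 into last bin
        bins[i].append((y, p))
    points = []
    for b in bins:
        if b:
            mean_pred = sum(p for _, p in b) / len(b)
            frac_pos = sum(y for y, _ in b) / len(b)
            points.append((mean_pred, frac_pos))
    return points
```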
|
In the model training process, we adopted a Stratified 5-Folds cross-validation strategy. This method ensures that the test split maintains a balanced representation of samples for each class, preserving the proportionality of class distributions in each train-test split. Additionally, 10% of the training partition of each fold was reserved for validation and hyperparameter tuning. We initially selected random hyperparameter values for the training of each model on a random fold (out of 5 folds). Subsequently, we repeated the training of each model on all 5 folds based on the best-performing hyperparameters of the initial random fold. Finally, the trained models (on all 5 folds) were tested on the corresponding test split.
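In miniature, stratified fold assignment preserves per-class proportions like this (a toy round-robin sketch; in practice a library routine such as scikit-learn's `StratifiedKFold` would be used):

```python
from collections import defaultdict

def stratified_folds(labels, k=5):
    """Assign sample indices to k folds so that each class is spread evenly
    across the folds, preserving class proportions in every fold."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, idx in enumerate(indices):
            folds[j % k].append(idx)   # round-robin within each class
    return folds
```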
|
B
|
Fig. S9 also functions as a sensitivity analysis of our results with respect to the technical characterization of $U$ (resp. $\gamma$): while decreasing $U$ (resp. $\gamma$) decreases the mixing, so that microphytoplankton could in fact be slightly more aggregated, the dominance index never gets above 0.7 at the interaction radius threshold—the results are not modified substantially. However, the combination of such a lower $U$ and a slightly lower interaction threshold (see Discussion) may create some intraspecific dominance in microphytoplankton too.
|
One of the reasons why estimating $K(r)$, and even more so $g(r)$,
|
as $g(r)=\frac{K^{\prime}(r)}{4\pi r^{2}}$.
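Numerically, $g(r)$ can be recovered from sampled values of $K(r)$ by central differences, as a sketch of the stated relation (3D normalization assumed):

```python
import math

def pair_correlation(r_values, k_values):
    """Estimate g(r) = K'(r) / (4 pi r^2) at interior grid points from
    sampled Ripley's K, using central finite differences."""
    g = []
    for i in range(1, len(r_values) - 1):
        dk = (k_values[i + 1] - k_values[i - 1]) / (r_values[i + 1] - r_values[i - 1])
        g.append(dk / (4 * math.pi * r_values[i] ** 2))
    return g
```

For a 3D Poisson process, $K(r)=\frac{4}{3}\pi r^{3}$, so the estimate should be close to 1.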
|
$K(r)$. Using its marked version, $C_{j}K_{ij}(r)$ is the average
|
$=4\pi\left(2Dr^{2}\frac{\partial G}{\partial r}+\gamma r^{4}\frac{\partial G}{\partial r}\right)+2\lambda C$
|
A
|
When these images are displayed, pixel values undergo gamma correction to recover the original statistics for human eyes to process.
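As a sketch, recovering the display-side statistics amounts to raising the normalized pixel value to the display exponent; the value 2.2 below is the common sRGB-like default, assumed here purely for illustration:

```python
def gamma_decode(pixel, gamma=2.2):
    """Invert gamma encoding for a normalized pixel value in [0, 1]:
    stored value v = x**(1/gamma), so the display applies v**gamma."""
    return pixel ** gamma
```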
|
What adds to the confusion is the fact that for the widely used ICA, the two objectives have indeed been proven to coincide [18].
|
All nodes have similar activation probabilities, indicating an even distribution at the coarse scale across all nodes.
|
after the model has been trained, the vast majority of the output values are either at 0 or at 1, signifying that our model encoded the images using a binary representation.
|
In the following sections, we will assume that all pixel values $x$ have already been processed by dedicated IPUs,
|
D
|
We cannot apply Fisher's fundamental theorem of natural selection to the evolution of novel traits in polyploid organisms.
|
B and C) Dependence of (B) the variance of phenotype and (C) the evolutionary rate on the number of chromosomes.
|
III.3 Evolutionary innovation is described by large deviation theory and depends on the third-order moment of chromosomes
|
Conversely, if the mutation rate is excessively large, then the evolution of novel traits is no longer a rare event, and thus, the large deviation theory cannot be applied.
|
Note that the kurtosis (the normalized fourth-order moment) did not exhibit a non-monotonic dependence on the number of chromosomes, and it was almost independent of the mutation rate (Supplementary Fig. 2).
|
B
|
Figures 1 and 2 illustrate the Hodge decompositions of a complete graph and a non-complete graph, respectively. The MATLAB code for performing Hodge decomposition in the least squares fashion is available at
|
Fig. 1: Illustration of the Hodge decomposition, which decomposes the edge flow into non-loop and loop flows. These networks are then separately subjected to birth-death decomposition to obtain the topological features.
|
The proposed $\infty$-Wasserstein distance-based test statistic exhibits robust performance on both the loop and non-loop flows. The $\mathfrak{L}_{\infty}$ distance effectively discriminated networks in both the non-loop (gradient) and loop (curl) components when network differences were present (as shown in the top rows of the table). In scenarios with no network differences (bottom rows of the table), both the loop and non-loop flows yielded satisfactory results. This underscores that the modularity in the network is aptly captured by both the non-loop and loop components of the Hodge decomposition, and that our $\infty$-Wasserstein distance is capable of discerning variations in modularity.
|
partition graphs into topologically distinct subgraphs [14, 15]. We first apply graph filtration, a technique involving the sequential removal of edges from a graph $G$, starting with the smallest edge weight and progressing to the largest [6, 8]. We identify the birth set $B(G)$, associated with the emergence of connected components, by computing the maximum spanning tree (MST) of $G$ using Kruskal’s or Prim’s algorithm [6]. The death set $D(G)$ then consists of the edges not present in $B(G)$ (Figure 1), which are the death values of cycles (loops) during the filtration. We perform BDD independently on both non-loop and loop flows, allowing us to characterize the topology of each component of the Hodge decomposition. To measure the topological disparities between components, we use the Wasserstein distance applied to their respective BDDs. The Wasserstein distance provides optimal matchings that are stable to infinitesimal noise and therefore robust [16, 15].
|
PH quantifies multiscale topological features of data through a filtration process [8]. Hodge theory provides a unified framework combining simplicial homology and spectral geometry, offering insights into network topology [9, 10, 11]. While the Hodge Laplacian, a generalization of the graph Laplacian, offers insights into the topological features of higher-order simplices, the Hodge decomposition allows one to establish relationships between simplices of different dimensions [10]. Hodge decomposition breaks data defined on edges (edge flow) into three orthogonal components: gradient, curl, and harmonic flows, each providing unique topological insights. The gradient flow, driven by node gradients, represents the network’s gradient-like behavior. The curl flow, arising from triangle-induced flows, captures rotational patterns, while the harmonic flow exposes loop structures and topological signatures [10]. Using a Wasserstein distance-based statistical approach on each component, this study assesses the topological similarities and differences between loop and non-loop flows. Further, leveraging the properties of the decomposed networks, the study seeks to elucidate the most discriminating topological disparities in female and male functional brain networks.
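The gradient/loop split can be computed in the least-squares fashion the excerpt mentions: the gradient component is the projection of the edge flow onto the image of the node-to-edge incidence matrix, and the orthogonal residual is the loop (curl + harmonic) flow. The sketch below is an independent Python stand-in for the authors' MATLAB code, under that standard formulation.

```python
import numpy as np

def hodge_split(n_nodes, edges, flow):
    """Split an edge flow into gradient (non-loop) and loop parts."""
    f = np.asarray(flow, dtype=float)
    B = np.zeros((len(edges), n_nodes))        # edge-by-node incidence matrix
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = -1.0, 1.0           # oriented edge i -> j
    # Least-squares node potential s; B @ s is the gradient component.
    s, *_ = np.linalg.lstsq(B, f, rcond=None)
    grad = B @ s
    loop = f - grad                            # orthogonal curl + harmonic part
    return grad, loop

edges = [(0, 1), (1, 2), (2, 0)]               # oriented 3-cycle
grad, loop = hodge_split(3, edges, [1.0, 1.0, 1.0])
# A purely circulating flow has no gradient part: grad ~ 0, loop ~ [1, 1, 1]
```

Separating triangle-induced curl from harmonic flow would additionally require the curl operator on filled triangles; the two are lumped together here.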
|
A
|
$\exp\left\{-\eta\gamma(1+\epsilon)\frac{\sum_{i=1}^{N}S_{i-1}i^{-\gamma/\alpha}}{u_{N}}\right\}\leq\sup_{N\in\mathbb{N}}e^{-\eta\gamma(1+\epsilon)S_{N}}\in\mathsf{L}^{1}.$
|
Similarly, whenever $b_{2}\geq 2$ and using Jensen’s inequality, we have
|
Thus, plugging in these estimates and using Hölder’s inequality ($p=1+\epsilon$),
|
Plugging these into (4), Theorem 2.1 yields a general criterion for the convergence of the genealogy of AWF populations.
|
Plugging these into (4), Theorem 2.1 yields a general criterion for the convergence of the genealogy of AC populations.
|
B
|
In the diagnostic task, we assessed models trained on datasets with varying degrees of heterogeneity, as indicated by the proportion factor $\alpha\in\{0.05,0.1,0.3,0.5\}$ in Table 4. Our experiments aimed to validate the performance of each local center, FedAvg, FACL, and Centralized model across seven internal datasets and two external datasets (DiagSet-A and QHD).
|
The validation results for the Gleason scoring task are listed in Table 7. Compared to models trained on single-center data, the FACL model exhibited significant improvements in the Kappa score and AUC. The average Kappa across the six centers (Hebei-1, Hebei-2, PANDA-1-1, PANDA-1-2, PANDA-2-1, and PANDA-2-2) was 0.7379, whereas FACL achieved a Kappa of 0.8463. This highlights the effectiveness of federated learning for prostate cancer diagnosis across multiple categories. Notably, the proposed FACL model consistently outperformed the FedAvg model in terms of the Kappa score, regardless of the addition of noise ($N$). The Kappa scores for FACL surpassed those for FedAvg, and FACL-$N$ outperformed FedAvg-$N$. This underscores the efficacy of the proposed attention-consistent learning method.
|
The distribution of data splits for the diagnostic task is presented in Table 4. To assess the impact of data imbalance on the model’s performance, we partitioned the dataset from four centers (DiagSet-B-1, DiagSet-B-2, PANDA-1, PANDA-2), which contains a substantial number of samples, into positive proportions denoted as $\alpha$ (specifically, $\alpha\in\{0.05,0.1,0.3,0.5\}$). Notably, we focused on conducting this verification solely within the realm of binary classification, primarily because of significant variations in the quantity and category of the datasets. Therefore, we did not conduct a similar distribution experiment in the context of multiclass classification.
|
In this study, we investigated the diagnosis of prostate cancer using a two-level classification approach to differentiate between benign and malignant conditions. Furthermore, we examined the Gleason grading of prostate cancer by employing a six-level classification system based on the International Society of Urological Pathology (ISUP [15]) categories 0, 1, 2, 3, 4, and 5. A series of preprocessing steps was implemented on the datasets to demonstrate the efficacy of the FL model. These steps aim to enhance the heterogeneity of the datasets by splitting those with a substantial number of samples into multiple independent centers, thus expanding the number of client centers within the FL model. The distribution of the datasets utilized in the cancer diagnosis task is presented in Table 2. The DiagSet-B and PANDA datasets were split into separate centers, DiagSet-B-1, DiagSet-B-2, PANDA-1, and PANDA-2, with the proportion of positive data ranging from 21% to 90%. Similarly, the dataset distribution for Gleason grading is illustrated in Table 3: the PANDA datasets were split into four separate centers, namely PANDA-1-1, PANDA-1-2, PANDA-2-1, and PANDA-2-2, where the categories of ISUP 0-5 exhibit a notable imbalance. According to the definition of federated learning classification, our study uses horizontal federated learning [78].
|
The experimental results for the diagnosis task on the validation set are presented in Table 5, demonstrating metrics such as AUC, F1, ACC, and Recall. As $\alpha$ increases, the overall performance of the local center model improves due to the different proportions of categories in the diagnostic task. When $\alpha$ is 0.05, the average ACC of the models from seven centers (Hebei-1, Hebei-2, Nanchang, DiagSet-B-1, DiagSet-B-2, PANDA-1, PANDA-2) is only 0.8169, with an average AUC of 0.9393. However, when $\alpha$ is 0.5, the average ACC of these models reaches 0.8592, accompanied by an average AUC of 0.9499.
|
D
|
We describe how interpolating aligned data can provide better reference processes for use in classical DSBs, paving the way to hybrid aligned/non-aligned Schrödinger bridges.
|
The task of modeling conformational changes starting from a given protein structure is largely unexplored, mainly due to the lack of high-quality large datasets. Here we utilize the recently proposed D3PM dataset (Peng et al., 2022) that provides protein structures before (apo) and after (holo) binding, covering various types of protein motions. The dataset was generated by filtering examples from the Protein Data Bank (PDB) corresponding to the same protein but bound to different biomolecules, with additional quality control criteria. For the scope of this work, we only focus on protein pairs where the provided Root Mean Square Deviation (RMSD) of the C$\alpha$ carbon atoms between unbound and bound 3D structures is $>3.0$ Å, which amounts to 2370 examples in the D3PM dataset.
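The RMSD filter above is a simple quantity to compute once the two structures are superposed. The sketch below is not the paper's code; it illustrates the standard Cα RMSD definition on toy coordinates, assuming alignment has already been done.

```python
import numpy as np

def calpha_rmsd(apo, holo):
    """RMSD between two (N, 3) arrays of aligned C-alpha coordinates."""
    apo = np.asarray(apo, dtype=float)
    holo = np.asarray(holo, dtype=float)
    return float(np.sqrt(((apo - holo) ** 2).sum(axis=1).mean()))

# Toy example: shift every residue 5 Angstrom along x.
apo = np.zeros((4, 3))
holo = np.zeros((4, 3))
holo[:, 0] = 5.0
# calpha_rmsd(apo, holo) -> 5.0, so this pair would pass the >3.0 A cutoff
```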
|
In this paper, we propose a new framework to tackle the interpolation task with aligned data via diffusion Schrödinger bridges. Our central contribution is a novel algorithmic framework derived from the Schrödinger bridge theory and Doob’s $h$-transform. Via a combination of the two notions, we derive novel loss functions which, unlike all prior methods for solving diffusion Schrödinger bridges, do not rely on the iterative proportional fitting procedure and are hence numerically stable. We verify our proposed algorithm on various synthetic and real-world tasks and demonstrate noticeable improvement over the previous state-of-the-art, thereby substantiating the claim that data alignment is a highly relevant feature that warrants further research.
|
In this section, we presented a proof-of-concept application of SBAlign for modelling the conformational changes in proteins associated with the docking task. A combination of SBAlign for conformational change modeling with more recent methods for rigid-protein docking (Ketata et al., 2023) can provide a complete solution to the protein docking task, which we leave to future work.
|
We evaluate our proposed framework on both synthetic and real data. For experiments utilizing real data, we consider two tasks where such aligned data is naturally available. The first is the task of developmental processes in single-cell biology, and the second involves protein docking. While diffusion models have been applied to model the relative orientation of proteins during docking, they have not been extended to model protein flexibility. We showcase a proof-of-concept application of our method on modelling conformational changes between unbound and bound states of a protein. Our method demonstrates a considerable improvement over prior methods across various metrics, thereby substantiating the importance of taking the data alignment into account.
|
D
|
$y(t)=D\mathcal{F}(0)y_{t}=\int_{0}^{\infty}\beta(x_{m}+g(0)a)e^{-\mu a}y(t-a)\,\mathrm{d}a=:\int_{0}^{\infty}k(a)y_{t}(-a)\,\mathrm{d}a$
|
(as indeed one can understand by using only the interpretation: it describes the linear population model corresponding to the virgin environment $E=0$).
|
can be replaced by a more explicit one using the proof of Lemma 7 (i.e. using the definitions of $\tilde{B}$ and $\tilde{a}$):
|
Our motivation to study the specific model above is to understand whether the evolution (the interpretation of $x$ and $u$ is explained below) of a tree population can be explained by taking into account only competition for light through a hierarchical structure affecting individual growth, assuming that resources (such as water, space, etc.) are readily available.
|
Equation (12) provides the delay formulation of the model, which we are going to study here. In the delay formulation the state variable is the population birth rate history $B_{t}:=B(t+\cdot)$, instead of the population density $u(\cdot,t)$ with respect to height. Specifically, one can consider the state space (of the weighted birth rate histories)
|
A
|
One way to increase the efficiency of greedy search procedures is by applying the so-called principle of coherence (Gabriel, 1969), which is used as a strategy for pruning the search space. We show that for the family of pdRCON models the twin lattice allows a more straightforward implementation of the principle of coherence.
|
The greedy search procedure of the previous section has been implemented in the program language R, and here we describe its application to both synthetic and real-world data, including an empirical comparison with the penalized likelihood method of Ranciati and Roverato (2023).
|
We introduce a stepwise backward elimination procedure that exploits the twin lattice both for the computation of the meet operation and the implementation of the coherence principle.
|
compare it with the stepwise backward elimination procedure given in Roverato and Nguyen (2022) that does not exploit the twin lattice for the computation of the set of candidate models, and where the principle of coherence is naively implemented by only considering the submodel relationship.
|
We implement a stepwise backward elimination procedure with local moves on the twin lattice which satisfies the coherence principle, and we show that it is more efficient than an equivalent procedure on the model inclusion lattice. This procedure is implemented in the statistical programming language R and its behavior is investigated both on synthetic and real-world data.
|
D
|
To enhance performance, we perform multimodal feature integration using features extracted from the short-axis, four-chamber, and Cardiac Measurements (CM). We adopt two strategies for feature integration, namely the early and late fusion of features [6]. In early fusion, the features are fused at the input level without any transformation: we concatenate the features from the short-axis and four-chamber views and then apply MPCA [11] to the concatenated tensor, enabling the selection of multimodal features. In late fusion, the integration of features is performed in a common latent space, which allows the fusion of features with different dimensionalities. In this way, we can perform a late fusion of CM features with short-axis and four-chamber features; however, we cannot perform an early fusion of CM features with short-axis and four-chamber features.
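The two fusion strategies can be sketched as follows. All shapes and feature counts below are toy placeholders, and plain PCA stands in for the tensor-based MPCA used in the paper; the point is only the ordering of concatenation versus reduction, and why the 2-dimensional CM features can only enter at the late-fusion stage.

```python
import numpy as np

rng = np.random.default_rng(1)
short_axis = rng.standard_normal((50, 120))   # 50 subjects x SA features
four_chamber = rng.standard_normal((50, 80))  # 50 subjects x 4CH features
cm = rng.standard_normal((50, 2))             # left atrial volume, LV mass

def pca_project(X, k):
    """Project centered data onto its top-k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Early fusion: concatenate raw features first, then reduce jointly
# (MPCA in the paper; plain PCA here as a stand-in).
early = pca_project(np.concatenate([short_axis, four_chamber], axis=1), 10)

# Late fusion: reduce each modality to a shared latent size, then
# concatenate; this is what lets the low-dimensional CM features join in.
late = np.concatenate(
    [pca_project(short_axis, 10), pca_project(four_chamber, 10), cm], axis=1)
```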
|
3) Clinical utility: Decision curve analysis indicates the diagnostic value of our pipeline, which can be used in screening high-risk patients from a large population.
|
Cardiac MRI scans contain high-dimensional spatial and temporal features generated throughout the cardiac cycle. The small number of samples compared to the high-dimensional features poses a challenge for machine learning classifiers. To address this issue, Multilinear Principal Component Analysis (MPCA) [11] utilizes a tensor-based approach to reduce feature dimensions while preserving the information for each mode, i.e. spatial and temporal information in cardiac MRI. Hence, the MPCA method is well-suited for analyzing cardiac MRI scans. The application of the MPCA method to predict PAWP might further increase the diagnostic yield of cardiac MRI in heart failure patients and help to establish cardiac MRI as a non-invasive alternative to RHC. Existing MPCA-based pipelines for cardiac MRI [17, 18, 2] rely on manually labeled landmarks that are used for aligning heart regions in cardiac MRI. The manual labeling of landmarks is a cumbersome task for physicians and impractical for analyzing large cohorts. Moreover, even small deviations in the landmark placement may significantly impact the classification performance of automatic pipelines [16]. To tackle this challenge, we leverage automated landmarks with uncertainty quantification [15] in our pipeline. We also extract complementary information from multimodal data from short-axis, four-chamber, and Cardiac Measurements (CM). We use CM features (i.e., left atrial volume and left ventricular mass) identified in the baseline work by Garg et al. [5] for PAWP prediction.
|
In this paper, we use three primary metrics: Area Under Curve (AUC), accuracy, and Matthew’s Correlation Coefficient (MCC), to evaluate the performance of the proposed pipeline. Decision Curve Analysis (DCA) is also conducted to demonstrate the clinical utility of our methodology.
|
This paper proposed a tensor learning-based pipeline for PAWP classification. We demonstrated that: 1) tensor-based features have a diagnostic value for PAWP, 2) the integration of CM features improved the performance of unimodal and bi-modal methods, 3) the pipeline can be used to screen a large population, as shown using decision curve analysis. However, the current study is limited to single institutional data. In the future, we would like to explore the applicability of the method for multi-institutional data using domain adaptation techniques.
|
C
|
In parallel, the use of Transformers for multimodal fusion has gained significant attention in classification and generative tasks [70, 86, 61]. Multimodal tokens can be concatenated and fed to a regular Transformer [72, 18], a hierarchical Transformer [43], or a cross-attention Transformer [55, 46, 52]. As the number and dimensionality of modalities increase, the typical sequence length can become too large to be fed to vanilla Transformers, hence the need for low-complexity methods. Several models have proposed re-formulations of self-attention to reduce memory and computational requirements [4, 85, 31, 12, 83, 29, 15, 14], for instance, by approximating self-attention with a low-rank decomposition [47, 85], using latent bottleneck distillation [31, 53, 32], by optimizing GPU reads/writes [15, 14] or using sparse attention patterns [4, 56]. Recently, interpretable multimodal models or post-hoc interpretation methods [41, 75, 1] have also emerged as a critical area of research, especially in sensitive human-AI collaborative decision-making scenarios such as healthcare and human-computer interactions.
|
Early vs. Late fusion: Early fusion methods (MCAT [10], MOTCat [87] and SurvPath) outperform all late fusion methods. We attribute this observation to the creation of a joint feature space that can model fine-grained interactions between transcriptomics and histology tokens. Overall, these findings justify the need for (1) modeling dense interactions between pathway and patch tokens and (2) unifying fusion in a single Transformer attention.
|
1. Tokenizing transcriptomics modality: Modalities based on image and text can be unequivocally tokenized into object regions and word tokens [40, 73]; however, tokenizing transcriptomics in a semantically meaningful and interpretable way is challenging. As transcriptomics data is already naturally represented as a feature vector, many prior studies ignore tokenization and directly concatenate the entire feature with other modalities, which limits multimodal learning to late fusion operations [80, 39]. Alternatively, genes can be partitioned into coarse functional sets that represent different gene families (e.g., tumor-suppressor genes and oncogenes) that can be used as tokens [10]. Nevertheless, such sets provide a rudimentary and incomplete depiction of intracellular interactions, as one gene family can be involved in different cellular functions. Consequently, they may lack semantic correspondence with fine-grained morphologies. Instead, we propose tokenizing genes according to established biological pathways [67, 42, 22]. Pathways are gene sets with known interactions that relate to specific cellular functions, such as the TGF-$\beta$ signaling cascade, which contributes to the epithelial-mesenchymal transition in breast cancer [78]. Compared to coarse sets (e.g., $N_{\mathcal{P}}=6$ [10]), pathway-based gene grouping can yield hundreds to thousands of tokens that represent unique molecular processes ($N_{\mathcal{P}}=331$ in our work), which we hypothesize are more suitable representations for multimodal fusion with histology. In addition, as pathways represent unique cellular functions, they constitute appropriate basic reasoning units for interpretability (see Fig. 1).
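Pathway-based tokenization amounts to grouping the expression vector by known pathway membership, one token per pathway. The sketch below is illustrative only: the pathway names and gene lists are placeholders, not the curated pathway databases used in the paper.

```python
# Hypothetical pathway membership (placeholders, not real curated sets).
pathways = {
    "TGF_beta_signaling": ["TGFB1", "SMAD2", "SMAD4"],
    "cell_cycle": ["CDK1", "CCNB1"],
}

def tokenize_transcriptomics(expression, pathways):
    """expression: dict gene -> value. Returns one feature vector per pathway."""
    return {
        name: [expression.get(g, 0.0) for g in genes]
        for name, genes in pathways.items()
    }

expr = {"TGFB1": 2.1, "SMAD2": 0.4, "SMAD4": 1.3, "CDK1": 0.9, "CCNB1": 1.7}
tokens = tokenize_transcriptomics(expr, pathways)
# -> 2 pathway tokens; each would be embedded before fusion with patch tokens
```

Each pathway token would then be passed through its own embedding layer so that all tokens share a common dimensionality before entering the fusion Transformer.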
|
Multimodal integration is an important objective in cancer prognosis [64], as combining histology and omics data such as genomics or transcriptomics is the current clinical practice for many cancer types. The majority of these works employ late fusion mechanisms [71, 9], and mostly differ in the way modality fusion is operated. Fusion can be based on vector concatenation [51], modality-level alignment [7], bilinear pooling (i.e., Kronecker product) [9, 80], or factorized bilinear pooling [39, 57].
|
While histology provides phenotypic information about cell types and their organization into tissues, alternate modalities can provide complementary signals that may independently be linked to prognosis. For instance, bulk transcriptomics, which represents the average gene expression in a tissue, can reveal a richer global landscape of cell types and cell states [39, 80] and has been shown to be a strong predictor of patient survival [24, 54, 59]. By combining both modalities, we can integrate the global information provided by bulk transcriptomics with the spatial information from the WSI. While most existing methods adopt late fusion mechanisms [11, 39] (i.e., fusing modality-level representations), we design an early fusion method that can explicitly model fine-grained cross-modal relationships between local morphological patterns and transcriptomics. In comparison with widely employed vision-language models [58, 2, 73], multimodal fusion of transcriptomics and histology presents two key technical challenges:
|
C
|
There are various initial conditions that we consider in this work, based on the assumption that a stationary limiting distribution for $\mathbf{X}(t)$ exists (since $\theta,k>0$, this is always true for the form of $\bm{\Theta}$ expressed above, and more generally provided that all eigenvalues of $\bm{\Theta}$ have positive real part [26]). The first relevant initial condition is where $\mathbf{X}_{0}$ is entirely specified. We refer to this choice as the fixed initial condition. For the virus-cell lysis problem, we may be interested in setting $\mathbf{X}_{0}=[\mu,0,\ldots,0]^{\intercal}$; i.e., the concentration is zero in all compartments, and the input is initiated at its mean. A second, more biologically realistic initial condition is where all compartments are initiated with zero concentration, but where the input is initiated from its stationary distribution $I(0)\sim\mathcal{N}(\mu,\sigma^{2}/(2\theta))$. We refer to this as the partially-fixed initial condition. The final initial condition, of interest given that it greatly simplifies some of the analysis, is where all compartments are initiated from the joint stationary distribution for the system. We refer to this as the stationary initial condition and the system as a whole in this case as the stationary system.
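The partially-fixed condition above can be illustrated with a simulation of the scalar input process $\mathrm{d}I=\theta(\mu-I)\,\mathrm{d}t+\sigma\,\mathrm{d}W$, started from its stationary law $\mathcal{N}(\mu,\sigma^{2}/(2\theta))$. This Euler-Maruyama sketch is not from the source; all parameter values are illustrative.

```python
import math
import random

def simulate_ou(theta, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama path of the OU input, stationary initial draw."""
    I = rng.gauss(mu, sigma / math.sqrt(2.0 * theta))  # I(0) ~ N(mu, s^2/2th)
    path = [I]
    for _ in range(n_steps):
        I += theta * (mu - I) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(I)
    return path

rng = random.Random(0)
path = simulate_ou(theta=1.0, mu=2.0, sigma=0.5, dt=0.01, n_steps=20000, rng=rng)
mean = sum(path) / len(path)
# The long-run sample mean should be near mu = 2 and the sample variance
# near sigma^2 / (2*theta) = 0.125, the stationary variance quoted above.
```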
|
The system can still be viewed as a multivariate Ornstein-Uhlenbeck process, although $\mathbf{S}$ (Eqs. 2 and 3) is no longer a single-element matrix but has elements on both the main and lower diagonal.
|
We highlight that the non-stationary covariance matrix (Eq. 5b) does not depend on the initial condition $\mathbf{X}_{0}$ and that the mean $\mathbf{m}(t)$ is an affine transformation of the initial condition $\mathbf{X}_{0}$. Therefore, for $\mathbf{X}_{0}\sim\mathcal{N}(\mathbf{m}_{0},\mathbf{\Sigma}_{0})$, we have that
|
There are various initial conditions that we consider in this work, based on the assumption that a stationary limiting distribution for $\mathbf{X}(t)$ exists (since $\theta,k>0$, this is always true for the form of $\bm{\Theta}$ expressed above, and more generally provided that all eigenvalues of $\bm{\Theta}$ have positive real part [26]). The first relevant initial condition is where $\mathbf{X}_{0}$ is entirely specified. We refer to this choice as the fixed initial condition. For the virus-cell lysis problem, we may be interested in setting $\mathbf{X}_{0}=[\mu,0,\ldots,0]^{\intercal}$; i.e., the concentration is zero in all compartments, and the input is initiated at its mean. A second, more biologically realistic initial condition is where all compartments are initiated with zero concentration, but where the input is initiated from its stationary distribution $I(0)\sim\mathcal{N}(\mu,\sigma^{2}/(2\theta))$. We refer to this as the partially-fixed initial condition. The final initial condition, of interest given that it greatly simplifies some of the analysis, is where all compartments are initiated from the joint stationary distribution for the system. We refer to this as the stationary initial condition and the system as a whole in this case as the stationary system.
|
The multivariate Ornstein-Uhlenbeck process conditioned on the initial condition $\mathbf{X}_{0}$ has exact solution [26, 27]
|
D
|
The available MoleculeNet benchmark [9] uses SMILES for its molecular representation. After reviewing some of the molecule strings, we found that not all are canonical. Including non-canonical SMILES is problematic because SMILES grammar is already complex, so the molecules are converted to RDKit’s canonical form to reduce complexity. The next issue is caused by RNNs: one of the many advantages of RNNs is that they allow variable-length inputs, accounting for a variable length of history. This is only true theoretically; in practice, RNN memory has limits, which is the focus of many newer works [27]. Despite this limitation, it has recently been shown that RNNs can handle input lengths of around 45-50 before performance begins to degrade [28, 29]. Using this knowledge, we set a maximum SMILES length of 46 for the molecules. This limitation retains a slight majority of the molecules while allowing us to ensure the RNN performs well. After limiting the SMILES length, the SMILES are converted to SELFIES. The intention of converting SMILES to SELFIES is to reduce grammar complexity and simplify the learning process of the RNN. SELFIES converts each element and structural component, such as rings or branches, into its own label. These labels are then encoded into numerical values based on their dictionary index.
|
Fig. 3 offers a visualization of the methodology used to train the RNN. The molecules are first loaded in from a dataset from the MoleculeNet benchmark [9] and converted to SELFIES representation using the method described in Section III-A. The converted SELFIES are then processed through an embedding layer with a dimensional space matching the size of the label dictionary. The dictionary consists of all the unique SELFIES components within the dataset and the embedding dimension equals the dictionary size to maintain as much information as possible. The input, hidden, and output dimensions of the RNN are also equal to the size of the dictionary. Maintaining the dimensional space and not reducing it before output generation gives the RNN a chance of learning the molecular context. Fig. 1 and Fig. 2 are visualizations of the RNN architectures used to process the SELFIES. RNNs historically use the Tanh activation function, but we use the LeakyReLU as it reduces saturation possibilities and typically results in higher performance [30, 31]. In addition to this, we also include a dropout layer on the output of the RNN which helps prevent overfitting and reduce the error rate of RNNs [32]. After processing the SELFIES through the RNN, the final state should have all important prior information encoded into it. This vector then passes through an additional LeakyReLU and dropout layer before being fed to a fully connected layer. The fully connected layer reduces the vector from the dictionary-sized dimension down to the number of classes present in the molecular property. Subsequently, a soft-max operation finds the most likely class.
|
Before training on the selected MoleculeNet datasets referenced in Section II-A, we perform an additional reduction to the dataset by setting the lower bound of 31 molecules to the SMILES string allowing for the search space to remain sufficiently complex while reducing the overall run time. The lower bound reduces the datasets before stratified splitting the data using 80% for training and 20% for testing [33]. The stratified splitting intends to maintain the known sample rate of a given side effect to model real-world testing. However, during training, we want to remove the sampling bias to ensure our model accurately learns the causes of a side effect. The minority samples within the training set are duplicated to have an even sample count between the side effect present and the side effect not present to reduce the sampling bias. After replicating training samples, the SMILES conversion to SELFIES occurs. Typical natural language processing (NLP) methods use a word, sub-word, or character tokenization to convert strings into numerical values, but we opt for a slightly different method, which we explain by referring to equation 7. It shows the SELFIES representation of benzene where each molecule and structural element are between brackets. Using this representation, we decide to tokenize based on each set of brackets that exist within the SELFIES converted dataset. This results in a total of 47 unique values. After tokenizing the SELFIES, the embedding dimension, input dimension of the RNN, and the hidden dimension of the RNN are set to a size of 47 to match the dimensional space of the tokens. To give the RNN model the best opportunity to make accurate classifications, we use a single model to perform a single side effect classification prediction. For SIDER, instead of predicting all 27 potential side effect classifications, we opt to predict 20 side effect classifications due to extreme imbalances present in the side effect data. 
The vanilla RNN architecture results in a model with 11.5K parameters and the GRU architecture results in a model with 18.8K parameters. Both train in under 2 minutes on an Nvidia GeForce RTX 3090. To compare our performance with other works that use MoleculeNet, we evaluate using the suggested metric, the area under the receiver operating characteristic curve (ROC-AUC) [1, 34].
|
The available MoleculeNet benchmark [9] uses SMILES for its molecular representation. After reviewing some of the molecule strings, we found that not all are canonical. Including non-canonical SMILES is problematic, as SMILES grammar is already complex; the molecules are therefore converted to RDKit's canonical form to reduce complexity. The next issue is caused by RNNs: one of the many advantages of RNNs is that they allow variable-length inputs to account for a variable length of history. This is only true theoretically; in practice, RNN memory has limits, which is the focus of many newer works [27]. Despite this limitation, it has recently been shown that RNNs can handle input lengths of around 45-50 before performance begins to degrade [28, 29]. Using this knowledge, we set a maximum SMILES length of 46 for the molecules. The limitation keeps a slight majority of the molecules while allowing us to ensure the RNN is performing well. After limiting the SMILES molecular length, the SMILES are converted to SELFIES. The intention of converting SMILES to SELFIES is to reduce the grammar complexity and simplify the learning process of the RNN. SELFIES converts each element and structural component, such as rings or branches, into its own label. These labels are then encoded into numerical values based on their dictionary index.
|
Unfortunately, vanilla RNNs suffer from memory saturation issues, so they are not always reliable. Many methods have been proposed to overcome this issue, but one of the most popular is the Gated Recurrent Unit (GRU) [17]. The basic structure of a GRU is shown in Fig. 2. We can mathematically describe each of its components using Equation 3, Equation 4, Equation 5, and Equation 6. Equation 5 represents the candidate hidden state function, representing the potential updated state. Equation 6 performs the actual update to the hidden state based on the previous hidden state and the candidate hidden state. Both Equation 3 and Equation 4 allow the network to tune the importance of the contribution of the previous hidden state to the new hidden state. Because of the $r_t$ and $z_t$ parameters, the GRU can better control its memory state, offering improved practical performance over vanilla RNNs.
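A single GRU step in the spirit of Eqs. (3)-(6) can be sketched in NumPy as below; the gate ordering and the weight names ($W$, $U$ pairs) are our assumptions, since conventions vary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: reset gate r_t, update gate z_t, candidate state,
    and a convex combination giving the new hidden state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)              # update gate z_t
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate r_t
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1.0 - z) * h + z * h_tilde        # new hidden state

rng = np.random.default_rng(1)
d = 4                                          # toy hidden size
params = [rng.normal(0.0, 0.5, (d, d)) for _ in range(6)]
h = np.zeros(d)
for x in rng.normal(0.0, 1.0, (5, d)):         # five input steps
    h = gru_cell(x, h, params)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden state stays in $(-1,1)$, which is one way the gates keep the memory from saturating.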
|
Even though the CEP furnishes an intuitive explanation in the assemblage of ecological communities, counterexamples of the CEP have been found in nature.
|
One such example is observed in the ocean with phytoplankton, known as the paradox of the plankton [9].
|
By incorporating intraspecific suppression into GCRM, we investigate the role of intraspecific suppression on consumer diversity and comprehend the paradox of the plankton.
|
To address the paradox within the niche theory and explain the coexistence of diverse species, we introduce intraspecific suppression as a novel mechanism, which is known for its capacity to enhance the stability of large ecological systems [15, 16].
|
Even though resources are externally supplied such that resource species never go extinct, the calculated bound of coexisting consumer species cannot explain the paradox of the plankton in the generalized MCRM (GCRM).
|
\[
\overline{\mathrm{T}}(r)\leq\frac{1}{1-R_{0}(G,\tau)}\log\left(\frac{1}{r}\right).
\]
|
To compare different processes, we introduce the notation $y(t;G,\tau,V(0))$ for the prevalence of the static NIMFA SIS process at time $t$, on the graph $G$ with effective infection rate $\tau$ and starting infection probability vector $V(0)$.
|
We start with the matrix NIMFA equation (6) and upper bound the derivative of the infection probability vector $V(t)$ by disregarding the non-linear term. After rescaling time such that $\delta=1$, we find
|
In this section, we explore the interplay between the timescale of the epidemic process and the timescale of the topology updating process. We assume at first that the inter-update times $T_m$ are constant and equal to $\Delta t$. The timescales of the epidemic process are characterized by the average time $\frac{1}{\beta}$ between infection attempts on links and the average curing time $\frac{1}{\delta}$. Fig. 2 shows the prevalence $y(t)$ of three temporal NIMFA processes, which correspond to the three regimes in Fig. 1. Three processes with the same infection rate $\beta$, the same curing rate $\delta$ and random Erdős-Rényi contact graphs $G_m$ with the same distribution, but with different inter-update times $\Delta t$, are illustrated. (An Erdős-Rényi random graph $G_p(N)$ is characterized by the link between each pair of the $N$ nodes existing with probability $p$, independently of any other link; see, e.g., [38].) The solid red line in Fig. 2 shows the averaging behavior of the annealed regime. The dotted blue line illustrates the convergence to equilibrium on each network topology of the quenched regime. The dashed black line shows the irregular process of the intermediate regime.
|
which are the “rescaled” NIMFA governing equations. The same method can be applied to the system (5), where an equivalent system with $\delta=1$ is found. The intervals $[t_{m-1},t_{m})$ of the “rescaled” system are measured in units of the average curing time $\frac{1}{\delta}$. In the following, we will use (7) instead of (3), and for clarity we write the dimensionless time $t$ instead of $\theta$ when using (7). In the rescaled NIMFA governing equations (7), the effective infection rate $\tau$ equals the infection rate $\beta$ because $\delta=1$.
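The rescaled equations can be integrated numerically with a simple forward-Euler scheme. The sketch below assumes the standard NIMFA form $\frac{dv_i}{dt}=-v_i+\tau(1-v_i)\sum_j a_{ij}v_j$ on a fixed graph; the complete-graph example and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def nimfa_step(v, A, tau, dt):
    """Forward-Euler step of the rescaled NIMFA equations (delta = 1):
    dv_i/dt = -v_i + tau * (1 - v_i) * sum_j a_ij v_j."""
    return v + dt * (-v + tau * (1.0 - v) * (A @ v))

# Illustrative static example: complete graph on N nodes, tau above the
# NIMFA threshold 1/(N-1), so prevalence settles at 1 - 1/(tau*(N-1)).
N, tau, dt = 10, 0.3, 0.01
A = np.ones((N, N)) - np.eye(N)
v = np.full(N, 0.01)                # small initial infection probability
for _ in range(20000):              # integrate to t = 200 (dimensionless)
    v = nimfa_step(v, A, tau, dt)
prevalence = v.mean()
```

On a temporal network, the same stepper would simply switch the adjacency matrix $A$ to $A_m$ at each update time $t_m$.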
|
\[
(T_{kj,i}-T_{ki,j})V_{k}=(T_{11,2}-T_{12,1})V_{1}+(T_{21,2}-T_{22,1})V_{2}=0\,,
\]
|
In the full hypothesis (19) of Proposition 1, we use the standard construction of a primitive of an exact differential form in order to build the general related energy $E(V)$:
|
In the first part of our account, i.e., Section III, we show that the hypothesis of these authors is a special case of a strictly more general necessary and sufficient condition (19),
|
This variational principle is meant to express the fact that the cerebral mechanism, in searching for and/or constructing equilibria, proceeds at minimal cost in terms of deviations from the original synaptic conductivities.
|
We are forced to extend the time to the full interval $[0,+\infty)$ because, typically, we need an infinite time to reach an equilibrium, even though in any meaningful case we see that by a trivial estimate we arrive very near to an equilibrium in a
|
Cherry reduction involves replacing the cherry with a single vertex. Reticulated cherry reduction involves deleting the arc between the parents of the two leaves and then suppressing degree-2 vertices.
|
Many core features discussed in the context of networks, such as reticulations, paths, cherries, siblings, and so on, have been translated into the language of covers; a summary is given in Table 9. These translations of features have been necessary for characterising several important classes of phylogenetic network in the language of covers. This includes some of the most prominent classes: normal, tree-child, tree-sibling, orchard, and tree-based networks (relationships among the classes, determined by properties of their covers, are represented in Figure 6). However, there are many classes, each of which is important for its own reasons, and this list is not complete. Some classes that have been omitted in the present paper might be difficult to define with covers (for instance, level-$k$ networks or HGT networks), whereas others might just be a matter of following through with the first steps we have taken here (for example, reticulation-visible networks and non-binary orchard networks).
|
By a theorem of [5, 14], for orchard networks, the order in which these are performed is not important.
|
A network is orchard, by definition, if and only if it can be reduced to a trivial network by cherry or reticulated cherry reductions. According to a result of [5, 14], the order of such reductions is not important. The procedures in Algorithm 4 exactly reflect the effect on the cover of these operations on the network, as can be seen in Figure 4.
|
Orchard networks are non-degenerate phylogenetic networks defined by the property that they can be reduced to a trivial network (a single vertex) by a series of cherry or reticulated cherry reductions [5, 14, 19]. In the present paper, we will restrict our attention to binary orchard networks.
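The defining reduction procedure can be sketched directly. The code below is a rough illustration, not the cover-based Algorithm 4 of the paper: it assumes a binary network given as a set of directed arcs, greedily applies cherry and reticulated-cherry reductions (by the cited result, the order does not matter), and reports whether the network reduces to a trivial one:

```python
def _suppress(arcs, v):
    """Suppress v if it has in- and out-degree 1 (or drop a root with one child)."""
    ps = [u for (u, w) in arcs if w == v]
    cs = [w for (u, w) in arcs if u == v]
    if len(ps) == 1 and len(cs) == 1:
        arcs.discard((ps[0], v)); arcs.discard((v, cs[0]))
        arcs.add((ps[0], cs[0]))
    elif len(ps) == 0 and len(cs) == 1:
        arcs.discard((v, cs[0]))

def is_orchard(arcs):
    """True iff the binary network reduces to a trivial network by
    cherry and reticulated-cherry reductions."""
    arcs = set(arcs)
    while True:
        if len(arcs) <= 1:
            return True
        nodes = {u for a in arcs for u in a}
        parents = {v: [u for (u, w) in arcs if w == v] for v in nodes}
        children = {v: [w for (u, w) in arcs if u == v] for v in nodes}
        leaves = [v for v in nodes if not children[v]]
        done = False
        # cherry: two leaves sharing a parent -> delete one leaf, suppress parent
        for x in leaves:
            for y in leaves:
                if x != y and len(parents[x]) == 1 and parents[x] == parents[y]:
                    p = parents[y][0]
                    arcs.discard((p, y))
                    _suppress(arcs, p)
                    done = True
                    break
            if done:
                break
        if done:
            continue
        # reticulated cherry: leaf x below reticulation px, one parent q of px
        # is also the parent of leaf y -> delete arc (q, px), suppress
        for x in leaves:
            if len(parents[x]) == 1 and len(parents[parents[x][0]]) == 2:
                px = parents[x][0]
                for y in leaves:
                    if y != x and len(parents[y]) == 1 and parents[y][0] in parents[px]:
                        q = parents[y][0]
                        arcs.discard((q, px))
                        _suppress(arcs, px)
                        _suppress(arcs, q)
                        done = True
                        break
            if done:
                break
        if not done:
            return False

# A tree cherry, and a network with one reticulation h forming a reticulated cherry.
tree = {("r", "a"), ("r", "b")}
net = {("r", "p"), ("r", "q"), ("p", "x"), ("p", "h"),
      ("q", "h"), ("q", "y"), ("h", "w")}
```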
|
This is because the population dynamics now take place in high or infinite dimension (Hallatschek and Nelson, 2008; Barton et al., 2010; Durrett and Fan, 2016; Louvet and Véber, 2023; Etheridge et al., 2023). For example, the spatial version of (1), the stochastic Fisher-Kolmogorov-Petrovsky-Piscunov (FKPP) equation introduced by Shiga (1988), is a stochastic partial differential equation that arises as the scaling limit of various discrete models under weak selection (Müller and Tribe, 1995; Durrett and Fan, 2016; Fan, 2021). Under the stochastic FKPP, Hallatschek and Nelson (2008) and Durrett and Fan (2016) studied the backward-time lineage dynamics of a single sample individual, conditioned on knowing its type. It would be interesting to see if our results in this paper can be extended to spatial stochastic models with selection.
|
Briefly, we find that positive selection does not in general lead to (7) and (8), and that very strong positive selection (relative to the sample size) leads to neutral gene genealogies with a single ancient latent mutation for the favored allele. This is described in Section 3 for scenario (i) and for the case $\widetilde{\alpha}\in(1,\infty)$ in scenario (iii). On the other hand, when selection is not too strong relative to the sample size, extreme rarity of $A_1$ in the sample can effectively override strong positive selection and retrieve (7) and (8). This is described in Section 3 for scenario (ii) and for the case $\widetilde{\alpha}\in(-\infty,1)$ in scenario (iii). Figures 1, 2 and 3 illustrate our results in the three scenarios.
|
Some of our results for rare alleles have empirical relevance, specifically those for scenario (ii) including their robustness to time-varying population size demonstrated in Section 3.4, and those for scenario (iii) with $\widetilde{\alpha}<0$. In scenario (ii), as $n$ increases for fixed but arbitrary $\alpha$, the distributions of latent mutations and the ages of those latent mutations become identical to those for neutral alleles described in Wakeley et al. (2023). Our results also show that selection does have an effect in this case, but it is only to raise or lower the rare-allele sampling probability (10) by the constant factor $C$ for every value of $n_1$. This relative insensitivity to selection suggests confidence in using rare alleles for demographic inference and genome-wide association studies (O'Connor et al., 2015; Nait Saada et al., 2020; Zaidi and Mathieson, 2020). Slatkin and Rannala (1997b), who obtained the Ewens sampling formula result for rare deleterious alleles by assuming they evolve independently according to a linear birth-death process, cf. Slatkin and Rannala (1997a), suggested that deviations from this neutral prediction at two human-disease-associated loci were due to population growth. Reich and Lander (2001) made a similar argument for a number of other disease-associated loci starting from the mutation-selection balance model of Hartl and Campbell (1982) and Sawyer (1983), which also gives the Ewens sampling formula result for rare disease alleles.
|
Here we apply the model of coalescence in a random background described by Barton et al. (2004) to prove these results (7) and (8) for rare alleles in large samples and especially to extend the analysis of latent mutations to scenarios which include selection. We investigate both the number of latent mutations and their timing in the ancestry of the sample, and we allow that selection may be strong. We also show how the same scenarios can be treated using the conditional ancestral selection graph (Slade, 2000a), giving the same limiting results for all three scenarios.
|
We thank Alison Etheridge for raising the question about the applicability of our limiting results to Wright-Fisher reproduction (cf. Section 2.1.1). We also thank Shamil Sunyaev, Evan Koch and Joshua Schraiber for helpful discussions, and Daniel Rickert and Kejia Geng for assistance in producing the figures. Finally, we thank two anonymous reviewers for their insightful comments. This research was partially supported by National Science Foundation grants DMS-1855417 and DMS-2152103, and Office of Naval Research grant N00014-20-1-2411 to Wai-Tong (Louis) Fan.
|
Table 2: Configurations of generated datasets. The generated sets are separated into subsets A, B, C, D, E, F, G, H with respect to a varying parameter. Parameters n_ase, n, w_frac and p0 are fixed to 1000, 10000, $\frac{1}{2}$ and $0.99$ respectively.
|
$[1.0,\,b]$, $b=$ 1.1, 1.2, 1.3, 1.4, 1.5
|
\[
y\sim\mathpzc{NB}(x,p),\qquad x\sim\mathpzc{NB}(y,1-p)
\]
|
\[
y\sim\mathpzc{Binom}(n,p),\qquad x\sim\mathpzc{Binom}(n,1-p).
\]
|
\[
r(x,b,a)=bx+a,
\]
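A minimal sketch of sampling such paired counts with the fixed $n=10000$ from Table 2 is given below; the choice $p=\frac{1}{2}$ and the sample sizes are illustrative assumptions, not the generator used for the actual datasets:

```python
import numpy as np

rng = np.random.default_rng(42)

# n fixed as in Table 2; p = 1/2 is an illustrative noise parameter.
n, p = 10000, 0.5

def sample_binomial_pair(n, p, size):
    """Paired counts y ~ Binom(n, p) and x ~ Binom(n, 1 - p)."""
    y = rng.binomial(n, p, size)
    x = rng.binomial(n, 1.0 - p, size)
    return x, y

def r(x, b, a):
    """Linear relation r(x, b, a) = b*x + a from the equation above."""
    return b * x + a

x, y = sample_binomial_pair(n, p, size=1000)
```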
|
\[
\forall u\geq 0,\quad \zeta(u)=\sigma_{i}\bigl(u-(T_{1}+\dots+T_{i})\bigr)\,\mathbf{1}_{\{T_{1}+\dots+T_{i}\leq u<T_{1}+\dots+T_{i+1}\}}
\]
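The piecewise definition of $\zeta(u)$ can be evaluated by locating which inter-renewal interval contains $u$. The sketch below is a small illustration under assumed decay functions $\sigma_i$ (the exponential shapes are hypothetical choices, not the paper's model):

```python
import numpy as np

def zeta(u, T, sigmas):
    """Evaluate zeta(u) = sigma_i(u - (T_1 + ... + T_i)) on the interval
    [T_1+...+T_i, T_1+...+T_{i+1}); sigmas[i] must cover the interval
    that contains u."""
    starts = np.concatenate(([0.0], np.cumsum(T)))
    i = int(np.searchsorted(starts, u, side="right")) - 1
    return sigmas[i](u - starts[i])

# Hypothetical example: immunity decays exponentially within each period,
# restarting at a lower level after the first renewal.
sigmas = [lambda s: np.exp(-s), lambda s: 0.5 * np.exp(-s)]
val = zeta(3.0, [2.0, 3.0], sigmas)   # second period, elapsed time 1
```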
|
often denoted by $W$ (for waning) (Lavine et al., 2011; Carlsson et al., 2020). In our work, we will model the decay of immunity by
|
also been proposed, for instance in Hethcote et al. (1981); Cooke and Van Den Driessche (1996); Taylor and Carr (2009); Bhattacharya and Adler (2012). In such models,
|
complexity (Anderson and May, 1982; Farrington, 2003; Magpantay, 2017; Delmas et al., 2022). However, the bulk of this work
|
Acknowledgements
|
In the present study, within the established framework of the GLV model featuring a fully connected random interaction network, we explicitly consider time-dependent species interactions and a Monod functional response, commonly used for modelling the growth of microorganisms [33].
|
Moreover, theoretical models such as the GLV stand as the scaffolding upon which empirical research is built, offering a controlled setting where foundational ecological mechanisms can be disentangled. As such, they serve as a crucible for testing the robustness and generalizability of ecological theories.
|
Specifically, we adopt the hypothesis that these interactions can be modeled as stochastic colored noises, which we call annealed GLV (AGLV).
|
To compare the GLV with quenched and annealed interactions, we also investigate the phase diagram for the case $\tau=0$ and $J(x)=x$. Since $\delta>0$ (see eq. (5)) and $\mathrm{E}(x)>0$, in order for the stationary solution to exist, we have the conditions $\sigma<\sqrt{2(1-\mu)}$ and $\mu\leq 1$, leading to a lower bound for the unbounded growth phase of the AGLV as shown in Fig. 4. However, by solving numerically the self-consistent eq. (2) (see Supplementary Methods) and also performing numerical simulations of the entire GLV system, we find that below this bound, even though a stationary solution exists, it may not be reached. In particular, in the red-purple region of Fig. 4, independently of the initial condition for $x(t=0)$, there is a singularity at finite times, leading to the explosion of the species population. In the light-blue region instead, if we start close to the predicted stationary solution $P^{*}(x)$, then we always find that the stationary solution is reached and it coincides with the one predicted by the DMFT eq. (5). However, there is a set of initial conditions (for sufficiently large $x(t=0)$) for which $x(t)$ may diverge at finite $t$. Such divergent trajectories are also confirmed when we simulate the full eq. (1) for a large enough number of species (see Supplementary Methods).
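A direct simulation in this $\tau=0$, $J(x)=x$ setting can be sketched with an Euler-Maruyama scheme. The code below is a rough illustration, not the paper's exact equations: interaction coefficients are taken with mean $\mu/N$ and white-noise fluctuations of strength $\sigma/\sqrt{N}$, and all discretization details and parameter values (chosen inside the bounded phase $\sigma<\sqrt{2(1-\mu)}$) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_aglv(N=50, mu=-0.5, sigma=0.5, dt=1e-3, steps=5000):
    """Euler-Maruyama sketch of GLV with annealed interactions (tau = 0,
    J(x) = x): dx_i = x_i (1 - x_i + sum_j a_ij(t) x_j) dt, where a_ij(t)
    has mean mu/N and white-noise fluctuations of size sigma/sqrt(N)."""
    x = np.full(N, 0.1)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), (N, N))   # annealed fluctuations
        drift = (1.0 - x) * dt + (mu / N) * x.sum() * dt
        noise = (sigma / np.sqrt(N)) * (dW @ x)
        x = x * (1.0 + drift + noise)
        x = np.maximum(x, 1e-12)    # keep abundances non-negative
    return x

x = simulate_aglv()    # parameters inside the bounded phase
```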
|
In this study, we have undertaken an investigation into the GLV equations with annealed disorder, incorporating finite correlation time and simple functional responses. We have determined the corresponding dynamical mean-field equations for a large number of species, which do not depend on the specific form of $J(x)$. The inclusion of temporal stochastic fluctuations in the strengths of species interactions has resulted in a remarkably diverse range of phenomena and ecologically significant outcomes.
|
In the previous section, we considered a multistep model of RNA production and degradation, and showed that it can be mapped to an infinite-server queue $A/S/\infty$, where transcription is the arrival process $A$, RNA degradation is the service process $S$, and the number of observed RNA is the queue length (the number of busy servers). We showed that the arrival process is described by the Markovian arrival process ($MAP$), which includes the Poisson process (denoted by $M$) and the Markov-modulated Poisson process (denoted by $MMPP$) as special cases. We derived the renewal condition under which the MAP becomes a renewal process (denoted by $G$). Finally, we showed that RNA degradation is fully specified by the RNA degradation time distribution, which is assumed to be the same for all RNA. We showed that the distribution of the RNA degradation time is a phase-type distribution ($PH$), of which an exponential distribution ($M$) is a special case, and that a deterministic (degenerate) distribution ($D$) is a good approximation for service times that include numerous rate-limiting steps. In Fig. 3, we present a flow diagram which can be used to identify the arrival and service processes, once the matrices $D_0$, $D_1$ and $D_{\text{deg}}$ describing the model in Eqs. (1a) and (1b) are identified.
|
Once we move away from renewal arrivals, there are many results potentially useful for gene expression modelling that we did not cover in detail. We first mention the $BMAP/G/\infty$ queue, where customers arrive in batches according to a batch Markovian arrival process (BMAP), and the service times are generally distributed. This queueing system describes our stochastic model of gene expression in Fig. 1 in the most general setting. Results for this queueing system are limited and quite complicated; however, numerically feasible formulas have been derived for service times that are phase-type distributed [118]. Another type of non-renewal process which we mentioned only briefly are semi-Markov processes (SMP). Semi-Markov processes change their interarrival time distribution according to a finite-state Markov process. In that sense, they can be considered as Markov-modulated renewal processes. A generalization of the $G/M/\infty$ queue to semi-Markov arrivals is the $SMP/M/\infty$ queue, for which the stationary queue length distribution and the Laplace transform of the non-stationary queue length distribution have been computed in Ref. [119]. More general is the $SMP/G/\infty$ queue, in which the service time distribution is arbitrary. This queueing system was studied in Ref. [120], where recurrence relations for (binomial) moments of both non-stationary and stationary queue length distributions have been derived. We showed in Eq. (8) that the MAP is a special case of the SMP. An advantage of the latter approach is that interarrival time distributions can be described by any suitable, user-defined function rather than by a phase-type distribution as for a MAP. Practically, this means that the SMP description has fewer parameters than the MAP. For example, a phase-type distribution of the hypoexponential type, which is the distribution of a sum of $N$ exponential random variables each with its own rate, could be approximated by a two-parameter continuous distribution such as the gamma distribution. Hence, the SMP may be useful as a reduced version of complex models of gene expression.
|
Table 1: A summary of known results for selected infinite-server queues that are relevant for stochastic gene expression modelling. The table refers to the non-stationary and stationary RNA number distributions and their corresponding moments.
|
It is interesting to note that the $G^{X}/G/\infty$ queue and Ref. [71] have been the sole point of reference for most of the literature connecting stochastic gene expression to queueing theory [57, 59, 23]. That is in our opinion unfortunate, because the results for the $G^{X}/G/\infty$ queue (in its general setting) are limited to the moments of the queue length distribution, whereas the queue length distribution itself remains elusive. This explains why queueing theory has so far played a minor role in analysing stochastic models of gene expression. On the other hand, if one sacrifices the generality of the $G^{X}/G/\infty$ queue, and instead considers its special cases—$G/M/\infty$, $M^{X}/G/\infty$ and $G/D/\infty$ queues, all of which are undoubtedly relevant for gene expression modelling—then for those queues it is possible to compute both non-stationary and stationary queue length distributions without using the chemical master equation. To the best of our knowledge, this fact has been largely overlooked in the biological modelling community.
|
In this section, we review known results for six infinite-server queues made by combining the arrival and service processes described above, which are of particular importance for stochastic gene expression modelling. We focus on queues whose arrivals are described by renewal ($G$) and Markov-modulated processes ($MMPP$), as these types of arrivals are present in most of the stochastic gene expression models in the literature. The main results are summarized in Table 1. For each queueing system, we report whether the non-stationary and stationary queue length distributions and their corresponding moments are known, along with a reference where these results can be found. Some results are in closed form, whereas others require inverting the Laplace transform (denoted by LT), computing the moments by recursive relations (denoted by RR) or approximating the probability distribution by truncated series (TS). In the subsection below, we discuss these results in detail for the $G^{X}/G/\infty$, $G/M/\infty$, $M^{X}/G/\infty$, $G/D/\infty$ and $MMPP/M/\infty$ queues. We do not show results for the $MMPP/G/\infty$ queue, as they are quite complicated, and only the mean and the variance have been reported. Other infinite-server queues not mentioned in Table 1 are discussed later.
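The classical fact underlying several Table 1 entries, that the stationary queue length of an $M/G/\infty$ queue is Poisson with mean $\lambda\,\mathrm{E}[S]$, can be checked with a quick Monte Carlo sketch. Below we take deterministic service (an $M/D/\infty$ queue, a special case of the $G/D/\infty$ row); all numerical parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def mdinf_queue_length(lam, d, t_obs):
    """Busy servers at time t_obs in an M/D/infinity queue: Poisson(lam)
    arrivals on [0, t_obs], deterministic service time d. A customer
    arriving at time a is still in service iff a > t_obs - d."""
    n_arrivals = rng.poisson(lam * t_obs)
    arrival_times = rng.uniform(0.0, t_obs, n_arrivals)
    return int(np.sum(arrival_times > t_obs - d))

# Stationary queue length of M/G/inf is Poisson(lam * E[S]); here
# E[S] = d, so the mean number of busy servers should approach lam * d = 8.
samples = [mdinf_queue_length(lam=4.0, d=2.0, t_obs=50.0) for _ in range(2000)]
mean_busy = float(np.mean(samples))
```

In the gene expression reading, "busy servers" are RNA molecules still awaiting degradation, so this directly mimics the stationary RNA copy number for constitutive transcription with a deterministic lifetime.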
|
This external field is self-consistent in the sense that, at any time t≥0𝑡0t\geq 0italic_t ≥ 0, it is given by the very distribution of the state of the focal process.
|
The moment-mediated interactions we study allow for direct solution of self-consistent fields via a nonlinear moment equation, which is amenable to standard numerical ODE techniques.
|
We calculate self-consistent fields by solving limiting nonlinear forward equations for the focal process.
|
First, the self-consistent fields $\mathbf{r}(t)$ are calculated as in Example 4.2 by solving a $D$-dimensional initial value problem (we reverse time such that the process starts at the tree root time $\tau>0$ and ends at $t=0$).
|
This sequential information is crucial for understanding the folding patterns and functional motifs within the protein.
|
These edges play a pivotal role in encoding the protein’s tertiary structure and folding patterns, enabling us to capture the intricate spatial arrangements of amino acids within the protein’s core.
|
To model the diverse interactions and relationships between amino acids, we introduce different types of edges connecting the nodes.
|
Sequential edges are employed to connect adjacent nodes in the protein sequence, effectively representing the sequential order of amino acids and capturing the linear arrangement of the protein’s primary structure.
|
Additionally, we utilize spatial edges to establish connections between nodes that are in close spatial proximity within the 3D structure of the protein.
|
\[
r^{2}=N_{\mathrm{E}}\sigma_{\mathrm{E}}^{2}+N_{\mathrm{I}}\sigma_{\mathrm{I}}^{2}\,.
\]
|
the mean input $\mu_i$ and input variance $\sigma_i^2$ of
|
with total input current $I_i(t)$ that consists of recurrent input
|
$\sigma^2$ of the total input to each neuron while modifying the
|
the subthreshold dynamics of the membrane potential $V_i$ of neuron
|
Typically, asymptomatic individuals are less contagious but unaware that they have contracted the disease, whereas symptomatic individuals are more contagious but can take precautions (use of a mask, social distancing, quarantine, use of a condom, etc.) to limit the spread of the disease.
|
We now look at the parameter regions where one of the two infection rates is subcritical and the other one supercritical, in which case the behavior of the process is less obvious.
|
In particular, the characteristics of the disease and the social behavior of the population induce a variability in the rate at which the disease spreads from asymptomatic versus symptomatic individuals.
|
In particular, the contact process with aging truncated at age 2 is equivalent to our epidemic model with $\beta_1<\beta_2$ in which the asymptomatic individuals spread the disease at a slower rate than the symptomatic individuals.
|
The conditional pose prediction task takes the encoder and decoder from the multi-class image reconstruction task and applies them in reversed order.
|
Compared to the variational image reconstruction task, the inputs and ground-truth labels are no longer EM images, but poses.
|
In addition, we generated visualizations of the volumes of the 1st, 5th, and 10th classes from cryoDRGN2, HetACUMN, and the ground truth (see Figure 6). Compared with the ground truth, there is no obvious difference for cryoDRGN2 and cryoFIRE on the first two volumes. For the 10th class, however, there is some visible inconsistency between the predicted volume and the ground truth. This observation is consistent with the latent-$z$ distribution mentioned above.
|
The second task is for conditional pose prediction (CPP), which exploits the same encoder-decoder as the first task but in reversed order to explore larger pose spaces. Instead of image reconstruction, it reconstructs the corresponding projection poses from randomly sampled poses with the reversed pipeline, and minimizes the difference between the pose pairs, i.e., conditional pose prediction task.
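The shared encoder-decoder applied in opposite orders can be sketched minimally. The `encoder`/`decoder` stand-ins below are hypothetical toy maps (an image summarized by its mean, a pose broadcast back to an image), not HetACUMN's actual networks:

```python
# Minimal sketch of the two HetACUMN tasks with stand-in maps.
# `encoder` maps an "image" to a pose estimate; `decoder` maps a
# pose back to a reconstructed "image".  Shapes are illustrative.

def encoder(image):
    # stand-in: summarize the image by its mean value -> 1-d "pose"
    return sum(image) / len(image)

def decoder(pose, size=4):
    # stand-in: broadcast the pose back to an image-sized vector
    return [pose] * size

def reconstruction_task(image):
    """Task (a): image -> pose -> reconstructed image."""
    return decoder(encoder(image), size=len(image))

def cpp_task(pose):
    """Task (b): reversed order -- sampled pose -> synthetic image ->
    predicted pose; the discrepancy between the pose pair is the
    quantity the CPP task minimizes."""
    predicted = encoder(decoder(pose))
    return abs(predicted - pose)
```

For these toy maps the CPP discrepancy is already (numerically) zero; in the real model it is a training signal that forces the encoder to invert the decoder over a large sampled pose space.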
|
Figure 1: Architectures of the two tasks in HetACUMN. (a) the variational image reconstruction task; (b) the conditional pose prediction (CPP) task.
|
A
|
The CPN utilized the ResNeXt backbone network [49] to extract multiscale feature maps, a regression head to generate candidate contour representations for each pixel, and a classification head to determine whether an object was present or not at these locations. A proposal sampling stage extracted a sparse list of contour representations, which were transformed into the pixel domain using differentiable Fourier transformation to encode contour information in the frequency domain [6]. The precision of the contours was further improved by using a displacement field generated by an additional regression head. In addition to the original CPN, this work introduced dedicated supervision for boundaries and proposed an extra branch to estimate localization uncertainty for boundaries. The multitask training objective was defined by a combination of the average absolute difference loss for contour regression, the generalized IoU loss for boundary localization [7], the absolute L1 distance for local refinement [5], the distance loss for frequency regularization [5], the binary cross entropy loss for classification, and the negative power log-likelihood loss for uncertainty estimation [8].
|
The uncertainty-aware Listen2Student mechanism [28] was applied to incorporate unlabeled examples during training, where a teacher model generated bounding boxes as pseudo-labels to supervise the student model. The model inputs were three-channel images. For post-processing, the Vanilla NMS relying solely on the classification score might not reliably indicate the proposal’s quality. To address this issue, the approach proposed in [8] was employed to incorporate uncertainty estimations into the NMS selection process. The object contours were transformed into segmentation masks through rasterization and region filling. A region-growing technique [9] was further adopted for overlapping regions.
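The idea of folding uncertainty estimates into NMS selection can be sketched as follows. The score discount `score / (1 + sigma)` is an illustrative stand-in for the scheme of [8], and `uncertainty_nms` is a hypothetical name:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def uncertainty_nms(proposals, iou_thr=0.5):
    """Sketch of NMS that ranks proposals by classification score
    discounted by localization uncertainty, so a confident but
    poorly localized box no longer wins on score alone."""
    ranked = sorted(proposals,
                    key=lambda p: p["score"] / (1.0 + p["sigma"]),
                    reverse=True)
    keep = []
    for p in ranked:
        if all(iou(p["box"], q["box"]) < iou_thr for q in keep):
            keep.append(p)
    return keep
```

In the example below, the box with the higher raw score but high uncertainty is suppressed in favor of its certain, overlapping rival.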
|
Lou et al. [30] (T2-sribdmed) first divided the images into four distinct categories based on low-level image features (e.g., intensities) in an unsupervised way. Then, class-wise cell segmentation models were trained for each category. The model employed U-Net-like architecture where ConvNeXT [29] was used as the building blocks. To address the diverse cell morphologies, two distinct decoder heads were employed. One decoder predicted the cell distance map and semantic map, effectively segmenting round-shaped cells, while the other decoder predicted the cell gradient map to handle cells with irregular shapes. The training process involved pre-training the model on the entire dataset, followed by fine-tuning on each of the four categories, resulting in the creation of four models. During inference, the image was initially classified into one of the four categories, and subsequently, the corresponding model was used to perform the segmentation process (Methods).
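The classify-then-segment inference of T2-sribdmed reduces to a simple dispatch; `classify` and the per-category `models` below are hypothetical stand-ins for the trained category classifier and the four fine-tuned segmentation models:

```python
def classwise_segment(image, classify, models):
    """Route an image to one of several category-specific segmentation
    models, mirroring the classify-then-segment inference scheme."""
    category = classify(image)
    return models[category](image)
```

Any callable with the right interface can be plugged in; for example, a parity-based toy classifier routing between two dummy "models":

```python
models = {0: lambda im: "round", 1: lambda im: "irregular"}
classify = lambda im: 0 if sum(im) % 2 == 0 else 1
```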
|
The model inputs were three-channel images. The overall loss function was the combination of binary cross-entropy loss and mean-square error loss. The inference process relied on the sliding window strategy, a highly efficient approach for processing whole-slide images. During the merging of predictions from these small window patches, an importance map was generated and applied to the predictions, thereby preventing the recognition of the same cells at the patch boundary as multiple cells.
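The sliding-window merging with an importance map can be sketched in 1-D. The Gaussian weighting and the function names below are assumptions for illustration, not the team's exact implementation:

```python
import math

def importance_map(size, sigma_frac=0.25):
    """Gaussian weights peaking at the window center, so predictions
    near patch borders contribute less when windows are merged."""
    c = (size - 1) / 2.0
    s = sigma_frac * size
    return [math.exp(-((i - c) ** 2) / (2 * s * s)) for i in range(size)]

def merge_windows(length, window, stride, predict):
    """1-D sketch of sliding-window inference with weighted averaging.
    `predict` maps a (start, window) patch to a list of scores."""
    acc = [0.0] * length
    wsum = [0.0] * length
    w = importance_map(window)
    start = 0
    while start + window <= length:
        pred = predict(start, window)
        for k in range(window):
            acc[start + k] += w[k] * pred[k]
            wsum[start + k] += w[k]
        start += stride
    return [a / s if s > 0 else 0.0 for a, s in zip(acc, wsum)]
```

Down-weighting patch borders is what prevents a cell straddling a window boundary from being recognized twice: the overlapping window whose center covers the cell dominates the merged score.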
|
Additionally, all the top three teams explored the potential of leveraging the unlabeled images to improve the segmentation performance. Specifically, T1-osilab [25] employed consistency regularization [23] to match the algorithm’s predictions on the clean and degraded unlabeled images and introduced an additional head module to reconstruct the unlabeled images [7]. Both T1-osilab [25] and T2-sribdmed [30] investigated pseudo-label learning, generating pseudo labels for unlabeled images using trained models, followed by training the network with both pseudo labels and ground-truth annotations. T3-cells [44] implemented the uncertainty-aware Listen2Student mechanism [28] to train a student network with low-uncertainty pseudo labels. However, despite these joint efforts, none of the employed methods demonstrated a notable enhancement in segmentation performance. Thus, it remains an open question how to effectively use unlabeled data to boost cell segmentation performance.
|
A
|
Our method is both a generalization of existing methods that consider higher-order phase-isostable interactions and a general framework from which to study higher-order effects. For example, a higher-order reduced model is derived using the Haken-Kelso-Bunz (HKB) equation in [36]. The higher-order terms are the lowest-order Fourier terms of our $\mathcal{H}$ functions, thus the same questions of existence can be answered with our method and further explored with additional Fourier terms and multi-body interactions. Larger networks of the HKB equation that consider interactions well beyond dyadic [74] fit comfortably within the limitations of our method (see Section 6.1 below for details). Similarly, there is no restriction to applying our method to questions of coordinated movement, e.g., [25], or studies of coupled population dynamics [39].
|
Second, we use first order averaging, which is technically valid for small $\varepsilon$ comparable to those used in weak coupling theory. This limitation is especially apparent in the last example, where the thalamic model is near a SNIC bifurcation and the reciprocal of the period ($1/44\,\mathrm{ms}\approx 0.023$) places an approximate upper bound on the coupling strength $\varepsilon$, as $\varepsilon$ must be much smaller than $1/T$ [32]. This example may benefit from higher-order averaging methods [31, 32]. In addition, we have observed phase drift in the full model (data not shown) in a manner that may not be possible to capture in the current formulation. For example, with $N=3$ homogeneous oscillators and some values of $\varepsilon$, two oscillators synchronize and the third exhibits a phase drift, effectively resulting in a 2 oscillator system with a drift in the remaining phase difference. In our formulation, a single phase difference equation can not exhibit drift without heterogeneity. This discrepancy may be due to ignoring transients in the isostable coordinates – if we were to include explicit isostable dynamics such as in [42], this behavior may be captured.
|
Our method is both a generalization of existing methods that consider higher-order phase-isostable interactions and a general framework from which to study higher-order effects. For example, a higher-order reduced model is derived using the Haken-Kelso-Bunz (HKB) equation in [36]. The higher-order terms are the lowest-order Fourier terms of our $\mathcal{H}$ functions, thus the same questions of existence can be answered with our method and further explored with additional Fourier terms and multi-body interactions. Larger networks of the HKB equation that consider interactions well beyond dyadic [74] fit comfortably within the limitations of our method (see Section 6.1 below for details). Similarly, there is no restriction to applying our method to questions of coordinated movement, e.g., [25], or studies of coupled population dynamics [39].
|
When a finite number of oscillators is considered, other features may be exploited, each with their own limitations. When the network exhibits symmetries, it is possible to enumerate all phase-locked states with weak or strong coupling [20], but this method is not suited to work in the case of asymmetries [23]. In networks of neurons, the pulse-like shape of action potentials allows for the use of pulse coupling [11, 5, 4, 52, 40]. This approach yields analytically tractable results for weak or strong and possibly asymmetric coupling, but the number of oscillators is often limited to pairs. The study of network behavior can be made tractable by using piecewise smooth models, but coupling functions require particular assumptions such as linear coupling [9, 8], weak coupling [7, 49], and Laplacian coupling [43]. In addition, the analysis of phase-locked states is often restricted to understanding the stability of a synchronous network state [8, 10] (although some do consider the stability of splay states [7]).
|
Our method may aid in addressing questions of synchrony and phase-locking in general finite populations of coupled oscillators with heterogeneity where order parameters are typically used. For example, the heterogeneous systems and coupling functions considered in [1] can not exhibit synchrony and a “bounded synchronization” measurement [22] is necessary. Our method could provide a far more detailed understanding of the bounded synchronization state alongside other possible phase-locked states. Moreover, similar questions could be asked in much more realistic and complex neurobiological models.
|
D
|
$-\dfrac{\sigma^{\alpha}\,\Gamma(\alpha)\,\cos\left(\frac{\alpha}{2}\arccos\frac{I_{0}}{\sqrt{I_{0}^{2}+\Delta^{2}}}\right)}{2^{\alpha}\left(I_{0}^{2}+\Delta^{2}\right)^{\alpha/2}}+\mathcal{O}\left([\sigma^{\alpha}]^{2}\right)\,.$
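This leading-order noise correction can be evaluated numerically. The sketch below assumes the term reads $-\sigma^{\alpha}\Gamma(\alpha)\cos\big(\tfrac{\alpha}{2}\arccos\tfrac{I_0}{\sqrt{I_0^2+\Delta^2}}\big)/\big(2^{\alpha}(I_0^2+\Delta^2)^{\alpha/2}\big)$, ignoring the $\mathcal{O}([\sigma^{\alpha}]^2)$ remainder; function and parameter names are illustrative:

```python
import math

def rate_correction(sigma, alpha, I0, Delta):
    """Evaluate the leading-order noise correction term:
    -sigma^a * Gamma(a) * cos((a/2) * arccos(I0 / sqrt(I0^2 + D^2)))
      / (2^a * (I0^2 + D^2)^(a/2)),  with a = alpha, D = Delta."""
    r2 = I0 * I0 + Delta * Delta
    phase = 0.5 * alpha * math.acos(I0 / math.sqrt(r2))
    return -(sigma ** alpha) * math.gamma(alpha) * math.cos(phase) \
           / (2.0 ** alpha * r2 ** (alpha / 2.0))
```

In the homogeneous limit $\Delta=0$ the arccosine vanishes and the term reduces to $-\sigma^{\alpha}\Gamma(\alpha)/(2^{\alpha}I_0^{\alpha})$, a useful sanity check on the implementation.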
|
Equations (39) and (41) constitute the result (36) of the linear approximation for time-independent regimes in terms of physically meaningful macroscopic observables.
|
In this section we employ the results for firing rate (39) and mean voltage (41) to construct a self-consistent mathematical description of the macroscopic states of the population of QIFs with global synaptic coupling (9,10). Here $I_0=\eta_0+Jr$. In the studies with circular and pseudo-cumulants for a Gaussian noise, Goldobin-Dolmatova-2020; Goldobin-2021; Goldobin-Volo-Torcini-2021 the most challenging cases were that of a small or vanishing heterogeneity $\Delta$. Therefore, it will be instructive to demonstrate the application of our theoretical findings to the case of $\Delta=0$, where in the absence of noise the OA (and MPR) manifold is marginally stable. The results for a less technically challenging case of heterogeneous populations will be provided in the next section.
|
For the case of a heterogeneous population with a Cauchy distribution of $\eta$, the theoretical analytical approximation (36) [equivalently, Eqs. (39) and (41)] is examined by comparison to the ‘exact’ numerical results in Fig. 4. The analytical theory exhibits a decent accuracy even for the noise-driven regimes (the right bifurcation curve) and as large noise strength as $\sigma=0.5$. The numerical results in Fig. 4 are calculated with power-series expansions of time-independent $F(k)$ (Appendix E) with controlled accuracy $10^{-15}$. A fine detail of the mean-field driven regime can be noticed for both homo- and heterogeneous populations in Figs. 3 and 4: the results indicate that both noises with $\alpha<1$ and $\alpha>1$ slightly extend the domain of the existence of this regime towards lower excitability $\eta_0$. Both the analytical and exact left curves with $\alpha\neq 1$ are shifted leftwards as compared to the case of $\alpha=1$.
|
In Sec. III, for the case of noninteger $\alpha$, we construct a first-order perturbation theory for the effect of noise on the characteristic function and derive macroscopic observables: population-mean voltage and firing rate. In Sec. IV, the theoretical results for macroscopic states of homogeneous populations of QIFs are reported.
|
A
|
The code supporting the conclusions of this study is available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. The repository contains the Malmo environment code, training scripts for both the predictive coding and autoencoding neural networks, as well as code for the analysis of predictive coding and autoencoding results. Should there be any questions or need for clarifications about the codebase, we encourage readers to raise an issue on the repository or reach out to the corresponding author.
|
In the previous section, we demonstrated that the predictive coding neural network captures spatial relationships within an environment, containing more internal spatial information than can be captured by an autoencoder network that encodes image similarity. Here, we analyze the structure of the spatial code learned by the predictive coding network. We demonstrate that each unit in the neural network’s latent space activates at distinct, localized regions of physical space.
|
Moreover, we study the predictive coding neural network’s representation in latent space. Each unit in the network’s latent space activates at distinct, localized regions—called place fields—with respect to physical space. At each physical location, there exists a unique combination of overlapping place fields. At two locations, the differences in the combinations of overlapping place fields provide the distance between the two physical locations. The existence of place fields in both the neural network and the hippocampus [O’Keefe, 1976] suggests that predictive coding is a universal mechanism for mapping. In addition, vector navigation emerges naturally from predictive coding by computing distances from overlapping place field units. Predictive coding may provide a model for understanding how place cells emerge, change, and function.
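Reading out distance from overlapping place fields can be illustrated with a toy 1-D model. The Gaussian place fields and the Euclidean read-out below are illustrative assumptions, not the network's learned code:

```python
import math

def place_code(x, centers, width=1.0):
    """Activation of each place-field unit at location x: a Gaussian
    bump around the unit's center (illustrative, 1-D)."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def code_distance(x1, x2, centers, width=1.0):
    """Distance read out from the difference of the overlapping
    place-field activation vectors at the two locations."""
    a = place_code(x1, centers, width)
    b = place_code(x2, centers, width)
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
```

With evenly spaced centers, nearby locations share most of their active place fields, so the code distance grows with physical distance (until the codes become nearly disjoint and the measure saturates).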
|
All datasets supporting the findings of this study, including the latent variables for the autoencoding and predictive coding neural networks, as well as the training and validation datasets, are available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. Researchers and readers interested in accessing the data for replication, verification, or further studies can contact the corresponding author or refer to the supplementary materials section for more details.
|
The code supporting the conclusions of this study is available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. The repository contains the Malmo environment code, training scripts for both the predictive coding and autoencoding neural networks, as well as code for the analysis of predictive coding and autoencoding results. Should there be any questions or need for clarifications about the codebase, we encourage readers to raise an issue on the repository or reach out to the corresponding author.
|
C
|
We turn to more convenient and fast EEG signals and focus on object recognition tasks, in which semantic information is the key gain of natural image decoding compared to visual decoding of contrast, color, etc.
|
Beyond the self-supervised framework, we try to demonstrate the biological plausibility by resolving the visual processing of EEG signals.
|
We have tried to demonstrate the feasibility and plausibility of EEG-based image decoding from three aspects: zero-shot classification performance, detailed resolving of the brain activity, and model interpretation.
|
Motivated by these challenges, we present an EEG-based image decoding framework that employs self-supervised learning, enabling the model to achieve zero-shot generalization in object recognition tasks, further demonstrating the feasibility.
|
We demonstrate the feasibility of investigating natural image information from EEG signals. Extensive experiments affirm the biological plausibility, yielding a resolution of human object recognition from temporal, spatial, spectral, and semantic aspects.
|
B
|
Here, we introduce notation defining the probability of observing a particular gene tree topology under uniform sampling. The values $h_w(x,y)$ each correspond to the probability of the $w$th gene tree topology as follows. Let $w=1$ correspond to the species tree topology. Let $w=2$ correspond to either of the two alternative balanced topologies, which have the same probability by exchangeability of sibling species. Let $w=3$ correspond to any of the four caterpillar topologies in which the cherry is $\{a,b\}$. Let $w=4$ correspond to any of the caterpillar topologies in which the cherry is $\{c,d\}$. Let $w=5$ correspond to any of the remaining eight caterpillar topologies.
|
The main theorem of this section establishes possible anomaly zones for the caterpillar tree. That is, given any choice of birth rate $\lambda$ and death rate $\mu$, the caterpillar species tree with some choices of branch lengths produces a gene tree distribution in which an alternate topology of the uniformly sampled gene tree can have maximal probability. We show that such anomalous gene trees can correspond only to balanced quartets. The species tree topology must have probability greater than any other caterpillar topology, for any choice of branch lengths. Theorem 4 is also a parallel result to what is found in MSC.
|
The main result of this section is that the species tree topology corresponds to the uniformly sampled gene tree with maximal probability. The main implication of this result is that when more independent gene trees are given, the democratic vote estimator applied to the uniform sampled gene trees obtains the species tree topology with probability approaching one. Theorem 2 is a parallel result to what is found in MSC.
|
In this paper, the distribution of gene trees is described further for gene trees generated under GDL. With this further information, we describe when anomaly zones can exist for gene trees generated under GDL for rooted species trees on either three or four species. As with anomalous gene trees in the multispecies coalescent model, the lengths of interior edges of the species tree are important. As the interior branch lengths in the species tree grow to infinity, the probability that the gene tree topology coincides with that of the species tree goes to $1$. The discordant gene trees have less probability. Similarly for GDL, species trees with longer interior edges have lower probabilities of discordant gene trees. However, the parameters governing birth and death are also relevant. As observed in Hill et al. (2022), when the per-capita birth rate is high, the number of edges is high and the signal emitted by the species tree diminishes. Conversely, when the birth rate is 0, every discordant gene tree has probability zero, for any setting of branch lengths in the species tree. Similar effects occur when the death rate is high enough to prevent excessive branching in the GDL process, but explicit quantitative results are required to understand this effect. This paper provides results that aid in intuiting the connection between the birth and death rates and gene tree discordance, but the focus is on the number of copies in the ancestral population rather than the birth and death rates themselves. The main results apply to any choice of birth and death rates and species trees with three or four leaves.
|
If the gene trees are assumed independent of each other, then the “democratic vote” estimator finds the species tree with the highest probability by counting the number of times each branching pattern appears in the list of gene trees. As more independent gene trees are accumulated, the gene tree with the highest probability obtains the most votes almost surely. As in Degnan and Rosenberg (2006), this estimate of the species tree is simply the gene tree topology that occurs most often. Even under these ideal assumptions, there exist species trees for which the democratic vote of MSC gene trees is positively misleading. Such rooted trees exist when the number of species is as few as four. Similarly, the democratic vote estimate can be positively misleading for some unrooted trees with as few as five species. Species trees are in the anomaly zone if the gene tree with maximum probability is discordant from the species tree (Degnan and Rosenberg (2006)). However, by using supertree methods such as ASTRAL (Mirarab et al. (2014)) on unrooted quartets, any unrooted species tree can be consistently estimated in a polynomial number of species and polynomial number of gene trees. The ASTRAL suite has found extensive usage across many biological datasets. Finite sample guarantees have also been developed, see Shekhar et al. (2018). This type of result assumes some level of error tolerance $\epsilon$, then provides a minimum number of genes that are required to obtain a provable amount of error below the tolerance level.
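The democratic vote estimator itself is straightforward to sketch, as a frequency count over observed gene tree topologies (topology strings below are illustrative Newick-style labels):

```python
from collections import Counter

def democratic_vote(gene_tree_topologies):
    """Return the topology appearing most often in a list of
    (assumed independent) gene tree topologies -- the democratic
    vote estimate of the species tree."""
    counts = Counter(gene_tree_topologies)
    topology, _ = counts.most_common(1)[0]
    return topology
```

In an anomaly zone a discordant topology has maximal probability, so as the number of genes grows this estimator converges to the wrong tree: more data makes it more confidently misleading.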
|
B
|
$V(L)<\Lambda<\infty$ so that $N>N_*=\exp(V(L)\epsilon^{-1})$.
|
If $V(L)<\Lambda<S_{T|T}<\infty$ then $N>N_*$ and there is a sufficient number of walkers to accomplish the rare event in finite time as $\epsilon\to 0^{+}$.
|
In this regime, there is a sufficient number of walkers to accomplish the extreme rare event in finite time as $\epsilon\to 0^{+}$.
|
As in Regime 1, we fix the number of walkers to be $N=\lfloor\exp(\epsilon^{-1}\Lambda)\rfloor$ and we (i) asymptote $\epsilon\to 0^{+}$ before (ii) asymptoting $\delta\to 0^{+}$.
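The condition $N>N_*=\exp(V(L)\epsilon^{-1})$ with $N=\lfloor\exp(\epsilon^{-1}\Lambda)\rfloor$ can be checked numerically; the sample parameter values below are illustrative, not taken from the paper:

```python
import math

def enough_walkers(Lambda, V_L, eps):
    """Check the regime condition N > N_* with
    N = floor(exp(Lambda / eps)) and N_* = exp(V(L) / eps).
    For Lambda > V(L) this holds once eps is small."""
    N = math.floor(math.exp(Lambda / eps))
    N_star = math.exp(V_L / eps)
    return N > N_star
```

The exponential scaling makes the threshold sharp: whether $\Lambda$ sits above or below $V(L)$ decides the outcome, and shrinking $\epsilon$ only widens the gap.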
|
In this regime, increasing the number of walkers (i.e., increasing "labor") has the strongest effect on reducing the extreme first hitting time.
|
B
|
Due to the large size of the corresponding vocabulary, the corpus needs to be pre-processed using word embedding techniques before sending the sentences into the network [39]. Here we describe the main steps.
|
In the bottom part of the figure, we select untrained, trained-for-five-epochs, and fully trained RNN with meta predictive learning to show the performances at different training stages in generating one of the sentences in the test dataset. The correctly predicted tokens from the test sentence are highlighted, while the wrongly predicted tokens are gray colored. The indicated accuracy is the ratio of the number of correctly predicted tokens from the test sentence to the total number of tokens in the sentence. The mean accuracy evaluated from 100 sentences is about $0\%$, $21.3\%\pm 10.5\%$, and $23.5\%\pm 11.3\%$ at the three shown stages, respectively. Note that all the models share the same training hyperparameters like batch size, learning rate, and training optimizers (see appendix B for details).
|
To study the network behavior, we plot the distribution of hyperparameters $m$, $\pi$, $\Xi$ when the RNN network is trained with the MPL method, as shown in Fig. 6. We find that the mean weight $m$ for all layers is symmetrically distributed around zero, with a relatively narrow distribution. The distribution of $\pi$ for all layers is of an L-shape and peaks at $\pi=0$, indicating a dense network is favored and formed after learning. The distribution of $\Xi$ is of the U-shape and has two peaks. One peak is at $\Xi=0$, indicating that these weights are deterministic and could only take a single value of $m$, and the other peak is at $\Xi\simeq 0.01$, indicating that the corresponding connection can carry a range of candidate values. Currently, it remains unknown how to relate these microscopic details of the network structure to the decoding of the semantic information in the corpus. It is thus important in future works to design analytically tractable models of language processing bridging neurophysiological plausibility and the superperformance observed in state-of-the-art architectures, which would help to uncover key neuron, synapse, and circuit motif types in the human brain.
|
Our proposed MPL achieves equal or even better performance compared with traditional methods in all three tasks, showing the advantage of ensemble predictive coding, since examples of single networks can be readily sampled from the trained distribution [28, 18]. By analyzing the distribution of hyperparameters, we are able to find that most connections are deterministic in the input and recurrent layers, while the output layer has a higher level of variability. The observation that the output connections bear a higher level of variability is a universal result in all three tasks, which may particularly connect to the generative function of the language processing model. The network performance changes non-linearly and continuously with data load $\alpha=M/N$, where $M$ is the training data size and $N$ is the number of neurons in the circuit, and we found that the critical point is given by $\alpha_c\approx 0.02$, beyond which a chance level of prediction is absent. With increasing size of the training data, the performance further improves until perfect learning is achieved. We can then test the resulting network to generate text of arbitrary length (to create something is a first step to understand that thing), and the generated text follows perfectly the grammatical rule set before training.
In addition, our MPL is able to accomplish comparable performances on the Penn Treebank corpus with other training methods in RNN, although the framework is less accurate than the transformer structure, which thereby calls for further studies about the mechanistic difference between biological learning and non-biological transformer learning, and how the latter can inspire discovery of new fundamental elements of computation that can realize logical and mathematical reasoning in many different tasks [29, 30].
|
The first step is to use a tokenizer tool to split the sentences into tokens and replace useless words or characters with a special token named <unk>, indicating an unknown token. In addition, tokens that appear fewer than five times in the whole corpus are replaced with <unk> to help the network concentrate on the major high-frequency tokens.
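This rare-token replacement step can be sketched with a simple corpus-wide frequency count (function name and threshold default are illustrative):

```python
from collections import Counter

def replace_rare_tokens(sentences, min_count=5, unk="<unk>"):
    """Replace tokens occurring fewer than `min_count` times across
    the whole corpus with the special <unk> token; `sentences` is a
    list of token lists produced by the tokenizer."""
    counts = Counter(tok for sent in sentences for tok in sent)
    return [[tok if counts[tok] >= min_count else unk for tok in sent]
            for sent in sentences]
```

Counting over the whole corpus first (rather than per sentence) matters: a token that is rare in one sentence but frequent overall must be kept.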
|
D
|
We conducted an analysis of bright-field (BF) images of NE organoids formed in the neural induction medium with 2% and 8% Geltrex respectively at day 7 and day 18. The findings are presented below.
|
To further validate our hypothesis, we conducted a comparison between the outcomes of our research and those from a previously published peer-reviewed paper about hiPSC-derived NE organoids [5]. These NE organoids are generated to mimic neural tube development at the early embryogenesis stage. They are defined by round morphology but different sizes during culture in vitro. Briefly, the result from our study shows the robustness of SegmentAnything in detecting NE organoids from bright-field images. Furthermore, the automatic organoid size quantification closely aligns with the manually measured results from the publication, reinforcing the efficacy of our proposed approach. We are the first to investigate the efficacy of SegmentAnything on organoid detection, and all code is open sourced at https://github.com/XiaodanXing/SAM4organoid.
|
Figure 3: (a) Comparison of average detection scores between our method and StarDist. (c) and (d) are the segmentation results from our method and the StarDist algorithm, respectively.
|
We conducted a comparison of the mean average precision (mAP) between the organoid detection results obtained from our method and those obtained from the open-sourced StarDist method [3] in Fig. 3. Instead of training the StarDist method from scratch, we ran inference with the '2D_versatile_fluo' model using default settings. The mAP comparison results are depicted in Fig. 4(a), while the segmentation comparison results are presented in Fig. 4(c) and (d). To ensure a fair and unbiased comparison, we refrained from manually removing any wrongly segmented regions (as described in challenge 4) from our proposed method. The results clearly demonstrate that the StarDist method, without any training or fine-tuning on the test modality, failed to achieve accurate segmentation of organoids.
|
We also analyzed the morphological features of organoids among different groups in Fig. 4. Our results indicate that in the later stage of organoid formation (day 18), a higher concentration of Geltrex leads to smaller organoid sizes, which aligns with the hypothesis from [5] that Geltrex, being a hydrogel, undergoes solidification at 37 degrees Celsius, thereby exerting pressure on organoid formation. Furthermore, our results are in agreement with the manually annotated results, highlighting the capability of our proposed toolbox in facilitating biological studies. The consistency between our automated analysis and the manually derived findings demonstrates the reliability and effectiveness of our approach in cellular analysis, offering valuable insights for further research and experimentation.
|
The starting point for the present paper is the recent article [10] just cited. In this seminal study, the author proposes a very persuasive stochastic model for brain-supervised learning.
|
In order to understand how biological neural networks (BNNs) work, it seems natural to compare them with artificial neural networks (ANNs). Although the definition of the latter is inspired by the former, they also differ in several aspects. One of them is the way the network parameters are updated.
|
We review and discuss this setup in Section 2. In this model the local updating rule of the connection parameters in BNNs turns out to be a zero-order optimization procedure. More precisely, it is shown in [10] that the expected value of the iterates coincides with a modified gradient descent. However, this holds only on average. The noise for such zero-order methods is so high that one can hardly imagine effective learning based on it, see [3, 8, 1]. The author himself writes in [10, Section 4]: “It remains to reconcile the observed efficiency of learning in biological neural networks with the slow convergence of zero-order methods.”
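The noise issue for zero-order methods can be illustrated with a standard Gaussian-smoothing gradient estimator: its expectation matches the gradient (exactly so for a quadratic loss), but a single sample is extremely noisy, which is the slow-convergence problem quoted above. A minimal sketch on an illustrative quadratic loss (this is not the specific model of [10]):

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_order_grad(f, w, eps=1e-3, n_samples=1):
    """Gaussian-smoothing estimator: E[g] ≈ ∇f(w), but with high per-sample variance."""
    d = len(w)
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        # only function *values* are used -- no gradient computation
        g += (f(w + eps * u) - f(w)) / eps * u
    return g / n_samples

# toy quadratic loss with known gradient 2w
f = lambda w: np.sum(w ** 2)
w = np.array([1.0, -2.0, 3.0])
g = zero_order_grad(f, w, n_samples=20000)
```

Many thousands of samples are needed here before the estimate is close to the true gradient, illustrating why naive zero-order learning is slow.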
|
It turns out that with this modification, the updates correspond approximately to a continuous descent step along the gradient flow, see Theorem 1. This can be interpreted in the sense that it is not biologically implausible that BNNs use a kind of SGD algorithm after all, but without explicitly computing the gradient.
|
In simple terms, an ANN learns from data by adjusting the weights of the connections between nodes in order to minimize a loss function that measures the difference between the desired output and the actual output of the network. More specifically, the optimization step is performed using the Stochastic Gradient Descent (SGD) algorithm, which iteratively updates the weights of the network by moving them in the direction of the steepest descent of the empirical loss function of a single training sample. The gradient itself is computed with the so-called backpropagation algorithm. In particular, the update of any parameter is based on the states of all other parameters. Such a mechanism does not seem to be biologically plausible for BNNs, as many authors have pointed out. Parameter update in BNNs occurs only locally, and distant neurons are only indirectly connected through the endogenous reward system. This observation is closely related to the weight transportation problem [6, 2, 4]. We refer to [12, 11] for a detailed discussion about the role of SGD in BNNs, which the author of [10, Section 5] summarizes as follows:
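As a point of reference for this comparison, the SGD update itself is simple to state. A minimal sketch on a toy least-squares problem (the loss, data, and learning rate here are illustrative, not taken from the paper under discussion):

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd(grad, w, samples, lr=0.01, epochs=50):
    """Plain SGD: one gradient step per training sample."""
    for _ in range(epochs):
        for x, y in samples:
            w = w - lr * grad(w, x, y)
    return w

# least-squares loss l(w) = (w·x - y)^2; its gradient is 2(w·x - y)x
grad = lambda w, x, y: 2 * (w @ x - y) * x
true_w = np.array([2.0, -1.0])
xs = rng.standard_normal((100, 2))
samples = [(x, x @ true_w) for x in xs]
w = sgd(grad, np.zeros(2), samples)
```

The point of contrast is the `grad` function: in an ANN it is supplied by backpropagation, which uses the states of all parameters, whereas a BNN would have to get by with local information only.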
|
In conclusion, in order to narrow the gap between the promise of aqueous iontronic neuromorphic computation and its implementation, our work demonstrates the capabilities of a fluidic memristor by employing it as an artificial synapse for carrying out neuromorphic reservoir computing. Temporal signals, in the form of voltage pulse trains, that together represent (handwritten) numbers were distinguished by individual channels for subsequent in silico classification with a simple readout function, demonstrating (at least) comparable performance to more conventional solid-state platforms [34, 35, 33, 32]. Additionally, the device is fabricated with a cost-effective, simple soft-lithography process. The achieved computing properties are inspired and supported by a quantitative predictive theoretical model of the device dynamics. Consequently, our work establishes a solid foundation, both theoretically and experimentally, for future investigations into fluidic memristive systems and their application in aqueous neuromorphic computing architectures, paving the way for computing systems that more closely resemble the brain's fascinating aqueous processes.
|
This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work is also supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (2020R1A2C2009093) and by the Korea Environment Industry & Technology Institute (KEITI) through its Ecological Imitation-based Environmental Pollution Management Technology Development Project funded by the Korea Ministry of Environment (MOE) (2019002790007).
|
The PNP equations form an effective theoretical framework to analyse ion transport in charged porous materials [42]. However, the complex three-dimensional geometric structure of the NCNM, with features on length scales varying from the colloidal surface-surface distance all the way up to the channel length, introduces intricate numerical challenges for fully spatially resolved solutions of the PNP equations. To simplify, we consider slab-averages, i.e. the average along a cross section [38, 43, 44, 40, 41], of the electric potential and the ionic concentrations in the porous structure between the colloids. Although this sacrifices nanoscale detail, it does account for the pinched electric field lines towards the channel tip and for the spatial variation of the ionic charge density. Through this method we reduce the three-dimensional Nernst-Planck equation to a one-dimensional form, providing an expression for the total salt and charge flux through the channel. The divergence of the total salt flux qualitatively shows that the experimentally observed inhomogeneous ionic space charge density forms a source (sink) term of salt, resulting in salt accumulation (depletion) upon a positive (negative) applied voltage $V$. Quantitatively, a divergence-free steady-state condition on the total salt flux provides a differential equation for the voltage-dependent slab-averaged salt concentration profile, which we solve analytically. By viewing the channel as a series of conductive slabs, with the conductance of each slab proportional to the (now known) voltage-dependent salt concentration, we calculate the steady-state channel conductance $g_{\infty}(V)=I(V)/V$. This describes how an increase (decrease) in salt in the channel at positive (negative) voltages makes the channel more (less) conductive.
Our theory thus quantitatively confirms the experimental hypothesis that the ionic space charge distribution results in salt concentration polarisation and hence in current rectification [36]. Moreover, leveraging the general analytical nature of our theory, we demonstrate that any inhomogeneous ionic space charge density in generic channels (provided they are well-described by slab-averaged PNP equations) is the key ingredient for a source-sink term of salt and thus for current rectification, derived in detail in the SI. Therefore we not only provide a mechanistic insight as to how the space charge leads to current rectification in the channel of present interest, but this understanding could also explain current rectification in channels with other sources of space charge densities and with other geometries [23, 37]. Furthermore, this insight may provide inspiration for future design of devices that exhibit current rectification.
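The "series of conductive slabs" picture can be sketched numerically. Assuming a known slab-averaged salt concentration profile (the analytic voltage-dependent profile of the model is not reproduced here; a hypothetical profile stands in for it), the steady-state conductance follows from adding slab resistances in series:

```python
import numpy as np

def channel_conductance(conc, slab_len, k=1.0):
    """Conductance of a channel modelled as N conductive slabs in series.

    Each slab's conductance is taken proportional to its salt
    concentration (g_i = k * c_i / slab_len); series resistances add.
    """
    resistances = slab_len / (k * np.asarray(conc))
    return 1.0 / resistances.sum()

# hypothetical profiles along the channel axis (arbitrary units)
x = np.linspace(0, 1, 200)
slab = x[1] - x[0]
g_base = channel_conductance(np.ones_like(x), slab)        # homogeneous salt
g_accum = channel_conductance(1.0 + 0.5 * np.sin(np.pi * x), slab)  # accumulated salt
```

Salt accumulation along the channel raises every slab conductance and hence the total conductance, matching the qualitative rectification mechanism described above.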
|
The fabrication of the microchannel and the formation of the NCNM for the fluidic memristor are similar to previously reported methods [36, 37] and are described in detail in the SI. A master for multi-layered channels (target heights of 5 μm for the shallow channel and 100 μm for the deep channel) was created using multi-step UV exposure with negative photoresist (PR, SU-8 2005, 3050, Microchem Co., USA). After surface treatment of the master with (3,3,3-trifluoropropyl)silane (452807, Sigma-Aldrich, USA) for easy separation, polydimethylsiloxane (PDMS, Sylgard, Dow Corning Korea Ltd., Korea) was poured and cured by heating. The detached PDMS device was bonded to a slide glass. The NCNM was formed by self-assembly of homogeneous, negatively charged nanoparticles in the desired shallow channel, using Laplace pressure to halt the solvent at the base while the solvent evaporated. A close-packed fcc lattice was formed by the growth of the ordered lattice induced by the evaporation.
|
To illustrate how the results shown in Fig. 4(a) can be leveraged to classify more complex data inputs with an explanatory example, let us consider the simple single-digit numbers 0-9, represented by black and white 4×5 pixel images. By converting a row of 4 pixels to a string of bits by letting a white pixel correspond to a "0" and a black pixel to a "1", we can encode the entire image with 5 strings of 4 bits, as shown in Fig. 4(b) for the number "2" (other digits are shown in the SI). These bit-strings then generate 5 distinct signature outputs, as we saw in Fig. 4(a). A single-layer fully connected 5×10 neural network is then trained in silico to classify the 5 measured conductances as numbers. This protocol is schematically illustrated in Fig. 4(c). Other types of simple readout functions could possibly also suffice. We trained our read-out network in silico using the results shown in Fig. 4(a, bottom). To incorporate the (device-to-device) variability, each individual pulse was subject to some noise newly drawn from a normal distribution with mean 0 and standard deviation given by the experimentally determined standard deviation for that specific voltage train. During training, we repeated this process 100 times for each of the numbers 0-9, achieving perfect classification of all 10 digits with noise-free inference measurements. If we also take the noise into account during inference, we still achieve an overall accuracy of 95%, highlighting the system's robustness against noise. Note that actual training is only performed on a simple and small neural network, that would otherwise not be capable of handling temporal inputs, while the "hard" work of separating the time-dependent signals is handled by the internal physics of our fluidic memristor.
Ultimately, this successful classification of simple digit images serves as an explanatory proof-of-concept for the broader application of performing complex time-dependent data analysis tasks.
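A toy version of the pulse-train encoding illustrates why a simple readout suffices: each 4-bit row drives a conductance trace whose final value is distinct for every bit pattern, so the readout layer only has to map 5 scalars to a class. The update rule below is a deliberately simplified facilitation/decay model with made-up parameters, not the fitted device model from the paper:

```python
import numpy as np

def final_conductance(bits, g0=0.1, g_max=1.0, a=0.5, decay=0.3):
    """Toy memristor response to a 4-bit pulse train: a '1' pulse
    facilitates the conductance towards g_max, a '0' interval lets it decay."""
    g = g0
    for b in bits:
        g = g + a * (g_max - g) if b else g * (1.0 - decay)
    return g

# every possible 4-bit row of a 4x5 pixel digit maps to its own conductance
all_strings = [tuple((n >> i) & 1 for i in range(3, -1, -1)) for n in range(16)]
outputs = [final_conductance(s) for s in all_strings]
```

Because the facilitation and decay steps do not commute, the final conductance depends on the order of the bits, not just their count; all 16 patterns land on distinct values, which is exactly the separability the in silico readout network exploits.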
|
\textbf{F}=\begin{bmatrix}\beta_{z}\frac{\Lambda}{\mu}&0&0&0&\beta_{m}\frac{\Lambda}{\mu}\\ 0&\beta_{h}\frac{\Lambda}{\mu}&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ \alpha_{m}\frac{\Lambda_{m}}{\mu_{m}}&0&\alpha_{m}\frac{\Lambda_{m}}{\mu_{m}}&0&0\end{bmatrix},\quad\text{and}\quad\textbf{V}=\begin{bmatrix}\kappa_{1}&0&0&0&0\\ 0&\kappa_{2}&0&0&0\\ 0&0&\epsilon\delta_{z}+\kappa_{3}&0&0\\ 0&0&0&\kappa_{4}&0\\ 0&0&0&0&\mu_{m}\end{bmatrix}.
|
Here, the matrix $\textbf{FV}^{-1}$ is given by
|
where $\rho$ represents the spectral radius of the matrix $\textbf{FV}^{-1}$.
|
\mathcal{R}_{h}=\rho(\textbf{FV}^{-1})=\dfrac{\beta_{h}}{\sigma_{1}+\mu}\,\dfrac{\Lambda}{\mu},
|
\mathcal{R}_{0}=\rho(\textbf{FV}^{-1})=\max\{\mathcal{R}_{h},\mathcal{R}_{z}\},
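The next-generation-matrix computation of $\mathcal{R}_0$ can be sketched numerically as the spectral radius of $\textbf{FV}^{-1}$. The diagonal 2×2 blocks and all parameter values below are illustrative placeholders, not the model's fitted values:

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue magnitude of a square matrix."""
    return max(abs(np.linalg.eigvals(M)))

# illustrative (made-up) parameter values
beta_h, beta_z, Lam, mu = 0.4, 0.3, 0.1, 0.1
sigma1, kappa2 = 0.2, 0.5  # hypothetical removal/transition rates

# toy diagonal new-infection and transition blocks, so that
# R_h = beta_h/(sigma1 + mu) * Lam/mu appears directly on the diagonal
F = np.diag([beta_h * Lam / mu, beta_z * Lam / mu])
V = np.diag([sigma1 + mu, kappa2])
R0 = spectral_radius(F @ np.linalg.inv(V))
```

With diagonal blocks the spectral radius is simply the larger of the two diagonal ratios, mirroring the $\mathcal{R}_0=\max\{\mathcal{R}_h,\mathcal{R}_z\}$ structure above; the full 5×5 matrices from the text plug into the same two lines.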
|
We are concerned with spiking neural networks for the BG. In 2001, based on their proposed functional anatomy GPR1 , Gurney et al. developed an artificial neural network for the BG GPR2 . Later, in 2006, based on the anatomical and physiological data, Humphries et al. Hump1 in the Gurney group developed a physiological neural model for the BG by employing the leaky integrate-and-fire neuron model with one dynamic variable LIF . However, the effects of dopamine on the BG cells and synaptic currents were not considered there. In 2009, such effects of dopamine modulations on the striatal cells (D1 and D2 SPNs and fast-spiking interneurons) and the synaptic currents into the striatal cells were studied intensively by Humphries et al. Str2 ; SPN1 by using the Izhikevich neuron models Izhi1 ; Izhi2 ; Izhi3 ; Izhi4 . In 2017, Fountas and Shanahan CN6 ; CN7 extended the work of Humphries et al. Str2 ; SPN1 to the whole BG (including GP, STN, and SNr in addition to the striatal cells) by employing the Izhikevich neuron model, and studied oscillatory firing behaviors in the BG CN6 ; CN7 where dopamine effects were also considered. Also, in 2015 Mandali et al. Man used the Izhikevich neuron models arranged on a 2D lattice for the BG cells and studied synchrony, exploration, and action selection. Recently, in 2021 Navarro-López et al. CN1 also developed the BG-thalamo-cortical network (where the Izhikevich neuron models were also used), and investigated the BG-thalamo-cortical oscillatory activity. In some other spiking neural networks for the BG, instead of the Izhikevich neuron model, the adaptive exponential integrate-and-fire model with two dynamic variables AdEx was used for the BG cells for study of signal enhancement by short-term plasticity CN11 and learning stimulus-action association CN20 .
|
In this section, based on the spiking neural networks (SNNs) for the BG developed in previous works SPN1 ; SPN2 ; CN6 , we refine the BG SNN so that it is suitable for our study. This BG SNN is based on anatomical and physiological data of the BG as follows. For the framework of the BG SNN (e.g., the number of BG cells and the synaptic connection probabilities), refer to the anatomical works Ana1 ; Ana2 ; Ana3 ; Ana4 . For the intrinsic parameter values of single BG cells, we refer to
|
In this paper, we consider a spiking neural network of the BG, based on anatomical and physiological data obtained in rat-based works.
|
Based on the anatomical information Ana3 , the numbers of the striatal cells, the STN cells, the SNr cells, and the GP cells in the BG are chosen.
|
[2, 31]. Rather than calculating the importance of a variable for a single model, our framework finds the importance of a variable for all models within a Rashomon set; moreover, our framework is applicable to all of these model reliance metrics.
|
Figure 1 provides a demonstration of this problem: across 500 bootstrap replicates from the same data set, the Rashomon set varies wildly – ranging from ten models to over ten thousand — suggesting that we should account for its instability in any computed statistics. This instability is further highlighted when considering the Model Class Reliance (MCR) variable importance, which is the range of model reliance (i.e., variable importance) values across the Rashomon set for the given dataset [15] (we define MCR and the Rashomon set more rigorously in Sections 2 and 3 respectively).
|
In contrast, model class reliance (MCR) methods describe how much a class of models (e.g., decision trees) relies on a variable. Fisher et al. [15] uses the Rashomon set to provide bounds on the possible range of model reliance for good models of a given class. Smith et al. [41] analytically find the range of model reliance for the model class of random forests. Zhang and Janson [52] introduce a way to compute confidence bounds for a specific variable importance metric over arbitrary models, which Aufiero and Janson [3] extend so that it is applicable to a broad class of surrogate models in pursuit of computational efficiency. These methods report MCR as a range, which gives no estimate of variable importance – only a range of what values are possible. In contrast, Dong and Rudin [12] compute and visualize the variable importance for every member of a given Rashomon set in projected spaces, calculating a set of points; however, these methods have no guarantees of stability to reasonable data perturbations.
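The notion of a Rashomon set and the MCR range over it can be made concrete with a small sketch: enumerate a grid of models from a fixed class, keep those within a multiplicative loss threshold of the best one, and report the range of permutation-based model reliance across them. The data, the linear model class, and the threshold below are all illustrative choices, not the setup of any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_reliance(predict, X, y, j, n_perm=20):
    """Permutation-based model reliance: loss increase when feature j is shuffled."""
    base = np.mean((predict(X) - y) ** 2)
    losses = []
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        losses.append(np.mean((predict(Xp) - y) ** 2))
    return np.mean(losses) - base

# toy data: y depends on feature 0 only, plus a little noise
X = rng.standard_normal((300, 2))
y = X @ np.array([1.0, 0.0]) + 0.1 * rng.standard_normal(300)

# Rashomon set: all grid models whose loss is within (1 + eps) of the best
grid = [np.array([a, b]) for a in np.linspace(0, 2, 21) for b in np.linspace(-1, 1, 21)]
losses = [np.mean((X @ w - y) ** 2) for w in grid]
eps, best = 2.0, min(losses)
rashomon = [w for w, l in zip(grid, losses) if l <= (1 + eps) * best]

# MCR for feature 0: range of model reliance across the Rashomon set
mr_x0 = [model_reliance(lambda Z, w=w: Z @ w, X, y, 0) for w in rashomon]
mcr = (min(mr_x0), max(mr_x0))
```

Rerunning this on bootstrap replicates of `(X, y)` would reproduce, in miniature, the instability of the Rashomon set size and of the MCR range illustrated in Figure 1.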
|
Several methods for measuring the MR of a model from a specific model class exist, including the variable importance measure from random forest which uses out-of-bag samples [7] and Lasso regression coefficients [20]. Lundberg et al. [28] introduce a way of measuring MR in tree ensembles using SHAP [27]. Williamson et al. [48] develop MR based on the change in performance between the optimal model and the optimal model using a subset of features.
|
Figure 1: Statistics of Rashomon sets computed across 500 bootstrap replicates of a given dataset sampled from the Monk 3 data generation process [42]. The original dataset consisted of 124 observations, and the Rashomon set was calculated using its definition in Equation 1, with parameters specified in Section D of the supplement. The Rashomon set size is the number of models with loss below a threshold. Model reliance is a measure of variable importance for a single variable — in this case, $X_2$ — and Model Class Reliance (MCR) is its range over the Rashomon set. Both the Rashomon set size and model class reliance are unstable across bootstrap iterations.
|