Dataset Viewer
Auto-converted to Parquet
Columns: context (string), A (string), B (string), C (string), D (string), label (string, 4 classes)
In our numerical experiments, the test of another sampling (Algorithm 3) is actually unnecessary. Is it possible to show this analytically as well?
In the case that the condition in Eq. (16) is indeed satisfied, we can find $\nu$ by a brute-force search. Let $\mathcal{V}$ be a trial set of the grid shift parameters. For each $\nu\in\mathcal{V}$ (throughout this paper, we require that $|\nu|\leq 1/2$), let $s_{\nu}$ be the solution of the compressed sensing subroutine and $s_{\nu_{\ast}}$ be the final solution with the smallest 1-norm:
which we call the optimal grid decomposition of $y^{0}$.
It is possible to find the optimal grid shift parameter by optimization instead of a brute-force search over a trial set, which should significantly improve our current result.
An overview of our algorithm is described as follows. For signal vectors of size $N$, when the frequencies are all nearly on-grid ($f\approx n/N$, $n\in\mathbb{Z}$) and the noise for each sample is bounded by a constant, the convex relaxation algorithm can recover the frequencies with only $\mathcal{O}(\operatorname{poly}\log N)$ samples, which satisfies the Heisenberg limit. With no prior knowledge about $f$ (i.e., $f$ could be off-grid), we introduce a grid shift parameter $\nu$ such that after shifting the signal by $e^{-\mathrm{i}2\pi ft}\to e^{-\mathrm{i}2\pi(f-\nu/N)t}$, the dominant frequencies of the new signal become nearly on-grid. This step requires an assumption on the signal, but we will show that a wide range of signals satisfy such an assumption. For each trial of $\nu$, we run the compressed sensing subroutine on the data set $\{y_{t}\}_{t\in\mathcal{T}}$ to obtain a trial solution $s_{\nu}$. The optimal $\nu$ is the one with the smallest $\|s_{\nu}\|_{1}$. By searching for the optimal grid-shift parameter in a finite set $\mathcal{V}$, the accuracy of the dominant frequencies is $\mathcal{O}(\sigma_{\mathrm{off}}/N)$, where $\sigma_{\mathrm{off}}$ is the maximal entry of the minimal off-grid component. This quantity is related to the noise, the frequency gap, and the residual part of the signal. In terms of the maximum runtime $T_{\max}$, since the samples of the compressed sensing algorithm are integers in $[1,N]$, $T_{\max}$ scales linearly in $N$, and $T_{\text{total}}$ is $\mathcal{O}(N\operatorname{poly}\log N)$.
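As a rough illustration of the grid-shift search just described, the sketch below demodulates the samples for each trial shift, runs a generic L1-minimization in place of the paper's compressed sensing subroutine (a simple complex-valued ISTA solver is assumed here, not the authors' routine), and keeps the solution with the smallest 1-norm. The dictionary construction and function names are illustrative assumptions only.

```python
# Hedged sketch of the grid-shift search (not the authors' code). Assumptions:
# samples y_t are taken at integer times in a set T, the on-grid dictionary is
# A[t, n] = exp(-1j*2*pi*n*t/N), and the compressed-sensing subroutine is
# approximated by a plain complex-valued ISTA solver.
import numpy as np

def ista_l1(A, y, lam=1e-2, iters=500):
    """Minimize 0.5*||A s - y||_2^2 + lam*||s||_1 for complex s (plain ISTA)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        g = s - (A.conj().T @ (A @ s - y)) / L
        mag = np.maximum(np.abs(g) - lam / L, 0.0)
        s = mag * np.exp(1j * np.angle(g))  # complex soft-thresholding
    return s

def grid_shift_search(y, times, N, trial_shifts):
    """Return (best_shift, best_solution) with the smallest 1-norm."""
    best = None
    for nu in trial_shifts:                 # brute-force search over the trial set V
        y_shift = y * np.exp(1j * 2 * np.pi * nu * times / N)   # shift f -> f - nu/N
        A = np.exp(-1j * 2 * np.pi * np.outer(times, np.arange(N)) / N)
        s = ista_l1(A, y_shift)
        if best is None or np.sum(np.abs(s)) < best[2]:
            best = (nu, s, np.sum(np.abs(s)))
    return best[0], best[1]
```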
C
$\mathsf{expensive}$ on the channel $c_{1}$.
Hopefully, the process $P$ will choose $\mathsf{expensive}$ as well so that the cost
However, the process $P$ may fail to choose $\mathsf{expensive}$ on the channel
$\mathsf{expensive}$ on the channel $c_{1}$.
If the process $P$ chooses $\mathsf{expensive}$, 3 units of potential are sent as
A
The GFlops calculations encompass tensor contractions, QR factorization, and low-rank approximations, as outlined in the model detailed in Section 2.2.
In (a)(b), the input networks are MPSs with different ranks. In (c)(d), the inputs are balanced binary tree (BBT) tensor networks with different ranks.
It is worth noting that in our reported results, the execution time excludes the graph analysis part, which involves graph embedding and computing the contraction sequence of given tensor networks. This part remains independent of the tensor network ranks and is negligible when the ranks are high.
It is worth noting that Algorithm 5 may produce an embedding in which there exists a vertex in the embedding tree whose corresponding tensor network partition is empty. In such cases, we can address this problem by introducing identity matrices into the input graph. This adjustment ensures that the resulting tensor network remains equivalent while guaranteeing the non-emptiness of each partition.
The selection of the embedding tree is guided by an analysis of the structure of the input tensor network graph G𝐺Gitalic_G, its partitioning, and the contraction path. This analysis aims to identify a tree structure that optimizes the efficiency of both the current contraction and any subsequent contractions involving the contracted output. The determination of each embedding tree structure occurs in lines 5-9.
B
Additionally, the best-approximated isotropic displacement using (3.1) or (3.2) is unknown, as it naturally depends on the very parameters $\mu^{\rm iso}$, $\kappa^{\rm iso}$ that we want to determine here with this test. We could also use Norris' formula (1.10) in the cubic case but decide against it, in order to keep the method as general as possible, allowing us to find the best approximating isotropic counterpart $\mathbb{C}_{\rm iso}$ for any anisotropic elasticity tensor $\mathbb{C}_{\rm aniso}$. Thus, we apply zero Dirichlet boundary conditions on all four sides and, as a precaution, only use the displacement data inside a circle with a diameter of $0.5\,\mathrm{m}$ (half the size of the domain). As per Saint-Venant's principle, this, together with a sufficiently large ratio between the size of the domain and the interior circle where the load is applied, allows for sufficient accuracy of our method. In Section 4.1, we test different domain sizes in an isotropic setting $\mathbb{C}_{\rm aniso}=\mathbb{C}_{\rm iso}$ and compare our values $\mu^{\rm iso}$, $\kappa^{\rm iso}$ with the original parameters $\mu$ and $\kappa$ from $\mathbb{C}_{\rm iso}$.
These values are computed by minimizing the norm of the displacement $\lVert u(r)\rVert$ (norm) and the full displacement field $u(x_{1},x_{2})$ (disp), respectively, and show the discrepancy to the analytical solution of an infinite domain.
3.4 Fitting only against the norm $\lVert u(r)\rVert$ of the displacement
We present two different procedures for finding the best approximated isotropic elasticity tensor with quadratic error minimization using Mathematica. In the first one, we only consider the radially averaged norm of the displacement $\lVert u(r)\rVert$. In the second procedure, we take the full displacement solution $u(x_{1},x_{2})$ into account. For the former, we average the norm of the displacement $\lVert u(r)\rVert$ over all angles (for the cubic symmetry class considered here, an average between $0^{\circ}$ and $45^{\circ}$ would be sufficient), cf. Figure 8. Note again that the exact magnitude of the Dirichlet boundary conditions is not known a priori, as it depends on the final best approximating isotropic elasticity tensor $\mathbb{C}_{\rm iso}$. Therefore, we set zero Dirichlet boundary conditions on the outside and only use the data within a circle of half the size of the computed domain for the actual fitting procedure.
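As a loose illustration of this fitting procedure, the sketch below performs the quadratic-error minimization over the two isotropic parameters with SciPy instead of Mathematica; `simulate_norm` is a hypothetical stand-in for the forward solve returning the radially averaged displacement norm inside the inner circle, and is not part of the original setup.

```python
# Hedged sketch of the quadratic-error fit over (mu_iso, kappa_iso) described above.
# 'simulate_norm' is a hypothetical forward model: it returns ||u(r)|| on the radii
# 'r_data' for given isotropic parameters; 'u_norm_data' are the measured norms.
import numpy as np
from scipy.optimize import least_squares

def fit_isotropic(simulate_norm, r_data, u_norm_data, mu0=1.0, kappa0=1.0):
    def residual(p):
        mu, kappa = p
        return simulate_norm(mu, kappa, r_data) - u_norm_data
    result = least_squares(residual, x0=[mu0, kappa0],
                           bounds=([1e-8, 1e-8], [np.inf, np.inf]))
    return result.x  # best-fitting (mu_iso, kappa_iso)
```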
4.2 The best approximating $\mathbb{C}_{\rm iso}$ for the norm of the displacement $\lVert u(r)\rVert$
B
Compared to the U-Net and its variants, our method improves the DSC by 1.74% over the second-best SwinUNETR. When compared with other multimodal medical image segmentation models, our method improves the DSC by 1.29% over the second-best SegResNet. Moreover, Diff4MMLiTS significantly reduces false positives and false negatives, as reflected in the improved SE and PRE. This indicates enhanced tumor detection capability and a diminished likelihood of missed or incorrect detections. Overall, our proposed method is clearly superior and achieves the best performance in the multimodal liver tumor segmentation task.
In our framework, each stage is trained independently, with the output of one stage serving as the input for the subsequent one. This sequential training method ensures that each component is optimized for its specific role before integration into the overall framework. To comprehensively validate the importance of each module, we explore alternative methods by replacing the inpainter and the diffusion-based synthesis module. Specifically, we replace the inpainter in NCG with a median filter. On the other hand, we replace the proposed latent diffusion-based MCS module with CUT-GAN[35], which constructs the domain of images to be transformed by concatenating normal CTs with randomly generated tumor masks, and learns the mapping between this domain and the domain of the CT image with real tumor.
As illustrated in Table IV, we further evaluate the performance of the synthesis strategy on multimodal and unimodal segmentation methods. In all quantity settings, we employ nnUNet as the segmentation model architecture. The performance of unimodal segmentation models typically relies heavily on the quantity and diversity of training data, but our method requires only a small subset of multimodal data to exceed the performance of models trained on fully annotated unimodal data. The results indicate that when employing merely 10% of the data (real samples and inpaint-based normal multimodal CTs from three patients) as the training set for the segmentation model, it yields a DSC that is 2.63% higher than that of unimodal segmentation models. Furthermore, even when compared to multimodal segmentation models, our method achieves comparable results using only 70% of the paired training data. These findings confirm the effectiveness of our Diff4MMLiTS in enhancing the performance of liver tumor segmentation models.
To further evaluate the adaptability of Diff4MMLiTS, we use three backbone models in the MS module, namely U-Net, AttentionUNet, and nnUNet, with results presented in Table III. The findings indicate that our framework adapts seamlessly to all backbones, achieving notable performance improvements. Compared to segmentation models trained solely on real data, those employing the hybrid training strategy show improvements of 1.95%, 6.58%, and 2.68% in DSC. This demonstrates the adaptability of our framework to different backbone models and the effectiveness of the hybrid training strategy in enhancing segmentation performance.
We evaluate the results of our proposed method on publicly available external datasets to verify that the model trained with Diff4MMLiTS can effectively generalize to out-of-distribution data without the need for retraining on the new dataset. All methods are trained on mmLiTs and tested on lesion samples selected from LiTS. The comparison results are shown in Table II. Compared to nnUNet fully trained on real data, Diff4MMLiTS with the synthesis strategy achieves a 16.12% improvement in DSC. This implies that training models with such reliable synthetic images can effectively mitigate the risk of overfitting to in-distribution samples, thereby enhancing their ability to generalize to out-of-distribution samples. This underscores the potential of the proposed method as a promising solution for liver tumor screening.
D
Meanwhile, our fully explicit and unified representation supports highly efficient rendering, achieving superior efficiency over all competitors except 3DGS.
As shown in Figure 9, 4DGS works well under diverse lighting and weather conditions. It faithfully reconstructs high-frequency texture details and correctly models the geometry for both dynamic and static regions.
We propose a generic scene representation, 4D Gaussian splatting (4DGS), for modeling dynamic scenes, as shown in Figure 2.
The quality of synthesis in dynamic regions notably excels when compared to other methods. Several intricate details, including the black bars on the flame gun, the fine features of the right-hand fingers, and the texture of the salmon, are faithfully reconstructed, demonstrating the strength of our approach.
These representations preserve topological invariance and a low-frequency motion prior, and are thus well-suited for reconstructing dynamic scenes from monocular videos.
A
In this section, we propose a counterfactual contrastive learning method based on the counterfactual passage extraction to improve the robustness and relevance sensitivity of dense retrieval models.
Ideally, a perfect retrieval model should be able to not only estimate the relevance between documents and queries, but also capture the key passages of a document that determine its relevance to each query.
Having high relevance sensitivity means that a dense retrieval model could easily distinguish not only positive documents from negative ones, but also counterfactual documents, which modify the key passages of positive documents, from other documents.
The assumptions on the relative preferences between positive documents, negative documents, and counterfactual documents in terms of relevance are depicted in Figure 1.
Different from traditional hard negative mining techniques, our counterfactual documents focus on modifications of the positive documents, and are thus more effective in improving the relevance sensitivity of dense retrieval models rather than their overall retrieval performance.
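To make the intended preference ordering concrete, here is a hedged, illustrative contrastive objective in which the counterfactual document acts as a hard negative for the query while still being pushed above ordinary negatives; the exact loss, temperature, and margin used in the paper are not specified here, so all of those are assumptions.

```python
# Illustrative sketch of a counterfactual-aware contrastive objective (not the paper's
# exact loss): the positive document should score above its counterfactual variant,
# which in turn should score above ordinary negatives.
import torch
import torch.nn.functional as F

def counterfactual_contrastive_loss(q, d_pos, d_cf, d_negs, tau=0.05, margin=0.1):
    """q, d_pos, d_cf: [dim] embeddings; d_negs: [num_neg, dim] embeddings."""
    sims = torch.cat([
        (q @ d_pos).unsqueeze(0),    # positive document
        (q @ d_cf).unsqueeze(0),     # counterfactual document (hard negative)
        d_negs @ q,                  # ordinary negatives
    ]) / tau
    # InfoNCE-style term: the positive (index 0) should win against all others.
    nce = F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
    # Margin term: the counterfactual document should still rank above ordinary negatives.
    rank = F.relu(margin - (q @ d_cf - (d_negs @ q).max()))
    return nce + rank
```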
B
The MSRVTT-Caption [12] in the video captioning task is the same as the MSRVTT dataset in the text-video retrieval task.
The parameter $\alpha$ serves as the hyper-parameter that balances the cross-modality contrastive loss ($\mathcal{L}_{C}$) and the Banzhaf Interaction loss ($\mathcal{L}_{I}$). Meanwhile, the parameter $\beta$ acts as the hyper-parameter regulating the trade-off between the loss from deep supervision and the loss from self-distillation. $\lambda$ is the hyper-parameter used to balance the feature alignment penalty and the task-specific penalty. To determine the optimal settings for $\alpha$ and $\beta$ for the text-video retrieval task, we evaluate the scale range setting $\alpha\in[0.3,1.8]$ as shown in Fig. 8(a). Our findings indicate that increasing $\alpha$ from 0.5 to 0.8 leads to an improvement in R@1 from 48.2% to 48.4%, with saturation observed at $\alpha=1.0$ for text-to-video retrieval. Consequently, we adopt $\alpha=1.0$ to achieve optimal performance. In Fig. 8(b), we demonstrate the impact of the hyper-parameter $\beta$, evaluating the scale range setting $\beta\in[0.3,1.8]$. We find that the model achieves the best performance at $\beta=1.0$. For the video-question answering task, we assess the scale range settings $\alpha\in[0.5,3.5]$ (Fig. 8(c)) and $\beta\in[0.5,3.5]$ (Fig. 8(d)). We find that the model achieves the best performance at $\alpha=2.0$ and $\beta=1.0$. Therefore, we set $\alpha$ to 2.0 and $\beta$ to 1.0 for video-question answering. For the video captioning task, as shown in Fig. 8(e) and (f), we find that the video captioning task is robust to the hyper-parameters $\alpha$ and $\beta$. We consider that this is because we trained the video captioning task for 50 epochs, significantly more than the 5 epochs used for the retrieval and video-question answering tasks. The additional training epochs help mitigate the impact of the hyper-parameters $\alpha$ and $\beta$ on model performance. We set $\alpha$ to 1.0 and $\beta$ to 1.0 for video captioning. For the hyper-parameter $\lambda$, as shown in Fig. 8(g) and (h), we adopt $\lambda=2.5$ for video-question answering and $\lambda=3.3$ for video captioning to achieve optimal performance.
Evaluation Metrics.    We choose Recall at rank K (R@K), Median Rank (MdR), and mean rank (MnR) to evaluate the retrieval performance. We select the answer accuracy to evaluate the video-question answering performance. We apply four metrics for the video caption task, including BLEU-4 [70], ROUGE-L [71], METEOR [72], and CIDEr [73].
In text-to-video retrieval, given a text query alongside a gallery of videos, the objective is to rank all videos so that the video corresponding to the text query is ranked as high as possible. Similarly, in video-to-text retrieval, the goal is to rank all text candidates based on the video query. In our HBI V2 framework, we can directly rank candidates by leveraging the similarity scores between video and text, eliminating the necessity for an additional prediction head.
Ablation about Components.    To illustrate the importance of each part of our method including the Banzhaf Interaction, the deep supervision structure, the self-distillation, and the representation reconstruction, we conduct ablation experiments on both MSRVTT and MSRVTT-QA datasets in Table V. The Banzhaf Interaction boosts the baseline with an improvement up to 0.8% at R@1 and 1.3% at answer accuracy. Furthermore, deep supervision and self-distillation significantly improve the generalization ability. Additionally, the representation reconstruction further improves the performance of the model. Our full model attains superior performance, surpassing the baseline by 2.8% at R@1 for text-to-video retrieval and 1.9% at answer accuracy for video-question answering. This shows that the four parts are beneficial for aligning videos and texts.
B
Input: Snapshots $\mathcal{S}=[\bm{u}^{1},\ldots,\bm{u}^{J}]$ with $\bm{u}^{i}\approx\bm{u}(t_{i})$.
$\Phi^{\dagger}$ denotes the Moore-Penrose pseudoinverse of the DMD modes $\Phi^{\text{DMD}}$.
Output: DMD modes $\Phi^{\text{DMD}}$.
Output: DMD modes $\Phi^{\text{DMD}}$.
6: Obtain $\Phi^{\text{DMD}}=\mathcal{S}^{1}\bm{\Sigma}^{-1}\bm{V}W$.
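For reference, a minimal sketch of the standard exact-DMD computation consistent with the listing above (SVD of the first snapshot matrix, reduced operator, eigendecomposition, and modes as in line 6); this is the textbook algorithm, not necessarily the authors' implementation.

```python
# Hedged sketch of exact DMD. S is the snapshot matrix [u^1, ..., u^J] with one
# snapshot per column; r is an optional truncation rank.
import numpy as np

def dmd_modes(S, r=None):
    S0, S1 = S[:, :-1], S[:, 1:]                 # shifted snapshot matrices
    U, sig, Vh = np.linalg.svd(S0, full_matrices=False)
    if r is not None:                            # optional rank truncation
        U, sig, Vh = U[:, :r], sig[:r], Vh[:r, :]
    Sig_inv = np.diag(1.0 / sig)
    A_tilde = U.conj().T @ S1 @ Vh.conj().T @ Sig_inv   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = S1 @ Vh.conj().T @ Sig_inv @ W         # DMD modes (cf. line 6 of the listing)
    return Phi, eigvals
```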
B
2 if $h\leq m^{1/8}$ then
Let $G$ be an $m\times m$ grid digraph and $H$ be an induced subgraph of $\textsf{Aux}_{\alpha}(G)$ with $h$ vertices. For every $\beta>0$, AuxReach runs in $\widetilde{O}(h^{1/2+\beta/2})$ space and polynomial time.
Input: An induced subgraph $H$ of $\textsf{Aux}_{\alpha}(G)$ and two vertices $x$ and $y$ in $H$ (let $G$ be an $m\times m$ grid digraph and $h=|V(H)|$)
3  /* $m$ is a global variable where $G$ is an $m\times m$ grid digraph */
the points of an $m\times m$ grid. The edges can only occur between a vertex and its immediate vertical
C
In fact, without this nonlocality, any CA-based discussion would likely have been rendered meaningless.
In the absence of quantum effects, a CA encoding the information of the stretched horizon would be mapped identically onto the conformal boundary.
In this scheme, the evolution law $Z$, which transfers information from the CA at the stretched horizon to the boundary, encodes the resulting displacement as a permutation of states.
Let us return to the issue of information on the stretched horizon being transmitted to the conformal boundary.
In the context of the black hole paradox, the piece of information inscribed on the horizon can be considered to be the encrypted information of the conformal boundary[7].
A
Recall that $\xi^{\widehat{\eta}}(x)$ and $\xi^{\widehat{\eta}_{\tau}}(x)$ denote the projections of the gradients of $\widehat{\eta}(x)$ and $\widehat{\eta}_{\tau}(x)$, respectively, onto the subspace spanned by the trailing $(d-k)$ eigenvectors of $\nabla^{2}\widehat{\eta}$ and $\nabla^{2}\widehat{\eta}_{\tau}$, respectively. Similarly, we define the corresponding population quantities $\xi^{f}(x)$ and $\xi^{\eta}(x)$ (see Section 2.5). With slight abuse of notation, we use $\text{Ridge}(g)$ to denote the ridge of any twice differentiable function $g$ restricted to $[0,1]^{d}$.
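As an illustration of this projected-gradient quantity, the following sketch computes, for a generic twice-differentiable function g, the projection of its gradient onto the span of the trailing (d-k) Hessian eigenvectors via finite differences; the step size, differencing scheme, and ordering convention are illustrative assumptions.

```python
# Hedged sketch of the projected gradient xi(x): the gradient of g projected onto the
# span of the trailing (d-k) eigenvectors of the Hessian of g (eigenvectors ordered
# by decreasing eigenvalue). Derivatives use simple central finite differences.
import numpy as np

def projected_gradient(g, x, k, eps=1e-5):
    d = x.size
    grad = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
                     for e in np.eye(d)])
    hess = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            hess[i, j] = (g(x + ei + ej) - g(x + ei - ej)
                          - g(x - ei + ej) + g(x - ei - ej)) / (4 * eps ** 2)
    vals, vecs = np.linalg.eigh(hess)           # ascending eigenvalues
    order = np.argsort(vals)[::-1]              # reorder to descending
    V_trailing = vecs[:, order[k:]]             # trailing (d-k) eigenvectors
    return V_trailing @ (V_trailing.T @ grad)   # projection of the gradient
```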
In the formulations of our theoretical results, we will use the following assumptions. Let $m$ be a positive integer (where our results will require $m\geq 4$).
As can be seen below, our proposed algorithms target the ridge of the ridgeness function, and we will see (Lemma 5) that the ridge of the ridgeness function essentially equals the original ridge of $f$.
This important section can be interpreted as providing population level versions of our main convergence results for the proposed algorithms presented above. Indeed, the algorithms can be interpreted as ‘perturbed versions’ of corresponding population level versions. We will discuss the precise meaning of this in what follows, and we also indicate how this correspondence is used to prove the convergence results for the algorithms. This section will also provide additional insights into why the algorithms proposed in this work do not suffer from the theoretical gaps of the SCMS algorithm; cf. Section 4.2.1.
The remaining part of the paper is organized as follows. In Section 2 we introduce the formal definition of ridges. This is followed by our extraction algorithms, whose performance is illustrated using some numerical studies in $\mathbb{R}^{2}$. The main theoretical results are given in Section 3, where we give the convergence results of our algorithms. The mathematical framework for the theoretical analyses is provided in Section 4. In Appendix A we give an example for which the SCMS algorithm fails to detect a part of the ridge while our algorithms do not miss it. All the proofs are provided in Appendix B.
A
$\varphi(\bm{\theta}\mid\bm{\theta}^{\prime})=\mathcal{N}(\bm{\theta}\mid\bm{\theta}^{\prime},3^{2}\bm{I}_{2})$
The surrogate is built with k-nearest neighbor (kNN) regression using $K\in\{1,10,100\}$ neighbors.
We compare two MH-S algorithms and one DA-PM-MH algorithm using again a nearest neighbor surrogate, with $K=100$. The budget is $E=10^{5}$ evaluations.
with $B=4$, $\eta_{0}=4$ and $\eta_{i}=3.5$ for $i=1,\ldots,2$, where $\Theta=[-10,10]\times[-10,10]$, i.e., a bounded domain. The goal is to compare the performance of the different algorithms against a vanilla PM-MH algorithm for two different noises. Specifically, we compare
Figures (a)-(b)-(c) show noisy realizations of the ABC likelihood with bandwidth $\epsilon=0.1$ for $M\in\{1,10,100\}$, respectively. We also plot the 0.1 and 0.9 quantiles of the noisy realizations. Figure (d) shows the true posterior distribution employing the Gaussian likelihood along with three ABC posteriors with bandwidths $\epsilon\in\{0.1,10,100\}$. The true value used to generate the observed data is also depicted.
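For intuition, here is a hedged sketch of how such noisy ABC likelihood realizations can be produced: M simulator draws per evaluation and a Gaussian kernel of bandwidth epsilon on the distance between simulated and observed summaries. The `simulator` and `summary` callables are hypothetical stand-ins, and the kernel choice is an assumption.

```python
# Hedged sketch of a noisy ABC likelihood estimate: M simulator draws per evaluation,
# Gaussian kernel with bandwidth eps on the summary-statistic distance. The variance
# of the estimate shrinks as M grows (cf. the panels for M in {1, 10, 100}).
import numpy as np

def abc_likelihood(theta, y_obs, simulator, summary, eps=0.1, M=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    s_obs = summary(y_obs)
    dists = np.array([np.linalg.norm(summary(simulator(theta, rng)) - s_obs)
                      for _ in range(M)])
    return np.mean(np.exp(-0.5 * (dists / eps) ** 2))   # averaged kernel weights
```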
A
$\Omega(f(n))$, $\Theta(f(n))$. Furthermore, for a constant $c>0$, we write $\mathcal{O}_{c}(f(n))$, $\Omega_{c}(f(n))$, and $\Theta_{c}(f(n))$ if the hidden constant in the notation depends on $c$.
To keep track of the progress of the dynamics towards consensus, we describe the dynamics via the bias at time $t$, denoted by $s_{t}$, which represents the difference between the sizes of the majority and minority opinion communities at time $t$. Note that the protocol, in the case of a complete graph of fixed size $n$ with binary opinions, is completely described by $\{s_{t}\}_{t}$.
We first compute the expectation of the bias at time $t$, conditional on its value at time $t-1$.
The transition probabilities are characterized iteratively by the majority update rule as follows: given any time $t\geq 0$, let $M_{t}\in\Sigma^{n}$ be the state of the process at time $t$. Then, at time $t+1$, each node $u\in V$ samples three agents independently and uniformly at random (with replacement) and updates its opinion to the majority one among the sampled neighbor opinions. For the sake of clarity, we remark that when $u$ samples a neighbor node twice, the corresponding opinion counts twice.
An opinion dynamics is a synchronous distributed algorithm characterized by a very simple structure. In this structure, the state of a node at round $t$ depends only on its own state and a symmetric function of the multiset of states of its neighbors at round $t-1$.
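A small simulation sketch of the 3-majority update described above, on a complete graph with binary opinions, tracking the bias $s_t$; for simplicity each node samples among all $n$ nodes (including itself), which is an illustrative convention rather than the exact neighborhood rule of the text.

```python
# Hedged sketch of synchronous 3-majority rounds on a complete graph with binary
# opinions, tracking the bias (majority minus minority community size).
import numpy as np

def three_majority_round(opinions, rng):
    n = opinions.size
    # Each node samples three agents u.a.r. with replacement and adopts the majority.
    samples = opinions[rng.integers(0, n, size=(n, 3))]
    return (samples.sum(axis=1) >= 2).astype(int)

def bias(opinions):
    ones = int(opinions.sum())
    return abs(ones - (opinions.size - ones))

rng = np.random.default_rng(0)
op = (np.arange(1000) < 520).astype(int)    # initial bias s_0 = 40
for t in range(20):
    op = three_majority_round(op, rng)
print("final bias:", bias(op))
```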
B
By embedding the distribution of the short term experimental data using kernels, we derive interpretable weights for extrapolating long term effects from short term effects.
Our research question is how to extrapolate long term effects of continuous actions, allowing nonlinearity and heterogeneity in the link between the short term and the long term.
The final estimator has a simple closed form solution, while preserving nonlinearity and heterogeneity in the link between the short term and long term.
The long term regression $\gamma_{0}^{\textsc{obs}}(s,x)$ allows for nonlinearity and heterogeneity in the link
The short term kernel mean embedding $\mu^{\textsc{exp}}_{s}(d,x)$ allows for nonlinearity and heterogeneity in the counterfactual distribution of short term rewards.
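A heavily simplified sketch of this plug-in idea: fit the long term regression by kernel ridge on the observational data, form kernel weights over the experimental short term outcomes localized at the queried action and covariate, and average. Scalar variables, RBF kernels, and this specific weighting are illustrative assumptions, not the paper's estimator.

```python
# Hedged, simplified sketch: gamma_hat from observational (S, X) -> Y by kernel ridge,
# then an average of gamma_hat over experimental short-term rewards with interpretable
# kernel weights at the queried (d, x). All modeling choices here are assumptions.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def long_term_effect(d, x, S_exp, D_exp, X_exp, S_obs, X_obs, Y_obs, lam=1e-3):
    # Step 1: gamma_hat(s, x) via kernel ridge regression on observational data.
    K_obs = rbf(S_obs, S_obs) * rbf(X_obs, X_obs)
    alpha = np.linalg.solve(K_obs + lam * np.eye(len(Y_obs)), Y_obs)
    # Step 2: kernel weights over experimental units, localized at (d, x).
    w = rbf(D_exp, np.array([d])).ravel() * rbf(X_exp, np.array([x])).ravel()
    w = w / w.sum()
    # Step 3: evaluate gamma_hat at the experimental short-term outcomes and average.
    gamma_at = (rbf(S_exp, S_obs) * rbf(np.full_like(S_exp, x), X_obs)) @ alpha
    return float(w @ gamma_at)
```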
B
$\mathbf{C}^{\text{Uncoded}}=[\mathbf{I}_{M}\,|\,\mathbf{0}]^{T}$, where $\mathbf{I}_{M}\in\mathbb{R}^{M\times M}$ is an identity matrix.
where the first term on the right-hand side calculates the average number of learners used for training each agent, and $o_{c}\geq 0$. Using the above metric, the computation overhead of each assignment scheme can be derived as follows:
This paper introduced DARL1N, a scalable MARL algorithm that can be trained over a distributed computing architecture. DARL1N reduces the representation complexity of the value and policy functions of each agent in a MARL problem by disregarding the influence of other agents that are not within one hop of a proximity graph. This model enables highly efficient distributed training, in which a compute node only needs data from an agent it is training and its potential one-hop neighbors. We conducted comprehensive experiments using five MARL environments and compared DARL1N with four state-of-the-art MARL algorithms. DARL1N generates equally good or even better policies in almost all scenarios with significantly higher training efficiency than benchmark methods, especially in large-scale problem settings. To improve the resilience of DARL1N to stragglers common in distributed computing systems, we developed coding schemes that assign each agent to multiple learners. We evaluated the properties of MDS, Random Sparse, Repetition, and LDGM codes and provided guidelines on selecting suitable assignment schemes under different situations.
The coded schemes mitigate the impact of stragglers by assigning each agent to multiple learners. The training performed by the extra learners is redundant. To quantify the computation overhead introduced by this redundancy, we use the following metric:
Coded distributed training assigns each agent to multiple learners. Here, we investigate five codes, where the encoding matrices can be directly utilized as the assignment matrix.
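A hedged sketch of the overhead metric described above, under the assumption that it equals the average number of learners assigned per agent minus one, so that the uncoded assignment $\mathbf{C}^{\text{Uncoded}}=[\mathbf{I}_{M}\,|\,\mathbf{0}]^{T}$ yields zero overhead; the exact expression in the paper may differ.

```python
# Hedged sketch of the computation-overhead metric (assumed form, not the paper's
# exact formula): average number of learners per agent minus one.
import numpy as np

def computation_overhead(C):
    """C: binary assignment matrix, one row per learner and one column per agent;
    C[l, m] = 1 means learner l trains agent m."""
    avg_learners_per_agent = C.sum() / C.shape[1]
    return avg_learners_per_agent - 1.0     # 0 for the uncoded scheme

# Uncoded scheme from the text: C = [I_M | 0]^T with M agents and N >= M learners.
M, N = 4, 6
C_uncoded = np.vstack([np.eye(M), np.zeros((N - M, M))])
print(computation_overhead(C_uncoded))      # -> 0.0
```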
D
Quite importantly, their result is only applicable to SAA while our reduction applies to any policy in the wide range of sample-size-agnostic policies. In particular, this will be critical to derive policies which achieve minimax optimal asymptotic regret rates when SAA fails and is not (rate) optimal (see Section 5).
The examples of pricing and ski-rental illustrate how critical Theorem 1 is to derive guarantees for a wide range of data-driven policies but leave open the choice of the policy that should be analyzed. In Section 5.2 we take initial steps for the design of general policies with strong asymptotic worst-case regret guarantees. We first introduce a policy naturally suggested by Theorem 1, which minimizes the DRO regret objective, and show that this policy is essentially minimax optimal. We also derive a general result regarding the performance of the classical DRO policy in Rahimian and Mehrotra (2019) (formally, a variant which optimizes the worst-case objective over randomized actions) and show, in the proposition relating the randomized DRO policy to the DRO policy, that when the optimal value of the objective is not too sensitive to the structure of the heterogeneity, the DRO policy achieves a rate-optimal regret guarantee. We further show that this condition holds for the pricing problem under Wasserstein heterogeneity.
We next present an alternative sample-size-agnostic policy for which the asymptotic worst-case regret vanishes as $\epsilon$ goes to $0$. Furthermore, we characterize the worst-case performance of that policy, showing that it has the best possible dependence with respect to $\epsilon$.
We prove Theorem 1 through a sample path analysis. We show that almost surely (over all possible historical samples observed), the asymptotic worst-case regret of a sample-size-agnostic policy can be bounded by the right-hand side of (6). In particular, we use the ETC property of the distance to show that asymptotically, the empirical cdf observed by the policy belongs to the heterogeneity ball of radius $\epsilon$. The proof also relies on the formalism we develop to define data-driven policies, as discussed in Section A.1.
We show in Section A.2.1 that, in general, by leveraging relations between different distances one may relate the worst-case regret of data-driven decision-making instances in heterogeneous environments which have the same $\epsilon$ but differ in the type of distance used. In particular, when $\Xi$ is a bounded subset of $\mathbb{R}$, we obtain by specializing our result to the Kolmogorov and the Wasserstein distances that for any data-driven policy $\pi\in\Pi$, any sample size $n$, and any radius of heterogeneity $\epsilon$,
C
On the six natural RNA targets and among the subset of all CASP15 participants ranked on these specific targets, RhoFold (AIchemy_RNA) was fourth, while RhoFold+’s performance was on par with AIchemy_RNA2’s (with a difference of 0.4 in the Z-score) and surpassed that of other methods. In a detailed analysis of performance on specific targets, we found that, for target R1108, RhoFold+ achieved the best Z-score and RMSD.
In order to test RhoFold+’s ability to generalize for structure- (in addition to mainly sequence-) dissimilar targets, we sought to determine whether RhoFold+’s predictions could surpass the best single template (the most structurally similar model) in the training set for a given query. To investigate this, we compared the TM-scores between our predictions and experimentally determined structures against the TM-scores between the best single templates and experimentally determined structures across all RNA-Puzzles. For the majority of puzzles, RhoFold+ produced predictions with a higher global similarity and an average TM-score of 0.574, surpassing the best single template by 0.05 (Fig.2e, Supplementary Table 13). It is important to highlight that for proteins, surpassing the best single template required substantial progress. Indeed, it was only during CASP14 that computational methods outperformed the best single template. Although RhoFold+ generated considerably more accurate predictions than other methods under the conventional sequence similarity data splitting paradigm, we further tested RhoFold+’s adaptability by eliminating 3D structures from the training set whose TM-score, with respect to any target, surpassed a specified threshold (Supplementary Figure 6, Supplementary Table 6, 10). Even under this more demanding condition, RhoFold+ continued to exhibit promising performance (Supplementary Table 10).
Importantly, here RhoFold+ was trained using non-overlapping training data with respect to the RNA-Puzzles targets tested (see Methods). We conducted preprocessing to obtain 24 single-chain RNA targets and excluded RNA complexes. This set of RNA targets contained two puzzles (PZ), PZ34 and PZ38, that were introduced after our development of RhoFold+ (Fig. 2a, Supplementary Figure 3) and thus served as a blind test. After collecting the predictions of other methods from the official server (http://www.rnapuzzles.org/), we found that the performance of RhoFold+ surpassed that of all other methods, including FARFAR2/ARES, on nearly all targets, except for PZ24. Notably, RhoFold+ outperformed the second-best method on more than half of the targets by ∼4 Å RMSD. On 17 targets, RhoFold+ achieved RMSD values <5 Å, and only one target exhibited an RMSD >10 Å (Fig. 2a, Supplementary Table 5). As a whole, RhoFold+ produced an average RMSD of 4.02 Å, 2.30 Å better than that of the second-best model (FARFAR2: top 1%, 6.32 Å). Assessed using the template modeling (TM)-score [30], RhoFold+ achieved an average of 0.57 (Supplementary Table 5), higher than the scores of other top performers (0.41 and 0.44).
Interestingly, RhoFold+ also attained the best Z-score for R1116, although its RMSD was ∼1 Å higher than that of UltraFold (other methods produced predictions with significantly lower accuracy, with RMSDs >10 Å). Upon further investigation, we found that, while UltraFold outperformed RhoFold+ on this metric by producing accurate local predictions, the predicted global structure was less accurate, as evidenced by a TM-score of 0.497 and a GDT-TS score <0.4. In contrast, RhoFold+ inaccurately predicted a helix angle, resulting in an RMSD of 8.92 Å, but its correctly predicted topology resulted in a higher TM-score of >0.55. For this target, AIchemy_RNA2 incorrectly predicted the stem stackings and RNA topology, resulting in a high RMSD of 17.26 Å and a TM-score of ∼0.49. Notably, RhoFold+'s prediction for R1116 did not arise from overfitting, as indicated by the low maximum structural similarity (TM-score) and maximum sequence similarity (seq-sim) of R1116 with respect to the training set (Fig. 2k, Supplementary Table 6).
k. Comparison of RhoFold+’s predictions against AIchemy_RNA2 and UltraFold on the R1116 target from CASP15.
C
In addition, we compute the wall-clock time for different processes of MAZE on the AA layout, specifically collecting trajectories, updating, and pairing, resulting in 150 seconds, 82 seconds, and approximately 0 seconds, respectively. The primary factors contributing to the overall time overhead are collecting trajectories and updating, which are processes shared among the ZSC methods. Consequently, the time overhead associated with MAZE is comparable to that of other ZSC methods.
We mainly evaluate the performance of MAZE in the popular Overcooked [4] environment, a two-player common-payoff collaborative cooking environment. Furthermore, we design a grid-world FillInTheGrid to verify the versatility of MAZE. We conduct experiments on different layouts in these environments, where the agent and partner have different degrees of heterogeneity. We investigate four research questions (RQs) in our experiments. In RQ1, we show that even a simplified variant of MAZE, which just uses two populations of agents and partners, can be significantly better than self-play [45, 50] and population play [18], disclosing the necessity of considering the heterogeneity of agent and partner. The results in RQ2 show that MAZE achieves better performance compared with recently proposed methods [31, 48, 62]. In RQ3, we examine the influence of different components of MAZE, demonstrating the effectiveness of each component of the proposed framework. In RQ4, we investigate the coordination ability with real human participants. To the best of our knowledge, we are the first to point out the importance of heterogeneity in ZSC and propose an efficient method MAZE to solve it. Experiments on different types of heterogeneous environments show the necessity of considering the heterogeneity and the effectiveness of MAZE.
The training curves are shown in Figure 6, reflecting the change in the average reward of the agents during the training phase. In all six heterogeneous environments, V-MAZE clearly achieves better performance, showing the necessity of considering the heterogeneity and distinguishing the two players explicitly. Besides, we use a larger-scale network to train SP and PP agents on the heterogeneous layout AA in Section V-G. With more generations (i.e., almost three times as many as V-MAZE), SP and PP still perform worse than V-MAZE. This indicates that the heterogeneous skills of both players cannot be mastered well at the same time even with enhanced representation ability and more data, further strengthening our conclusions. The test performance with unknown partners (defined in Section V-D) in Table II aligns with the training curves, i.e., V-MAZE > PP > SP.
Table III shows the detailed results, i.e., the mean and standard deviation of the reward achieved by each algorithm under each combination of layout and partner. We compute the rank of each algorithm under each setting as in [10]; these ranks are averaged in the last row of Table III. Besides, we apply the Wilcoxon rank-sum test with significance level 0.05 to compare MAZE with other methods. We can observe the order of rank “MAZE < MEP < TrajeDi < FCP < PP < SP”, which is consistent with previous observations, e.g., “FCP < PP < SP” in Strouse et al. [48] and “MEP < TrajeDi < FCP < SP” in Zhao et al. [62]. As expected, PP using a population is better than SP using a single individual. The superiority of TrajeDi, FCP, and MEP over PP discloses the advantage of exposing the agent to diverse partners during the training process. The proposed method MAZE performs the best overall. Furthermore, its superiority on the four heterogeneous layouts (i.e., H-CR, AA, AA-2 and FC) suggests that MAZE is suitable for heterogeneous ZSC. The generally good performance of MAZE with different partners also shows that MAZE can coordinate with partners with different skill levels. We can also observe that for each algorithm on each layout, the partner trained by MAZE is almost always the best. Besides, the MAZE agent paired with the MAZE partner achieves the highest performance on the four heterogeneous layouts. These observations also show the advantage of considering the heterogeneity by MAZE. When paired with Human Proxy partners, the performance of each algorithm is relatively poor, which may be because the human proxy is trained ignoring the heterogeneity and is thus hard to coordinate with.
Finally, we compare the performance of different methods on a homogeneous environment CR and a heterogeneous environment H-CR. The layouts of these environments are the same; however, the skills of the different players are the same in CR and different in H-CR. Similar to Section V-C, we first show the training curves of V-MAZE, SP, and PP in Figure 10. As expected, on the homogeneous layout CR, V-MAZE achieves performance similar to SP and PP, while on the heterogeneous layout H-CR, V-MAZE clearly achieves better performance, demonstrating its superiority in the heterogeneous environment.
D
(e) Precipitation rates averaged over time and longitudes and relative frequency histograms (f) are shown for ERA5 data (black), CM2Mc-LPJmL (red), GFDL-ESM4 (blue), quantile mapping (magenta) and the GAN (cyan). The GAN applied to the CM2Mc-LPJmL output corrects the double-peaked ITCZ as well as the histogram over the entire range of precipitation rates.
In both tropical and temperate zones, the constrained GAN corrects the precipitation towards the more complex and higher-resolution GFDL-ESM4, while following the trend of the CM2Mc-LPJmL model. Again, the unconstrained model remains relatively constant in both cases, with a small decrease over time in the temperate zone. Note that the GFDL-ESM4 does not represent a ground truth, but only one realisation of a possible Earth system trajectory, for comparison. This can be seen by the differing trends of two other CMIP6 models in Fig. S13. It should, however, be expected that the precipitation output from the CMIP6 models is much more realistic than the raw precipitation from the comparably low-resolution CM2Mc-LPJmL model.
The averaged absolute value of the grid-cell-wise mean error (ME) for the raw CM2Mc-LPJmL and GFDL-ESM4 models, as well as for the QM- and GAN-based post-processing, using the CM2Mc-LPJmL output as input. The bias reduction relative to the raw CM2Mc-LPJmL model is given as a percentage. Note that the GAN shows the largest reduction of the absolute ME in all cases, with more than 75% improvement relative to the raw CM2Mc-LPJmL for the annual fields.
of the mean error (ME) shown in the spatial plots (see the ME table). Here, the GAN shows the strongest error reduction compared to QM and GFDL-ESM4, reducing the error of CM2Mc-LPJmL by 75% for annual and between 72% and 64% for seasonal time series. We include the results of two additional ESMs from CMIP6, the MPI-ESM1-2-HR and the CESM2 model, for comparison with GFDL-ESM4 in the SI (Table S1). The ME of the MPI-ESM1-2-HR model is higher than for GFDL-ESM4, while CESM2 shows a lower bias. The average ME of CESM2, however, remains higher than for our GAN-based post-processed CM2Mc-LPJmL model.
Mean errors of (a) CM2Mc-LPJmL, (b) GFDL-ESM4, (c) QM-based and (d) GAN-based post-processing methods applied to the CM2Mc-LPJmL output.
B
To the best of our knowledge, this is the first study of the TOP, as the existing literature assumes that components are directly plugged into slots on CAP machines (Castellani et al. (2019); Gao et al. (2021)). In practice, PCB manufacturers use trolleys to load components, which otherwise could be difficult to manage and switch while building a variety of PCBs on an assembly line. If trolleys are prepared from a direct assignment of components to slots of CAP machines, the trolley loading will not be efficient, as multiple trolleys may need to be switched between different PCBs. The TOP is an important problem, especially for low-volume, high-mix production, due to the frequent need to switch PCBs.
The problem structure is exploited to decompose the TOP into two smaller, identical and independent problems, i.e., assignment of trolleys and assignment of stackers, by pre-computing the dependency between them. So, a single and smaller MILP model is sufficient to solve both the problems and, hence, to solve the TOP (for details refer to Subsection 3.1 and 3.3).
A novel extension of the BPP is derived to formulate the TOP by introducing additional constraints to ensure that the number of trolleys required to build each PCB is less than or equal to the capacity of the assembly line used for building the PCB. An MILP model is developed to solve the TOP which is solved using exact optimisation methods.
To formulate the TOP, we extend the bin packing problem (BPP), which finds a minimum number of bins of common capacity to pack a given set of items of different weights (Wäscher et al. (2007)). The TOP shares constraints similar to the BPP, with additional constraints (for details refer to Subsection 3.2) to ensure that the number of trolleys needed to build each PCB does not exceed the capacity of the assembly line; otherwise the problem is either infeasible or trolleys must be changed while building a PCB. We also exploit the problem structure to decompose the TOP into two smaller, identical and independent problems, i.e., assignment of components to trolleys and assignment of components to stackers, by pre-computing the dependency between both problems. Further, we develop a single, smaller mixed integer linear programming (MILP) model to solve both problems. Exact optimisation-based methods are used to successfully solve the resulting MILP model.
We present a novel extension of the BPP to formulate the TOP. Similar to bin packing, the TOP finds a minimum number of trolleys/stackers (equivalent to bins) of common capacity to load/pack a given set of components (equivalent to items) of different sizes/weights to build a set of PCBs on an assembly line. The TOP shares the objective function and constraints of the BPP but adds additional constraints to ensure that each PCB is feasible on the assembly line, i.e., the problem limits the maximum number of trolleys that can be used to load the components of a PCB; otherwise the PCB either could not be built on the assembly line or trolleys would need to be replaced during the building process. Additionally, we exploit the problem structure to decompose the TOP into two smaller, identical and independent problems by pre-computing the dependency between them. So, a single, smaller MILP model is developed for the TOP, which is solved using exact optimisation methods. We also prove that the TOP is an NP-complete problem.
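To make the packing core concrete, here is a minimal MILP sketch of the underlying bin-packing model (components as items, trolleys as bins) using PuLP; the TOP-specific assembly-line constraints are only indicated in a comment, and the model is illustrative rather than the authors' full formulation.

```python
# Hedged sketch of the bin-packing core that the TOP extends: minimize the number of
# trolleys (bins) of common capacity needed to load components (items) of given sizes.
import pulp

def pack_components(sizes, capacity):
    n = len(sizes)
    bins = range(n)                                   # at most n trolleys are needed
    prob = pulp.LpProblem("trolley_packing", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(n), bins), cat="Binary")   # item i -> bin b
    y = pulp.LpVariable.dicts("y", bins, cat="Binary")               # bin b used
    prob += pulp.lpSum(y[b] for b in bins)            # minimize number of trolleys
    for i in range(n):
        prob += pulp.lpSum(x[i][b] for b in bins) == 1        # each component loaded once
    for b in bins:
        prob += pulp.lpSum(sizes[i] * x[i][b] for i in range(n)) <= capacity * y[b]
    # TOP extension (not modeled here): for each PCB, bound the number of used trolleys
    # carrying its components by the assembly-line capacity.
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return int(sum(int(y[b].value()) for b in bins))

print(pack_components([4, 3, 3, 2, 2], capacity=6))   # -> 3 trolleys
```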
C
Unlike other deep learning methods that may suffer from limited interpretability, Hgarn can be used to reveal the dependencies between activities through the learned Hierarchical Graph. We visualize one attention head of $\textsc{Gat}_{C}$'s sliced attention matrix to analyze the learned activity-activity dependencies. In Figure 7, we select four activity pairs to show the related activities and their corresponding attention scores. These activity pairs are consistent with common sense, such as the high dependencies between Gyms and Stadiums, or Bus Stops and Travel Lounges. These results have important implications for understanding human activity patterns and predicting future mobility behavior.
However, most existing studies focus on predicting human mobility based on individual location sequences, overlooking the integral interplay between activity participation and location visitation behaviors. Classic travel behavior theories suggest that an individual's travel decisions are determined by the need to participate in activities taking place at different locations and scheduled at different times of day [19]. Given that human activity data is becoming increasingly accessible and most location visits can be characterized by only a small number of activity categories, incorporating these activity dynamics into human mobility modeling offers a behaviorally insightful and computationally efficient approach.
In this study, we propose Hierarchical Graph Attention Recurrent Network (Hgarn) for next location prediction. Specifically, we construct a hierarchical graph based on past mobility records and employ a Hierarchical Graph Attention Module to capture complex time-activity-location dependencies. This way, Hgarn can learn representations with rich human travel semantics to model user preferences at the global level. We also propose a model-agnostic history-enhanced confidence (MaHec) label to incorporate each user’s individual-level preferences. Finally, we introduce a Temporal Module, which employs recurrent structures to jointly predict users’ next activities and their associated locations, with the former used as an auxiliary task to enhance the latter prediction.
We design an activity-aware Hierarchical Graph Attention Recurrent Network (Hgarn), which contains a hierarchical graph attention module to model dependencies between time, activities, and locations, and a temporal module to incorporate the hierarchical graph representations into sequence modeling, leveraging next activity prediction to boost next location prediction.
Both travel behavior theories and empirical evidence suggest that human mobility patterns largely depend on the need to participate in activities at different times of the day. Therefore, it is crucial to consider the latter when modeling the former. In this paper, we propose a Hierarchical Graph Attention Recurrent Network (Hgarn) for activity-aware human mobility prediction.
D
Does there exist $C>0$ such that $\log R_{2}(2,n)\leq(\log M_{2}(n))^{C}$?
Bucić, Sudakov and Tran [1] gave a doubly exponential upper bound for $d=2,3$; and
In their paper, Fishburn and Graham [10] introduced another natural generalisation for monotone sequences and the Erdős-Szekeres theorem, which they called a lex-monotone array.
The authors would like to thank Zachary Hunter and Matija Bucić for their helpful comments. We also thank the anonymous referee for their suggestions.
Using the methods from our proof of Theorem 1.1, we resolve their question, showing that a doubly exponential upper bound holds in all dimensions.
C
$G(|\psi\rangle)<G(|\tilde{\psi}\rangle)$.
The access to multipartite quantum states is an indispensable prerequisite for many applications in quantum information, turning them into a powerful resource that can potentially outperform their classical counterparts Bennett and Brassard (2014); Ekert (1991); Holland and Burnett (1993). Indeed, magic states turn out to be a resource for fault-tolerant quantum computation Zhou et al. (2000); Bravyi and Kitaev (2005) while cluster states are resourceful for measurement-based quantum computation Raussendorf and Briegel (2001); Raussendorf et al. (2003). Furthermore, the power of quantum metrology heavily relies on the ability to prepare multipartite quantum states. However, for a particular given task it is in general very challenging to identify those multipartite states which yield the largest advantage.
In this work we have presented an iterative method for the computation of maximally resourceful quantum states. We provided a convergence analysis and showed that in each step the resourcefulness of the iterates increases. We illustrated our approach for the special case of the geometric measure, allowing us to identify interesting quantum states, discover novel AME states, and characterize highly entangled subspaces which may be useful for information processing. We further demonstrated the universality of the algorithm for various other quantifiers, yielding novel forms of correlations in the triangle network.
a generic quantum state, we show that in each step of the algorithm the resourcefulness increases. We illustrate the universality of our method by applying it to various different resource quantifiers and present a detailed analysis for the geometric measure. Here we
The proof is given in Appendix A and comes with an interesting feature. It turns out that the proof does not rely on the particular product state structure of $\ket{\pi}$, so any figure of merit based on maximizing the overlap with pure states from some subset can be optimized with our method. This turns the algorithm into a powerful tool with universal applicability. Indeed, we adapt it to the dimensionality of entanglement (see Appendix E), the stabilizer rank (Appendix D), matrix product states (Appendix E) as well as to the preparability in quantum networks (Appendix F). Remarkably, here novel states are found which are more distant from any network state than those known so far.
D
In this paper, we generalize the existing results in literature on EFX allocations to the setting when the number of distinct valuations is $k$, but the number of agents can be arbitrary. We give an EFX allocation with at most $k-2$ unallocated goods such that no agent envies the bundle of unallocated goods. We also show the existence of a complete EFX allocation under MMS-feasible valuations when all but two agents have identical valuations. The limitation of the technique used to prove Theorem 1 is clear from [efx_3]. At each step, our allocation Pareto dominates the previous allocations. As shown in [efx_3], even for three agents, there could be a partial allocation that Pareto dominates all complete allocations. So one cannot hope to reach a complete allocation using this technique.
We now prove a lemma that will be used crucially to prove Theorem 1. In the lemma below, we assume that there is an existing (possibly partial) EFX allocation $X$. We show that if we improve the bundles of the leading agents such that the new bundle of each leading agent is a minimally envied subset with respect to their respective bundles in $X$, then the resulting allocation is EFX for all agents.
We thank anonymous reviewers for their helpful comments. Vishwa Prakash HV acknowledges the support of TCS Research Scholar Fellowship.
In the remainder of the proof, we consider the case that $X_n$ is the only EFX-feasible bundle for both $b_1$ and $c_1$.
That is, the overall minimum has increased. Now, we run the PR algorithm on $X''$ with the valuation $v_a$ to get a new allocation $Z$. Let agent $c_1$ pick their favorite bundle. From the property of the PR algorithm, we know that $\phi(Z)>\phi(X)$. Thus, we have a new almost EFX-feasible allocation with higher potential. This concludes the proof.
B
Road Anomaly Test Sets. We further compare SLEEG with recent advanced anomaly segmentation methods on Road Anomaly in Tab. 2. SLEEG outperforms most competitors by a large margin when no labeled anomaly data is available. Since the inherent domain shift between Road Anomaly and Cityscapes is larger than that between Fishyscapes and Cityscapes, previous methods (e.g. SML (Jung et al. 2021)) that perform well on Fishyscapes are prone to poor accuracy on Road Anomaly. However, our SLEEG yields significant improvements on both datasets, demonstrating its robustness in tackling open-world scenes with diverse styles.
Investigation of the influence of varying $\lambda$ values on AP and false positive rate on the FS Lost & Found validation set (left) and the FS Static validation set (right).
Table 6: Ablation results of comparing static/dynamic margin (Eq. (Anomaly Estimators for Likelihood Maximization)) on FS and Road Anomaly validation set.
Ablation results for different likelihood estimators on FS validation set and Road Anomaly validation set.
Comparison of visualization results with JEM, Softmax Entropy and Image Re-synthesis on FS Lost & Found validation set.
C
The interpolation results are shown in fig. 18. The physical space results are shown in the top row, and the RCDT-POD results are shown in the bottom row. The results show that the RCDT-POD ROM, despite the intrinsic error, can predict the target snapshot without introducing the additional shock within the wake that is clearly visible in the physical space interpolation. However, the RCDT-POD ROM introduces unphysical, non-causal features upstream of the airfoil. The classical POD has zero error in those regions and is physically more plausible.
For the implementation of proper orthogonal decomposition (POD) and model order reduction (MOR) we use the EZyRB package [28].
In section 4, instead, we focus on the complete MOR procedure starting with a simple moving Gaussian distribution, transformed into RCDT space and order-reduced using POD, compared alongside ’standard’ POD in physical space. We then test our workflow for a multi-phase fluid wave and the flow around an airfoil using high-resolution CFD data. Final discussions and future work directions are then reported in section 5.
This work has focused on implementing and verifying the Radon-Cumulative Distribution Transform (RCDT) for image and flow capture and assessing its applicability in model order reduction (MOR) – under proper orthogonal decomposition (POD) – of high-fidelity CFD input data. RCDT and subsequent RCDT-POD MOR workflows were tested for accuracy compared against either the original input images or standard POD in physical space.
Both the implementation of the RCDT and ROM workflows have been written in Python 3.9.7, making use of two packages, PyTransKit [27] and EZyRB [28]; implementing the discretised form of RCDT – with subsequent forward/inverse transforms – and model order reduction functionality, respectively. For the ROM side, i.e. EZyRB, model reduction is approached using proper orthogonal decomposition (POD), see [29, 30], for example, in applying EZyRB towards shape optimization problems. All the code developed for the preparation of this article is available open-source [31]. Specifically, the singular value decomposition (SVD) – discussed more in section 2.5 – is used to determine the POD modes for the reduced-order model. SVD is not the only way to compute the POD, though an alternative approach is given by the method of snapshots [6, 32]. Three distinct workflows have been implemented: the RCDT transform upon a single snapshot image/flow followed by the inverse transformation to observe the intrinsic error induced by the discretisation and implementation of the non-linear transform; the RCDT-POD reconstruction/projection error to evaluate the effect of the non-linear transformation in the POD modes; and, the complete RCDT-POD ROM workflow consisting of the RCDT-POD on a series of snapshots, and the subsequent interpolation (with respect to time or other parameters) to predict unseen scenarios.
C
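To make the POD step in the RCDT-POD workflow concrete, the following is a minimal sketch (not taken from the article, and independent of EZyRB) of extracting POD modes from a snapshot matrix with a thin SVD and measuring the rank-r projection error; the random snapshot data and the truncation rank are assumed purely for illustration.

```python
import numpy as np

# Assumed synthetic snapshot matrix: each column is a flattened field (e.g., an RCDT image).
rng = np.random.default_rng(0)
n_dof, n_snapshots = 400, 20
snapshots = rng.standard_normal((n_dof, n_snapshots))

# POD via (thin) SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

r = 5                          # illustrative truncation rank
modes = U[:, :r]               # POD basis (spatial modes)
coeffs = modes.T @ snapshots   # reduced coordinates of each snapshot

# Reconstruction error of the rank-r POD approximation.
reconstruction = modes @ coeffs
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative projection error with r={r}: {rel_err:.3e}")
```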
$(\underline{\bf X}*\underline{\bf X}^{\dagger})^{T}=\underline{\bf X}*\underline{\bf X}^{\dagger},\quad(\underline{\bf X}^{\dagger}*\underline{\bf X})^{T}=\underline{\bf X}^{\dagger}*\underline{\bf X}.$
We see that the GTSVD provides the same right tensor $\underline{\bf Z}$ in (14)-(15) and we can use it to sample lateral slices of the data tensors $\underline{\bf X}$ and $\underline{\bf Y}$ based on the TDEIM algorithm. As a result, the same indices can be used to sample the lateral slices for the data tensors $\underline{\bf X}$ and $\underline{\bf Y}$. The horizontal slices of the data tensors $\underline{\bf X}$ and $\underline{\bf Y}$ can also be sampled using the left tensor parts $\underline{\bf U}$ and $\underline{\bf V}$, although they do not necessarily provide identical horizontal slice indices. Following this idea, we can compute the GTSVD of the tensors $\underline{\bf X},\,\underline{\bf Y}$ and, by applying the TDEIM to the shared tensor factors $\underline{\bf U},\,\underline{\bf V},\,\underline{\bf Z}$, we can select indices of horizontal and lateral slices of the given data tensors. This approach is summarized in Algorithm 6. In Line 1 of Algorithm 6, we need to compute the GTSVD of two tensors and this is clearly prohibitive for large-scale tensors. However, to tackle this problem, the randomized algorithms proposed in [38] can be used. Note that Lines 10-11 in Algorithm 6 can be efficiently computed, and this is outlined in Algorithm 7.
The MP pseudoinverse of a tensor can also be computed in the Fourier domain and this is shown in Algorithm 2.
The procedure of the computation of the GTCUR for tensor triples is summarized in Algorithm 8. Lines 6-8 can be efficiently computed in the Fourier domain and similar algorithms like Algorithm 6 can be developed for this computation. The t-RSVD of the tensor triplets $(\underline{\bf X},\underline{\bf Y},\underline{\bf Z})$ provides the common tensor factors $\underline{\bf L},\,\underline{\bf W}$ and extra tensor factors $\underline{\bf U}$ and $\underline{\bf V}$. These bases can be used to sample lateral and horizontal slices of the tensor triplets $(\underline{\bf X},\underline{\bf Y},\underline{\bf Z})$. Since we have the common tensor factors $\underline{\bf L}$ and $\underline{\bf W}$, the indices which are used to sample the horizontal slices of the data tensors $\underline{\bf X}$ and $\underline{\bf Z}$ are the same, while the indices for the selection of lateral slices of the data tensors $\underline{\bf X}$ and $\underline{\bf Y}$ are identical. Nevertheless, the tensor bases $\underline{\bf U},\,\underline{\bf V}$ are used to sample lateral slice indices of the tensor $\underline{\bf Y}$ and horizontal slice indices of the tensor $\underline{\bf Z}$. The visualization of this approach is described in Figure 2. So, the idea is to compute the t-RSVD of the tensor triplets $(\underline{\bf X},\underline{\bf Y},\underline{\bf Z})$ and to find the corresponding tensor factors $\underline{\bf U},\,\underline{\bf V},\,\underline{\bf Z},\,\underline{\bf W}$ and use them to find the indices for selecting horizontal and lateral slices using the TDEIM algorithm. We should remark that the computation of the t-RSVD is demanding for the case of large-scale data tensors and the fast randomized GSVD algorithm proposed in [40] can be used for the computation of the double GSVD, which are required to compute the t-RSVD of the tensor triplets. The connection between the GTCUR for tensor triplets with the GTCUR for tensor pairs and the TCUR is described in the next theorem.
The basis tensors $\underline{\bf U}$ and $\underline{\bf V}$ required in Algorithm 4 can be computed very fast through the randomized truncated t-SVD [31, 32, 33]. This version can be regarded as a randomized version of the TDEIM algorithm.
B
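As an illustration of the Fourier-domain computations mentioned above (in the spirit of the Algorithm 2 reference), here is a minimal sketch of the Moore-Penrose pseudoinverse under the t-product: transform along the third mode, invert each frontal slice, and transform back. The helper names t_pinv and t_prod, and the random test tensor, are assumptions for the example, not the paper's code.

```python
import numpy as np

def t_pinv(X):
    """Sketch of the tensor MP pseudoinverse under the t-product:
    FFT along the third mode, frontal-slice-wise pinv, inverse FFT."""
    Xf = np.fft.fft(X, axis=2)                 # move to the Fourier domain
    Pf = np.empty((X.shape[1], X.shape[0], X.shape[2]), dtype=complex)
    for k in range(X.shape[2]):                # pseudoinverse of each frontal slice
        Pf[:, :, k] = np.linalg.pinv(Xf[:, :, k])
    return np.real(np.fft.ifft(Pf, axis=2))    # back to the original domain

def t_prod(A, B):
    """t-product of two third-order tensors via frontal-slice products in the Fourier domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4, 3))
P = t_pinv(X)
# Check one Penrose condition: X * P * X should recover X.
print(np.allclose(t_prod(t_prod(X, P), X), X, atol=1e-8))
```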
Noise Generation. We assess the realism of noise distributions synthesized by different methods using the public evaluation metrics AKLD and PGap on the SIDD validation set. We compare RNSD with baseline techniques including GRDN (Kim, Chung, and Jung 2019), C2N (Jang et al. 2021), sRGB2Flow (Kousha et al. 2022), DANet (Yue et al. 2020), PNGAN (Cai et al. 2021) and NeCA (Fu, Guo, and Wen 2023). As shown in Table 1, our method outperforms the state-of-the-art (SOTA) with a PGap reduced by 0.30 and an AKLD improved by 0.027, indicating more realistic and stable noise synthesis.
Additionally, we evaluate our method using another publicly available metric (Jang et al. 2021) by training the DnCNN network (Zhang et al. 2017) from scratch with synthetic noise generated by RNSD. We compare its performance with C2N (Jang et al. 2021), NoiseFlow (Abdelhamed, Brubaker, and Brown 2019), sRGB2Flow (Kousha et al. 2022), GMDCN (Song et al. 2023), and NeCA (Fu, Guo, and Wen 2023). As shown in Table 2, our synthetic noise improves DnCNN’s denoising PSNR by 0.75 dB compared to the SOTA, approaching the performance of real-data training (38.11 dB vs. 38.40 dB).
Noise Generation. We assess the realism of noise distributions synthesized by different methods using the public evaluation metrics AKLD and PGap on the SIDD validation set. We compare RNSD with baseline techniques including GRDN (Kim, Chung, and Jung 2019), C2N (Jang et al. 2021), sRGB2Flow (Kousha et al. 2022), DANet (Yue et al. 2020), PNGAN (Cai et al. 2021) and NeCA (Fu, Guo, and Wen 2023). As shown in Table 1, our method outperforms the state-of-the-art (SOTA) with a PGap reduced by 0.30 and an AKLD improved by 0.027, indicating more realistic and stable noise synthesis.
Visual Analysis of Noisy Images. We compare RNSD with baselines such as C2N (Jang et al. 2021), DANet (Yue et al. 2020), and R2F (sRGB2Flow) (Kousha et al. 2022), as shown in Fig. 4. RNSD accurately mimics real-world noise patterns across sensors and ISO settings, synthesizing realistic noise while preserving color and tonal accuracy.
Figure 1: Subjective results and AKLD (Yue et al. 2020) of various noise synthesis methods, including sRGB2Flow (Kousha et al. 2022), DANet (Yue et al. 2020), and C2N (Jang et al. 2021).
A
We trained an FFM (Juan et al., 2016) provided by Yahoo-Inc (2023) using the binary cross-entropy loss on the above data, both with binning of several resolutions and with splines defined on 6 sub-intervals. The numerical field was naïvely transformed to $[0,1]$ by simple normalization. We plotted the learned curve for every configuration in Figure 4. Indeed, low-resolution binning approximates poorly, a higher resolution approximates better, and a too-high resolution cannot be learned because of sparsity. However, Splines defined on only six sub-intervals approximate the synthetic functions $\{p_i\}_{i=0}^{7}$ quite well.
Next, we compared the test cross-entropy loss on 75,000 samples generated in the same manner, for several numbers of intervals used for binning and cubic Splines. For each number of intervals we performed 15 experiments to neutralize the effect of random model initialization. As is apparent in Figure 5, Splines consistently outperform in this theoretical setting. The test loss obtained by both strategies increases if the number of intervals becomes too large, but the effect is much more significant in the binning solution.
We conduct experiments with $k\in\{8,16,\dots,64\}$ as embedding dimensions, and each experiment is conducted using 50 trials of Optuna (Akiba et al., 2019) with its default configuration to tune the learning rate and the $L_2$ regularization coefficient. The models were trained using the AdamW optimizer (Loshchilov & Hutter, 2019). As an ablation study, to make sure that cubic splines contribute to the improvement we observe, we also conduct experiments with $0^{th}$ order splines applied after the above transformation, since it may be the case that the $\operatorname{arcsinh}$ transformation itself yields an improvement. We remind the readers that $0^{th}$ order splines are just uniform bins. For splines, we used 20 knots, which is roughly half the number of bins obtained by the strategy employed by the Criteo winners. The obtained test losses are summarized in Figure 7. It is apparent that cubic splines outperform both uniform binning ($0^{th}$ order splines) and the original binning procedure of the Criteo winners for most embedding dimensions. Moreover, we can see that cubic splines perform best when the embedding dimension is slightly larger than the best one for binning. We conjecture that, at least for the Criteo data-set, cubic splines require the extra expressive power yielded by a slightly higher embedding dimension to show their full potential.
Figure 5: Comparison of the test cross-entropy loss obtained with Splines and bins. Both methods suffer from sparsity issues as the number of intervals grows, but Splines are able to utilize their approximation power with a small number of intervals, before sparsity takes effect. The bands are 90% bootstrap confidence intervals based on multiple experiment repetitions: for each number of boundaries and numerical field type we ran 15 experiments to neutralize the effect of random initialization.
We ran 20 experiments with the tuned configurations to neutralize the effect of random initialization, and report the mean and standard deviation of the metrics on the test set in Table 1, where it is apparent that our approach outperforms binning on these datasets. These datasets were chosen since they contain several numerical fields, and are small enough to run many experiments to neutralize the effect of hyper-parameter choice and random initialization at a reasonable computational cost, or time. They were also used in other works on tabular data, such as Gorishniy et al. (2021; 2022).
A
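To make the binning-versus-splines comparison concrete, the following is a minimal sketch of the two encodings of a numerical field using scikit-learn; the bin and knot counts are illustrative choices, not the configurations used in the experiments.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer, SplineTransformer

# Assumed 1-D numerical feature, already normalized to [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(1000, 1))

# Binning: one-hot vector per example; the model learns one embedding per bin.
binner = KBinsDiscretizer(n_bins=16, encode="onehot-dense", strategy="uniform")
binned = binner.fit_transform(x)            # shape (1000, 16), exactly one 1 per row

# Cubic splines: a few smooth, overlapping basis functions per example; the model
# learns one embedding per basis function and mixes them with these weights.
splines = SplineTransformer(n_knots=7, degree=3)
spline_feats = splines.fit_transform(x)     # shape (1000, n_knots + degree - 1)

print(binned.shape, spline_feats.shape)
```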
The rightmost chart in Figure 1 highlights a fundamental shortcoming of the PRP ranking - most of the dynamics it induces do not converge. Dynamics under the other two ranking functions, however, always converge. This is a key advantage of these functions.
As mentioned in §5, the PRP ranking function maximizes the users' welfare for a fixed profile. The reason why softmax ranking functions with high $\beta$ values nevertheless manage to achieve higher users' welfare than the PRP is that the PRP is only short-term optimal,
Another insight from Figure 1 is that the users' welfare of the PRP is roughly constant across $\lambda$ values.
A plausible explanation is that in the case of PRP and the tested $\lambda$ range, $\lambda$ has little, if any, impact on the behavior of publishers. This conjecture might also explain why the publishers' welfare of the PRP appears to linearly decrease with $\lambda$: the dynamics remain the same, but the term $-\lambda\cdot\sum_{i=1}^{n}d^{0}_{i}(x_{i})$ in the publishers' welfare gains more influence as $\lambda$ increases. Note that the PRP ranking function appears to sacrifice stability in pursuit of increased users' welfare.
To conclude this section, let us revisit the trends we discovered in light of the results we have already seen in §6. From this perspective, we can see how an increase in $k$ emphasizes the consequences of the instability of the PRP ranking function. The already low convergence ratio at $k=2$ further diminishes with increasing $k$. Simultaneously, the users' welfare, a measure that the PRP is designed to maximize in the short term, also experiences a minor decline as $k$ increases. In contrast, the examination of different $k$ values underscores the stability of both the softmax and the linear ranking functions, maintaining a convergence ratio of $1$ and roughly identical welfare measures across all tested $k$ values.
B
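For illustration, here is a minimal sketch of a softmax ranking function of the kind discussed above, showing how the inverse temperature beta interpolates between spreading exposure and a PRP-like deterministic ordering; the scores and beta values are assumed, and this is not the paper's exact definition.

```python
import numpy as np

def softmax_ranking(scores, beta):
    """Probability of placing each document first under a softmax ranking function.
    Large beta approaches the deterministic PRP ordering; small beta spreads exposure."""
    z = beta * (scores - scores.max())      # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

scores = np.array([0.9, 0.7, 0.4])          # assumed relevance scores of three publishers
for beta in (1.0, 5.0, 50.0):
    print(beta, softmax_ranking(scores, beta).round(3))
```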
For offline MARL, since baselines are tested in a decentralized style, i.e., all agents independently decide their actions with only local observations, MADiff-C is not meant to be a fair comparison but to show if MADiff-D fills the gap for coordination without global information.
Compared with centralized control, a more popular and widely-adopted setting is that each agent only makes its own decision without any communication with other agents, which is what most current works (Lowe et al., 2017; Rashid et al., 2020; Wang et al., 2023) dealt with. In this case, we can only utilize the current local observation of each agent $i$ to plan its own trajectory. To this end, the initial noisy trajectory is conditioned on the current observation of the agent $i$. Similar to the centralized case, by iterative diffusion steps, we finally sample the joint state sequence based on the local observation of agent $i$ as:
Compared to single-agent learning, offline multi-agent learning (MAL) has been less studied and is more challenging.
Datasets: we use the off-the-grid offline dataset (Formanek et al., 2023), including three datasets with different qualities for each map, e.g., Good, Medium, and Poor.
Similar to the single-agent case, direct supervised learning (BC) on the dataset behaves poorly when datasets are mixed quality.
D
In this section we work towards a message passing formulation of synthetic AIF. We start by reviewing AIF and the CFFG representation for a GFE objective for control. Further details on variational objectives for AIF and epistemic considerations can be found in (Koudahl et al., 2023).
We simulate a nested perception-action cycle, where on each trial the seller sends an action (offer) $\hat{\alpha}_s$ to the primary agent, and where the buyer sends actions (moves) $\hat{u}_t$ to the T-maze environment. Conversely, the T-maze environment reports observations $\hat{x}_t$ to the buyer, which in turn reports an observation $\hat{x}'_s$ to the seller. This setting thus defines two nested Markov blankets, where the seller can only interact with the T-maze through the buyer. Also note the difference in temporal scales; the buyer executes two actions ($T=2$) for each action of the seller. At the end of each trial, the posterior $q_s(\bm{\mathrm{A}}')$ is set as the prior $p_{s+1}(\bm{\mathrm{A}}')$ for the next trial.
For the initial simulation we set the reward probability $\alpha=0.9$ and reward utility $c=2$, and execute the perception-action cycle for $S=100$ consecutive trials on the CFFG of Fig. 7. The resulting minimum policy GFE over trials, grouped by time, is plotted in Fig. 8 (top left). It can be seen that the GFE decreases overall, as the agent improves its model of the environment. With an improved model better actions can be proposed, and the agent learns to first seek the cue and then visit the indicated reward arm.
AIF defines an agent and an environment that are separated by a Markov blanket (Kirchhoff et al., 2018). In general, at each time step, the agent sends an action to the environment. In turn, the environment responds with an outcome that is observed by the agent. The goal of the agent is to manipulate the environment to elicit desired outcomes.
The results in Fig. 10 illustrate how the agent consolidates the outcomes of epistemic policies in the goal statistics. For the goal at the first time step $\bm{\mathrm{c}}_{1,s}$, the agent learns to prefer a visit to the cue position. For the second time step $\bm{\mathrm{c}}_{2,s}$, the agent learns to prefer a visit to the reward position. This results in a learned extrinsic preference for epistemic policies, as illustrated by the diverging policy GFEs on the right.
C
Experiment setup. We split the data into $90/10$ train/test sets at random and repeat the experiment 10 times. We determined the best estimate of the rank of the true outcome matrix and the rank of the observation pattern using 9-fold cross-validation with MNN. We used 16-fold cross-validation to separately tune the other hyperparameters. Next, we ran both MNN and modified USVT on the training set in each experimental run. Using the tuned hyperparameters, we ran both algorithms for 10 experimental runs using different train/test splits to calculate performance metrics.
Now, we consider $R^2$ score, MSE, MAE, and max error as metrics to compare the estimates made by MNN and modified USVT against the true outcomes. The results of this experiment can be seen in Table 2. We can see that across all these metrics, MNN outperforms modified USVT. When comparing the $R^2$ scores, we see that the difference in the performance of the two algorithms remains relatively constant for different $\rho_n$ values. The results of these $R^2$-, MSE-, MAE-, and max error-based metrics all line up with the observations made about the relative bias of the estimates in the two algorithms, indicating that the latent factor clustering approach utilized by MNN is indeed de-biasing the data and leads to better estimates.
Table 1: Comparison of performance of MNN and USVT on Glance data. As can be seen, MSE for MNN is >28x better.
Results. As we can see from Fig. 4, the estimates from modified USVT are extremely biased. The estimates from MNN, however, appear to be minimally biased and in line with the ground truth. Moreover, from Fig. 3, we can see that the estimates made by modified USVT are very sensitive to outliers in the data, while the estimates from MNN are not. We also observe from Table 1 that the $R^2$, MSE, MAE, and max error of MNN are far better than those of modified USVT. Specifically, the MSE of MNN is >28x better compared to modified USVT. That is, MNN works significantly better on MNAR data.
Results. Before comparing the performance of MNN and modified USVT on the synthetic dataset, we examine the bias of the estimates for the full outcome matrix in both cases (experiments 1 and 2). As can be seen in Fig. 5, the distribution of estimates generated by MNN better approximates the true distribution of outcomes as the number of datapoints increases. Moreover, as $n$ increases, the shape of the distribution indicates that the results are less biased. However, the estimates generated by modified USVT do not display this same trend and do not appear to become less biased as the number of datapoints increases. This difference in bias can also be seen in Fig. 6 as $n$ is held constant at 1000. As $\rho_n$ decreases from 1 to $n^{-1/4}$, the bias of MNN barely changes but the bias of modified USVT increases significantly.
C
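As a rough illustration of the baseline being modified, the following sketch implements a plain USVT-style estimator (fill missing entries with zero, threshold singular values, rescale by the estimated observation probability, clip to the observed range); the threshold factor and the synthetic low-rank data are assumptions, and this is not the authors' modified USVT.

```python
import numpy as np

def usvt(M, mask, threshold_factor=2.0):
    """Minimal USVT-style sketch: zero-fill unobserved entries, keep only singular
    values above a threshold proportional to sqrt(n * p_hat), rescale, then clip."""
    n = max(M.shape)
    p_hat = mask.mean()                       # estimated observation probability
    filled = np.where(mask, M, 0.0)
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    tau = threshold_factor * np.sqrt(n * p_hat)
    k = (s >= tau).sum()
    est = (U[:, :k] * s[:k]) @ Vt[:k, :] / p_hat
    return np.clip(est, M[mask].min(), M[mask].max())

rng = np.random.default_rng(0)
truth = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))  # low-rank outcomes
mask = rng.random(truth.shape) < 0.3                                   # observed entries
est = usvt(truth, mask)
print("MSE on unobserved entries:", np.mean((est[~mask] - truth[~mask]) ** 2))
```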
Hypernetworks have demonstrated their effectiveness and versatility across a wide range of domains and tasks in deep learning. In this section, we discuss some of the important applications of hypernetworks and highlight their contributions to advancing the SOTA in these areas. (We have explored 50 important papers, arranged by publication year, while considering at least one application in each distinct problem setting; this is not an exhaustive list and it is possible that we may have missed important references.) We summarize the applications of hypernets as per our proposed categorization and also provide links to code repositories for the benefit of the researchers, wherever available, in the summary table.
Continual learning, also known as lifelong learning or incremental learning, is a machine learning paradigm that focuses on the ability of a model to learn and adapt continuously over time, in a sequential manner, without forgetting previously learned knowledge. Unlike traditional batch learning, which assumes static and independent training and testing sets, continual learning deals with dynamic and non-stationary data distributions, where new data arrives incrementally, and the model needs to adapt to these changes while retaining previously acquired knowledge. The challenge in continual learning lies in mitigating catastrophic forgetting, which refers to the tendency of a model to forget previously learned information when it is trained on new data. To address this, various strategies have been proposed, including regularization techniques, rehearsal methods, dynamic architectures, and parameter isolation. Oswald et al., [49] modeled each incrementally obtained dataset as a task and applied task-conditioned hypernets for continual learning – this helped to share information among tasks. To address the catastrophic forgetting issue, they proposed a regularizer for rehearsing task-specific weight realizations rather than the data from previous tasks. They achieved SOTA results on benchmarks and empirically showed that the task-conditioned hypernets have a long capacity to retain memories of previous tasks. Similarly, Huang et al., [29], Ehret et al., [20] applied task-conditioned hypernets to continual learning in reinforcement learning (RL).
Multitasking refers to the capability of a model to perform multiple tasks or learn multiple objectives simultaneously. It involves leveraging shared representations and parameters across different tasks to enhance learning efficiency and overall performance. Hypernets can be applied in the context of multitasking to facilitate the joint learning of multiple tasks by dynamically generating or adapting the model's parameters or architectures. Specifically, we can train task-conditioned hypernets for multitasking, where the embedding of a task acts as input to the hypernet that generates weights for the corresponding task. We can either generate the entire model for each task or generate only the non-shared parts of a multitasking network. The hypernets facilitate such models to share information across different tasks as well as have a specific personalized model for each task. For example, Mahabadi et al., [43] applied task-conditioned hypernets that share knowledge across the tasks as well as generate task-specific models and achieved benchmark results. Navon et al., [45] also studied task-conditioned hypernets for Pareto-front learning to address the conflicting gradients among different objectives and obtained impressive results on multitasking, including fairness and image segmentation.
Task-conditioned hypernetworks: These hypernetworks take task-specific information as input. The task information can be in the form of task identity/embedding, hyperparameters, architectures, or any other task-specific cues. The hypernetwork generates weights that are tailored to the specific task. This allows the hypernet to adapt its behavior accordingly and allows information sharing, through soft weight sharing of hypernets, among the tasks, resulting in better performance on the tasks. For example, Chauhan et al., 2024c [14] applied hypernets to solve the treatment effect estimation problem in causal inference, using an identity or embedding of the potential outcome (PO) functions to generate weights corresponding to each PO function. The hypernetworks enabled dynamic end-to-end inter-treatment information sharing among treatment groups and helped to calculate reliable treatment estimates in observational studies with limited-size datasets. Similarly, task-conditioned hypernets have been used to solve other problems, including multitasking [45], natural language processing (NLP) [24], and continual learning [49].
Few-shot learning is a sub-field of machine learning that focuses on training models to learn new concepts or tasks with only a limited number of training examples. Unlike traditional machine learning approaches that typically require large amounts of labeled data for each task, few-shot learning aims to generalize knowledge from a small support set of labeled examples to classify or recognize new instances. To address the practical difficulties of existing techniques to operate in high-dimensional parameter spaces with extremely limited-data settings, Rusu et al., [56] applied data-conditioned hypernets. They employed encoder-decoder based hypernet which learns a data-dependent latent generative representation of model parameters that shares information between different tasks through soft weight sharing of hypernets. They also achieved SOTA results and showed that the proposed technique can capture uncertainty in the data. Sendera et al., 2023a [61] also applied data-conditioned hypernet to few-shot learning by combining kernels and hypernets. The kernels were used to extract support information from data of different tasks that act as input to the hypernet which generates weights for the target task. Similarly, Zhao et al., [81], Zięba, [82], Sendera et al., 2023b [62] also applied hypernets, and utilized soft weight sharing, for few-shot learning.
A
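To illustrate the task-conditioned idea described above, here is a minimal forward-pass sketch of a hypernetwork that maps a task embedding to the weights of a small target network, so that all tasks share the hypernetwork parameters; the dimensions and the linear hypernetwork are assumptions chosen for the example, not any particular paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, in_dim, out_dim = 8, 4, 3

# Hypernetwork parameters: a single linear map from the task embedding to the
# flattened weights (and biases) of a small target network.
W_h = rng.standard_normal((in_dim * out_dim + out_dim, emb_dim)) * 0.1

def hypernet(task_embedding):
    """Generate target-network parameters conditioned on a task embedding."""
    theta = W_h @ task_embedding
    W = theta[: in_dim * out_dim].reshape(out_dim, in_dim)
    b = theta[in_dim * out_dim :]
    return W, b

def target_net(x, task_embedding):
    W, b = hypernet(task_embedding)
    return np.tanh(W @ x + b)

task_a, task_b = rng.standard_normal(emb_dim), rng.standard_normal(emb_dim)
x = rng.standard_normal(in_dim)
# The same input is processed with task-specific weights; knowledge is shared
# softly because both weight sets are produced by the same hypernetwork W_h.
print(target_net(x, task_a), target_net(x, task_b))
```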
We apply our PCE-based method to approximate non-polynomial functions. This transforms all benchmark programs into Prob-solvable loops, which allows using the static analysis tool Polar (Moosbrugger et al., 2022) to compute the moments of the program variables as a function of the loop iteration n𝑛nitalic_n.
In this section, we develop a method for the derivation of the exact moments of probabilistic loops that comply with a specified loop structure and functional assignments.
Our method for exact moment derivation for probabilistic loops with non-polynomial functions builds upon Prob-solvable loops.
We implemented the techniques for exact moment derivation for loops containing trigonometric or exponential polynomials, presented in Section 5, in the tool Polar.
We evaluate the technique for exact moment derivation using Polar on all benchmarks satisfying the general program structure of Listing 1 in Section 5.
C
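The following sketch illustrates the idea of replacing a non-polynomial function with a fixed-degree polynomial surrogate so that every loop assignment becomes polynomial and the loop falls into the Prob-solvable class handled by a tool such as Polar; a least-squares fit over samples stands in for the PCE construction here, and the distribution, degree, and function are assumed for illustration only.

```python
import numpy as np

# Assumed setting: a loop updates a variable with cos(theta), where theta is random.
# Replacing cos(theta) by a degree-4 polynomial surrogate makes the assignment
# polynomial, so moments can then be propagated symbolically over loop iterations.
rng = np.random.default_rng(0)
theta_samples = rng.normal(0.0, 0.5, size=10_000)

degree = 4
coeffs = np.polynomial.polynomial.polyfit(theta_samples, np.cos(theta_samples), degree)
poly_cos = np.polynomial.polynomial.Polynomial(coeffs)

# Compare the first and second moments of cos(theta) and of its polynomial surrogate.
exact = np.cos(theta_samples)
approx = poly_cos(theta_samples)
print("E[cos]  :", exact.mean(), "vs", approx.mean())
print("E[cos^2]:", (exact**2).mean(), "vs", (approx**2).mean())
```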
Routing attacks pose significant threats to FANETs, originating from nodes that bypass prevention methods and can cause dramatic damage to the network. Therefore, it is imperative to analyze these attacks to develop effective countermeasures. Despite the importance of routing security, there is a notable lack of studies focusing on the analysis of routing attacks in FANETs. Consequently, in our study, we aim to address this gap by conducting a comprehensive analysis of routing attacks.
3D GMM was employed to simulate the natural 3D flight of UAVs in a realistic manner, as demonstrated in [117]. The alpha parameter value of the 3D GMM, which provides a balance of randomness and predictability in the UAV's mobility, was initially set at 0.25 and then incrementally increased by 0.05 to create different network topologies. The simulation parameters used are summarized in Table V. Our previous study [38] was extended to include additional simulations where UAVs operate in a larger area with more nodes over longer simulation durations. Moreover, more realistic scenarios were implemented. For instance, whereas all nodes sent their data to a single mobile server in the previous study, in this study, nodes can communicate with each other and send the collected data to a stationary GBS located at the center of the simulation area.
In the analysis conducted in this study, four attacks against AODV were implemented in realistic simulation scenarios: sinkhole, dropping, blackhole, and flooding attacks.
This study covers four attacks against the widely used AODV protocol, each with different goals. Initially, a concise overview of AODV, 3D Gauss Markov Mobility (GMM), and the specific attacks is presented. Following this, the simulation results obtained from networks with diverse topologies are demonstrated and deeply analyzed. Details of the experimentation process are provided in the subsequent subsections.
The unique characteristics of UAVs and networks of UAVs are presented in detail, and then analyzed from a security perspective.
C
We find that the ISO can impact the optimality (i.e., choosing the best candidates) and fairness (i.e., treating similar candidates similarly) of the selected $k$ candidates, especially when the screener is human.
Here, position bias refers to the penalty (or premium) a candidate experiences due to where it falls on the ISO, as humans are predisposed to favor the items placed at the top of a list (Baeza-Yates, 2018; Athey and Ellison, 2011).
Again, these results are due to the low probability of top scores, for which the effect of the bias due to $\epsilon_1$ is not counter-balanced by $\rho$.
Here, the candidates for a job represent the items and the screener evaluating their profiles represents the decision-maker.
The former refers to a consistent screener; the latter refers to an inconsistent screener whose evaluation of candidates suffers over time due to the fatigue of performing a repetitive task.
A
Besides, we design a space aggregation module (SAM) to yield clear images, which combines the reciprocity of dual degradation priors. We perform extensive experiments on several datasets to analyze and validate the effectiveness and superiority of our proposed DASUNet compared to other state-of-the-art methods.
In this paper, we have proposed a dual degradation-inspired deep unfolding method (DASUNet) for low-light image enhancement. Specifically, we design a dual degradation model (DDM) based on the degradation specificity among luminance and chrominance spaces. An alternative optimization solution is proposed to solve it and is unfolded into specified network modules to construct our DASUNet, which contains a new prior modeling module to capture the long-range prior information and a space aggregation module to combine dual degradation priors. Extensive experimental results validate the superiority of our DASUNet for low-light image enhancement. In the future, we will explore more low-level vision tasks based on DDM.
To push the frontiers of deep unfolding-based image enhancement, we propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement, which is shown in Fig. 2. The motivation originates from the degradation specificity of low-light images between luminance and chrominance spaces [47, 1, 18, 48]. On this basis, we formulate the task of low-light image enhancement as the optimization of a dual degradation model, which inherits the physical deterioration principle and interpretability. Further, an alternating optimization solution is designed to solve the proposed dual degradation model. Then, the iterative optimization rule is unfolded into a deep model, composing DASUNet, which enjoys the strengths of both the physical model and the deep network. Based on the differences between luminance and chrominance spaces, we customize two different prior modeling modules (PMM) to learn different prior information. In the luminance channel, we design a luminance adjustment Transformer to modulate brightness strength, while in the chrominance channel, a wavelet decomposition Transformer is proposed to model high-frequency and low-frequency information, leveraging the advantages of convolutions and Transformers [12, 31, 68, 4].
We propose a dual degradation model based on the degradation specificity of low-light images in different spaces. It is unfolded to form a dual degradation-inspired deep unfolding network for low-light image enhancement, which can jointly learn two degradation priors from the luminance space and the chrominance space. More importantly, the dual degradation model empowers DASUNet with explicit physical insight, which improves the interpretability of the enhancement model.
Dual degradation model. Based on the degradation specificity between luminance and chrominance spaces, we proposed a DDM for low-light image enhancement. To demonstrate its effectiveness, we conduct comparison experiments on various color spaces and degradation models on the LOL dataset, the results of which are presented in Table 3. Single model and triple model denote one degradation model and three degradation models on the corresponding spaces. One can see from Table 3 that the design philosophy behind DDM is effective. The single model cannot consider the degradation specificity on different spaces, and the triple model could lose the mutual benefits between homogeneous degradation spaces. A visual comparison of different degradation models is illustrated in Fig. 7(a). The single model introduces some visual artifacts and the triple model yields some blurs, while our model produces clearer results.
C
Notice that the utility functions take the same form as the one-specialist case, whose analogous observation is proven in Appendix 8.3. There, we proved that a utility of the form $A\delta^{\frac{k_0}{k_0-1}}+B\delta(1-\delta)^{\frac{1}{k_1-1}}$ is unimodal for $A,B>0$, $k_0,k_1\geq 2$, $\delta\in[0,1]$.
For simplicity, in the multi-specialist case, we prove unimodality for the case of quadratic costs (this is all we need to arrive at the bargaining solutions reported in the paper). We show that the same proof holds for both the generalist and specialists’ utilities:
Solving the powerful-$G$, powerful-$D$, vertical monopoly or other bargaining solutions consists in maximizing players' utilities either separately or combined into a joint utility. This is possible once parameters are specified; however, we cannot produce a closed-form expression for the general polynomial case because doing so would require solving for the zeroes of a polynomial of high degree. Therefore, for the remainder of this section, we will demonstrate the solution steps using parameter values $k_0,k_1=2$. We call this the case of quadratic costs. We choose the quadratic case for clarity and exposition, though we note that other solutions with other parameter values can be calculated using analogous steps.
It is important to note that the three regimes defined in this section can describe a specialist’s strategy in either the 1-specialist or multi-specialist fine-tuning game. In the 1-specialist case, the potential strategies describe counterfactual outcomes that depend on the particular cost and revenue functions of the specialist. In the multi-specialist game, the strategies are ways of grouping the domains and all can exist simultaneously.
For ease and without loss of generality, we assume the set of domains is in descending order of value $c_i$, and we'll consider each domain one at a time to determine whether the domain has $\delta_i=1$ using the condition derived in Lemma 10.1. We start by analyzing the case where $i=n$, that is, we're at the last possible domain, and for all others we'll derive a recursive relation to prove the Lemma.
A
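As a quick illustration of the quadratic-cost case referred to above, the following worked computation specializes the generic utility to $k_0=k_1=2$ and records its first-order condition; it is a sketch of the calculus only, not the paper's derivation or notation.

```latex
% Quadratic-cost specialization (k_0 = k_1 = 2) of the generic utility, as an
% illustrative first-order-condition computation (not reproduced from the paper):
\[
  U(\delta) \;=\; A\,\delta^{2} + B\,\delta(1-\delta)
            \;=\; (A-B)\,\delta^{2} + B\,\delta,
  \qquad A,B>0,\ \delta\in[0,1].
\]
\[
  U'(\delta) = 2(A-B)\,\delta + B = 0
  \;\Longrightarrow\;
  \delta^{*} = \frac{B}{2(B-A)} \quad\text{when } B > 2A,
\]
% otherwise the maximum over [0,1] is attained at an endpoint, consistent with
% the unimodality of U on the unit interval.
```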
The average occupied pixel area for a single object in BEE24 is one-fourth of that in the second-ranked dataset, GMOT-40, highlighting the challenge of detecting and tracking smaller objects.
BEE24 has a much larger maximum duration (i.e., 200 s) and number of tracks for a single video than several common MOT datasets. For example, the maximum duration and tracks are an order of magnitude larger than those in GMOT-40.
MOT17 and MOT20. We compare the proposed TOPICTrack tracker with the state-of-the-art trackers on the MOT17 and MOT20 test sets.
Furthermore, the maximum number of annotations for a single video in BEE24 far exceeds those of other datasets, except for MOT20. MOT20 focuses on crowded scenes and therefore has the highest number of annotations. However, the objects’ appearances are easily identifiable, and their slow motion tends to be linear.
In fact, in this case, the motion pattern of the bee tends to be linear; thus, using a motion model based on linear assumptions for association could keep track of the bee.
C
Tab. 4 presents the results for the explanation generation subtask. Models relying solely on opinions ($C$) and emotion representations ($E$) as input exhibit significantly poorer performance across all metrics compared to other baselines. For instance, the BART model without dialogue ($D$) underperforms its counterpart with dialogue by 0.24 BLEU scores, indicating the insufficiency of relying solely on two input opinions. Incorporating textual descriptions of images ($I$) for text-based models leads to noteworthy improvements, as they provide valuable information
We consider two generative text models (BART [28], T5-large [48]) and a recently introduced multi-modal model NLX-GPT [53] for emotion and explanation generation for both Questioner and Answerer. Since the Answerer always has access to the image $I$, we include $I$ in the form of text from the pretrained BLIP [31] model for the text models T5 and BART. For the Questioner, we experiment with two variants: the model (1) w/o and (2) w/ access to the image $I$. For (2), we also include $I$ in the input similar to the Answerer case and predict the emotion and explanation of the Questioner after observing the image. We fine-tune the NLX-GPT model [53] for generating emotional explanations, which can accept both visual and language inputs and produce answers and explanations. We train the model from the perspectives of both the Questioner and the Answerer using the same input as the previously mentioned language models. The image $I$ is passed through the model's image encoder before being fed to the decoder for final emotion and explanation generation (see the supplementary for implementation details).
The Affective Visual Dialog task involves three subtasks: dialog-based question answering (Sec. 4.1), affective explanation generation (Sec. 4.2), and dialog-based emotion classification (Sec. 4.3). We split the dataset into train,
The Questioner asks questions about a hidden image, which is intentionally concealed to mitigate any visual priming biases. These biases can cause models to operate under the assumption that the questioner will inquire about the objects depicted in the image [4, 23]. The objective of the Questioner is to explore the hidden image. On the other hand, the second agent (Answerer) observes the image and provides answers to the questions posed by the Questioner. The unique setup in our user interface design is that to initiate the conversation, we reveal two opposing opinions (a negative and a positive caption) from the combined ArtEmis v1 and v2 datasets [3, 40] associated with the artwork. Our intuition of triggering the dialog with the two opposite opinions is to counter the biases of the Questioner when starting the task and encourage more open-mindedness towards the emotion triggered by the hidden visual signal (i.e., to open the possibility that the hidden visuals may be perceived subjectively positively or negatively). After 10 question-answer pair exchanges, the Questioner is asked to provide an emotional class response and explanation for what informed the Questioner’s decision based on the dialog. Then, the hidden image is revealed to the Questioner, and both agents are required to indicate their emotional responses in the presence of visual cues and conversation. This final question after revealing the image allows explorations regarding the emotion arising from only dialog vs. those that are also informed by visual stimuli. For the live chat framework, we use Mephisto tool [60] with our customized front-end interfaces. Fig. 3 shows an example of collected dialog; more examples and interfaces are attached in the supplementary.
Table 4: Results on the Affective Explanation Generation setup for the Questioner. $I,E,C,D$ represent the image, the 2 opposed emotion labels, the associated opinions, and the dialog defined in Sec. 4.
D
We also evaluate the performance of the text retrieval task by experimenting on the test split of the flickr30k dataset (Young et al., 2014). This dataset consists of five caption texts for each photo, and these texts are similar to each other. We use the first caption text vector to retrieve the top 5 similar sentences using faiss (https://github.com/facebookresearch/faiss). The strict accuracies (a retrieval counts as correct only if all top-five retrieved sentences equal the five reference sentences) of AnglE, SimCSE (supervised), and SBERT are 12.9%, 10.4%, and 5.2%, respectively. This evidence indicates the effectiveness of using AnglE for the retrieval task.
In addition, we evaluate the performance of text embedding in transfer tasks. In particular, our approach involves training text embedding on STS tasks and then transferring it to seven other kinds of tasks. Notably, AnglE outperforms baselines, showing a significant improvement of 4.34% and 4.48% over DiffCSE and SimCSE, respectively. These results suggest that AnglE can produce better embeddings that effectively improve performance in various tasks. A more detailed description of the experiment can be found in section A.2.
To provide a comprehensive analysis, we also evaluate the performance of the baselines in the non-transfer setting. We train the baselines on the train set and evaluate them on the test or validation set. Two typical models, SimCSE and SBERT, representing contrastive and supervised learning, are compared with our model. The results of the non-transfer STS tasks are listed in Table 2, where we evaluate the baselines on four short-text datasets (MRPC, STS-B, QQP, and QNLI) and one long-text dataset (GitHub Issues Similarity Dataset). SimCSE notably performs poorly compared to SBERT and AnglE in the non-transfer setting. This is due to the limitation of the small-scale training set, as there are not enough samples for SimCSE to effectively learn representations. Furthermore, the datasets only provide pair-supervised data, namely $(x,x^{+})$ or $(x,x^{-})$, which prevents SimCSE from utilizing its hard negative objective that relies on triple-supervised data $(x,x^{+},x^{-})$. This limitation might affect its performance. On the other hand, AnglE consistently outperforms SBERT, achieving an absolute gain of 5.52%. This can support the idea that angle-optimized text embedding can mitigate the negative impact of the cosine function, resulting in better performance. Furthermore, we explore applying the long text model RAN$_{\text{base}}$ (86M parameters) (Li et al., 2023) as the backbone to test the performance on long text. The results show that AnglE-BERT outperforms AnglE-RAN across all short text datasets. This advantage might be attributed to the larger parameter size of BERT and its proficiency in handling short texts. However, we observe a remarkable shift in long-text STS. AnglE-RAN outperforms AnglE-BERT in this scenario, suggesting that AnglE-RAN can handle long texts well despite having fewer parameters.
To comprehensively evaluate the STS tasks, we have introduced the GitHub Issues Similarity Dataset to evaluate model performance on the long-text STS task. Furthermore, we have proposed an LLM-supervised learning method to cope with the scarcity of domain-supervised data. Extensive experimental results have demonstrated that AnglE outperforms baselines, indicating that AnglE can handle both short and long-text STS tasks and work effectively in various scenarios. In future work, we plan to explore the application of AnglE in real-world scenarios and provide further insights into AnglE.
In this section, we first introduce the baselines, then present the results of the transfer STS tasks, followed by the results of the non-transfer STS tasks, and finally provide a summary.
A
C2: The percentage of the dispensed amount that has been sunk should be within a certain (undisclosed) range, and
Fig. 11 shows the (topological) parameter-free nature of FaSTM∀N. This validates Motivations 2 and 3 described in Section 2 - i.e., complex money laundering networks involve transactions among several parties, covering longer distances (more than 2 hops). In the leftmost graph, we use diameter on the x-axis instead of the number of hops, because FaSTM∀N can detect flows with varying path lengths. For both the detected (in-scope) flows and the suspicious cases, we observe a diverse range of values for the diameter and for the number of dispense and sink accounts. Fig. 12 shows two complex real flows that would go unnoticed by FlowScope, even after multiple runs with different configurations. FlowScope has a parameter k (the k in the k-partite graph) for controlling the number of hops: with k=3, there are 2 hops in the flows; with k=4, 3 hops; and so on. For the two cases, FaSTM∀N detected the complete flows with all the relevant bank accounts contributing to the suspiciousness of the flows. To detect the same flows using FlowScope, it would have to be run with 2 configurations of k for the first case and 3 for the second. For case 1, both flows would go undetected because individually neither of them qualifies for C3 - i.e., the maximum flow of funds does not reach the minimum threshold set in the criterion. For case 2, the flow detected by FlowScope with k=4 would go undetected; the other two would qualify. This still lowers the quality of the cases, as 2 additional suspicious accounts in that flow would go undetected. We cannot further show or even summarize all the cases because of Reason 2 (mentioned in Section 5).
C2: The percentage of the dispensed amount that has been sunk should be within a certain (undisclosed) range, and
C3: The maximum flow of money, respecting the temporal order, should be greater than a certain (undisclosed) threshold.
The flow looks interesting because of the cyclic behaviour. On closer inspection (middle graph), after taking into account the chronological order of transactions, the cyclic behaviour disappears. The aim is to convert the leftmost graph into the rightmost graph by respecting the temporal order. It can be observed that the number of nodes and edges could potentially explode with this type of representation. We show in Section 4 how we limit this explosion by taking key AML knowledge into account.
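A minimal sketch of building such a temporally ordered representation is given below; the exact representation used by FaSTM∀N is not reproduced here, so the (account, timestamp) node splitting, the zero-amount carry-forward edges, and the networkx construction are illustrative assumptions.

```python
import networkx as nx

def time_respecting_graph(transactions):
    """transactions: iterable of (src, dst, amount, ts) tuples.

    Each account occurrence becomes an (account, ts) node, transaction edges
    connect nodes at the same timestamp, and carry-forward edges only point
    forward in time. A static cycle A -> B -> A then only survives if the
    funds genuinely return at a later time.
    """
    g = nx.DiGraph()
    last_seen = {}                                  # account -> timestamp of its latest node
    for src, dst, amount, ts in sorted(transactions, key=lambda t: t[3]):
        for acct in (src, dst):
            node = (acct, ts)
            if acct in last_seen and last_seen[acct] < ts:
                # carry the account's balance forward in time
                g.add_edge((acct, last_seen[acct]), node, amount=0.0, kind="carry")
            g.add_node(node)
            last_seen[acct] = ts
        g.add_edge((src, ts), (dst, ts), amount=amount, kind="txn")
    return g
```

Because every account occurrence becomes its own node, the sketch also makes clear why the node and edge counts can explode, as noted above.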
C
While the proposed PPO-based reinforcement learning (RL) approach for DC-DC boost converter control shows significant promise,
The performance of the PPO-based control approach is compared with traditional control techniques, including optimized proportional-integral (PI) control and artificial neural network (ANN) control.
The proposed PPO-based reinforcement learning (RL) method for DC-DC boost converter control does have slightly higher computational demands compared to traditional control methods. This is primarily due to the complexity of the PPO algorithm and the
significantly degraded, indicating that the PI control method struggled to handle the input voltage variation effectively. Quantitatively comparing the performance of these control methods, RL control emerged as the superior method as depicted in Table 7. It exhibited the ability to seamlessly handle the input voltage variation and maintain step response characteristics similar to those observed under fixed input voltage conditions. This quality positions RL control as the best control method among the three in terms of adapting to varying input voltage and preserving stable step response characteristics. The RL control method’s effectiveness can be attributed to its capacity to learn and optimize the control policy based on the specific dynamics and requirements of the boost converter system. This adaptability allows RL-based control to consistently deliver reliable and robust performance, even in the presence of changing input voltage conditions.
The computational complexity of the proposed RL-based control method is slightly higher than that of traditional control methods.
D
Existing studies have three key limitations: they demand extra images and fine-tuning of text-to-image models with limited scope for new concepts; they cannot learn from user interaction history and need detailed user prompts; and there is a lack of public, personalized text-to-image datasets that truly reflect user preferences.
Figure 3: Dataset statistics and distribution. Left: Proportion of users by the number of historical prompts they have. Note that each user has a minimum of 18 historical prompts, as we have excluded users with fewer prompts from the dataset. Right: Proportion of prompts by length. Best viewed in color.
As mentioned in Section 5.3 and Table 3, we have conducted experiments to demonstrate the performance of our method on two shorter types of input prompt $x_t$. The quantitative results in Table 3 show that our method performs robustly even with very short prompts. Here, we provide two qualitative examples along with the users' ground-truth generated images in Figure 12. These results show that, by leveraging historical prompts, our method is capable of rewriting the input prompt $x_t$ properly and further generating images that are very close to the users' true intentions, either in terms of style or objects.
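As a hedged illustration of the rewriting step only (the paper's actual rewriter and prompt template are not reproduced here), one way to condition an instruction-following LLM on a user's history is sketched below; `build_rewrite_prompt` and `call_llm` are hypothetical names, not part of the original work.

```python
def build_rewrite_prompt(history_prompts, current_prompt, max_history=10):
    """Formats the user's historical prompts plus the short current prompt x_t
    into a single instruction for an LLM-based rewriter."""
    history = "\n".join(f"- {p}" for p in history_prompts[-max_history:])
    return (
        "The following are prompts this user previously wrote for a text-to-image model:\n"
        f"{history}\n\n"
        "Rewrite the new, possibly underspecified prompt below so that it reflects the "
        "style and subjects this user prefers, while keeping its original intent:\n"
        f"New prompt: {current_prompt}\nRewritten prompt:"
    )

# rewritten = call_llm(build_rewrite_prompt(user_history, x_t))  # call_llm is a placeholder backend
```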
Recently, researchers have found that optimizing prompts can boost the performance of LLMs on several NLP tasks and even in search systems. For example,
In search systems, LLMs are used to generate query expansion terms in [10], while Wang et al. [26] instead use them to reformulate queries.
C
In this subsection, we propose a method to enhance node classification performance by integrating the objective function proposed in this study with existing GNN-based node classification methods, which do not directly utilize higher-order structures in training, including the latest semi-supervised and unsupervised learning techniques. Existing methodologies employ various strategies such as integrating random walks with the Word2Vec model [39], adjusting exploration and return variables of random walks [42], leveraging second-order proximities between nodes [40], utilizing 3-motifs for distinguishing high-order graph structures [24], or utilizing attention mechanisms [22] to train embedding vectors for all nodes in the network and ultimately validate classification performance based on the output layer using simple neural network structures like linear maps. To integrate our approach with these, we apply the softmax function to the $l$-dimensional output vector (where $l=|I|$ is the number of clusters or labels) of the existing architecture to create a probability distribution for each node. We then set these distributions as the initial parameters and use the proposed objective function to retrain. In other words, we use the output of the GNN architectures via the softmax function instead of the RW to initialize our proposed optimization method (5). This strategy allows us to leverage ($l$-dimensional) classification vectors resulting from the trained embedding vectors of GNN-based architectures and use these classification vectors to learn the various higher-order structures included in the network using the proposed objective function, potentially improving classification performance.
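A minimal sketch of this seeding step is given below, assuming a PyTorch GNN whose $l$-dimensional per-node logits are available; storing log-probabilities as the initial parameters is an implementation choice for the sketch, not something specified in the text. The retraining loop itself is sketched after the training details below.

```python
import torch
import torch.nn.functional as F

def init_from_gnn(gnn_logits: torch.Tensor) -> torch.nn.Parameter:
    """Turn the l-dimensional output of a trained GNN (GAT, GCN, SGC, ...) into
    initial parameters for the proposed objective, replacing the RW-based init.
    Storing log-probabilities keeps softmax(params) equal to the GNN's
    distribution at the start of retraining."""
    with torch.no_grad():
        probs = F.softmax(gnn_logits, dim=1)      # per-node probability distribution
    return torch.nn.Parameter(torch.log(probs + 1e-12))

# params = init_from_gnn(gat_logits)  # gat_logits stands in for the GNN's output layer
```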
Sixth, according to experimental results with benchmark data, using the training and validation data employed in Planetoid, GCN, GAT, and SGC as prior information improves mean accuracy by 2.2%, 0.7%, and 0.3% for Cora, Citeseer, and Pubmed, respectively. The objective function used in this study is intended to promote that all nodes forming higher-order simplices exhibit similar node probability distributions. As a result, if the network has a large number of inter-simplices, where nodes producing higher-order simplices belong to distinct labels or clusters, achieving good performance with the proposed objective function becomes challenging. The proportions of inter-cluster simplices among all higher-order simplices beyond 2-simplices are 16.4%, 18.8%, and 22.6% (therefore, intra-cluster simplices ratio would be 83.6%, 81.2%, 77.4%) for Cora, Citeseer, and Pubmed, respectively. This observation explains why a higher accuracy improvement is observed with Cora compared to Citeseer and Pubmed.
In the training process, Glorot initialization [57] is utilized to initialize the parameters, and the Adam SGD optimizer [58] is employed for optimization. For all experiments, the learning rate is set to 0.4 and the number of epochs to 10. The proposed objective function aims to learn the probability distribution assigned to each node in the network. We apply the softmax function to describe node probability distributions. At the end of the training process, the argmax function is used to obtain the final classification results. The performance is evaluated using the accuracy metric.
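A small, self-contained sketch of this training setup follows. The simplex-agreement term is only a stand-in for the proposed objective function (5), and the toy network size, simplices, and prior labels are made up for illustration.

```python
import torch
import torch.nn.functional as F

# toy data standing in for a real network
n_nodes, n_labels = 10, 3
simplices = [(0, 1, 2), (3, 4, 5, 6)]                            # assumed higher-order simplices
prior_idx, prior_y = torch.tensor([0, 3]), torch.tensor([0, 1])  # prior information

W = torch.empty(n_nodes, n_labels)
torch.nn.init.xavier_uniform_(W)                          # Glorot initialization [57]
W.requires_grad_(True)
opt = torch.optim.Adam([W], lr=0.4)                       # Adam optimizer [58], learning rate 0.4

for epoch in range(10):                                   # 10 epochs
    opt.zero_grad()
    p = F.softmax(W, dim=1)                               # node probability distributions
    # stand-in objective: agreement within simplices plus fit to the prior labels
    loss = sum(((p[list(s)] - p[list(s)].mean(0)) ** 2).sum() for s in simplices)
    loss = loss + F.cross_entropy(W[prior_idx], prior_y)
    loss.backward()
    opt.step()

pred = F.softmax(W, dim=1).argmax(dim=1)                  # argmax gives the final classification
accuracy = (pred[prior_idx] == prior_y).float().mean()    # accuracy metric
```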
In this subsection, we propose a method to enhance node classification performance by integrating the objective function proposed in this study with existing GNN-based node classification methods, which do not directly utilize higher-order structures in training, including the latest semi-supervised and unsupervised learning techniques. Existing methodologies employ various strategies such as integrating random walks with the Word2Vec model [39], adjusting exploration and return variables of random walks [42], leveraging second-order proximities between nodes [40], utilizing 3-motifs for distinguishing high-order graph structures [24], or utilizing attention mechanisms [22] to train embedding vectors for all nodes in the network and ultimately validate classification performance based on the output layer using simple neural network structures like linear maps. To integrate our approach with these, we apply the softmax function to the $l$-dimensional output vector (where $l=|I|$ is the number of clusters or labels) of the existing architecture to create a probability distribution for each node. We then set these distributions as the initial parameters and use the proposed objective function to retrain. In other words, we use the output of the GNN architectures via the softmax function instead of the RW to initialize our proposed optimization method (5). This strategy allows us to leverage ($l$-dimensional) classification vectors resulting from the trained embedding vectors of GNN-based architectures and use these classification vectors to learn the various higher-order structures included in the network using the proposed objective function, potentially improving classification performance.
In this experiment, we integrate GNNs with the proposed objective function and evaluate the performance gains using the Cora, Citeseer, and Pubmed datasets. GAT [22] uses an attention mechanism to learn node embeddings. The node features are created using the bag-of-words representation of documents, with the dimensions of the node features for Cora, Citeseer, and Pubmed being 1433, 3703, and 500, respectively. These features are used to learn embedding vectors using a multi-head attention structure, and a linear map is employed to generate outputs corresponding to the number of clusters. In our method, the total training and validation data used in [22]—640 for Cora, 620 for Citeseer, and 560 for Pubmed—are treated as the overall prior information. In contrast to [22], which evaluated performance using 1,000 test nodes for each dataset, we assess performance on all remaining nodes not designated for training or validation. The treatment is similarly applied to Planetoid [41] (transductive experiment), GCN [20] (with 64 hidden units), and SGC [50]. Table II summarizes the results of the integration experiments.
D
Fig. 7(a) shows fundamental diagrams for the mixed autonomy traffic flow with commercially available ACC vehicles at different market penetration rates (MPRs) ranging from 0% to 100% without attack. It is observed that the capacity decreases from around 1,900 (veh/hr) to 1,250 (veh/hr) as the MPR increases from 0% to 100%. This is consistent with the findings of [41] showing that capacity decreases with the increase of the MPR of commercially available ACC vehicles, since the headway of such vehicles is increased for safety purposes as opposed to HVs. When the MPR increases, the critical density tends to decrease with more scatter observed in the congested regime of the fundamental diagram, as seen in Fig. 7(a), indicating more oscillations in the traffic flow.
Fig. 7(d) shows the fundamental diagrams for Scenario 3. In the absence of attacks, the fundamental diagram at 0% MPR (Fig. 7(d)) is the same as the normal case without attack (Fig. 7(a)). The fundamental diagram for the MPR of 60% is shown in Fig. 7(d), which is also similar to that of Fig. 7(a) for the same reason as in Scenario 2.
In this article, we have considered three types of candidate cyberattacks on AVs with low levels of automation, i.e., ACC vehicles. We study the impacts of these attacks on both microscopic and macroscopic traffic flow dynamics. Motivated by these impacts, we then develop a machine learning based approach, i.e., a GAN-based model, for real-time detection of potential attacks on ACC vehicles. The proposed model is holistically evaluated considering all three types of attacks introduced. Numerical results on a set of candidate attacks selected for illustration show that the proposed model can effectively detect the vast majority of attacks with only a few misclassifications. The model developed is also observed to outperform some other existing ones in detecting abnormal ACC vehicle trajectory.
Fig. 7(c) shows the fundamental diagrams for Scenario 2. As in Scenario 1, the fundamental diagram at 0% MPR is the same as Fig. 7(a) (without attack). The fundamental diagram for the MPR of 60% in this case is similar to the normal scenario shown in Fig. 7(a). The capacity $Q$ and the shape of the fundamental diagram are also similar to the scenario in the absence of attacks. This is consistent with the findings of Section IV-A concerning impacts of Type II attacks on individual vehicles, where the speed, spacing gap, and position are similar to the scenario of normal traffic. Consequently, the resulting fundamental diagram does not differ much from normal.
In this section, we present numerical results on fundamental diagrams with ACC vehicles being attacked under the three types of attacks introduced before. For comparison with the case without attacks (Fig. 7(a)), we show fundamental diagrams at the ACC MPR of 0%, 60%, and 100%.
D
The Indefinite Datasets are for the Causal Discovery in Indefinite Data (CDID) task (producing the causal structures and causal representations as discussed in Section 2.3.3), contributing to the
DIR [62], and our method (Ours). We provide the details about all datasets, baselines and implementation in Appendix C.
and input_instruction as shown in Step 1.
We conduct experiments on the CDID (causal discovery in indefinite data) task and on causal consistency, using the Causalogue and Causaction datasets.
We use EDKA-GM as the Non-causal model, and biCD as the causal model (the backbone model with our method in this task).
A
Chief Complaint ($CC$): The primary reason or concern for which the patient seeks medical attention.
Present Illness ($PI$): A detailed account of the symptoms and problems leading up to the current visit, typically in chronological order.
Algorithm 1 shows the detailed steps described in the methodology. The goal is to efficiently utilize the power of transformer encoders to capture context in long clinical texts.
Table 2: Preliminary results for different experimented baseline models on mortality prediction and length of stay prediction tasks in macro-averaged % AUROC. The CORe and DischargeBERT models outperform the baseline model performances, leading to their selection in the main experiments of our study.
The final prediction $A$ for the clinical note $N$ is then obtained by consolidating all $A_i$, typically through averaging or another fusion strategy.
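A rough sketch of this chunk-then-consolidate step (not the paper's exact Algorithm 1) is shown below; the bert-base-uncased checkpoint, the 512-token chunk size, and mean-pooling of the chunk logits are stand-ins for the CORe/DischargeBERT setup mentioned above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"   # placeholder; the experiments use CORe / DischargeBERT checkpoints
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def predict_long_note(note: str, chunk_tokens: int = 512) -> torch.Tensor:
    """Split the clinical note N into chunks, score each chunk with the
    transformer encoder to get A_i, and consolidate by averaging to obtain A."""
    ids = tokenizer(note, add_special_tokens=False)["input_ids"]
    step = chunk_tokens - 2                                # leave room for [CLS]/[SEP]
    chunk_logits = []
    for start in range(0, len(ids), step):
        chunk_text = tokenizer.decode(ids[start:start + step])
        inputs = tokenizer(chunk_text, truncation=True, max_length=chunk_tokens,
                           return_tensors="pt")
        with torch.no_grad():
            chunk_logits.append(model(**inputs).logits)    # A_i for chunk i
    return torch.cat(chunk_logits).mean(dim=0)             # averaged prediction A
```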
A
Can we use the reaction to adversarial perturbations as an OSR score to separate familiar and novel samples?
We call an attack informed if the adversary has access to the binary set-labels of the input, i.e., closed-set vs. open-set, and uninformed if that information is not available (Huang et al., 2011).
We consider a deep neural network $f_{\bm{\theta}}:\mathcal{X}\to\mathbb{R}^{|\mathcal{F}|}$ parameterized by $\bm{\theta}$ for modelling $p(y\mid\bm{x},\,y\in\mathcal{F})$. Here, $f_{\bm{\theta}}$ maps an input to a vector of logits that are normalized using the softmax function $\sigma:\mathbb{R}^{|\mathcal{F}|}\to(0,1)^{|\mathcal{F}|}$ to obtain pseudo-probabilities for the familiar categories.
In open-set recognition (OSR), a set $\mathcal{N}$ of novel categories is additionally considered, and a test set containing inputs from both novel and familiar classes is used to evaluate the OSR performance:
We consider an input space $\mathcal{X}$ and a set $\mathcal{F}$ of familiar categories, i.e., the closed set.
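To make the opening question concrete, one simple instantiation of "reaction to adversarial perturbations" as an OSR score is sketched below: perturb the input with a single FGSM step against the predicted familiar class and measure how far the maximum softmax probability moves. The choice of epsilon, the single-step attack, and the sign of the score are assumptions for illustration, not the method claimed in the text.

```python
import torch
import torch.nn.functional as F

def adversarial_reaction_score(model, x, eps=0.01):
    """OSR score: drop of the maximum softmax probability after one FGSM step
    taken against the currently predicted class (an uninformed attack, since
    no closed-set/open-set labels are used)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                                   # f_theta(x), shape (batch, |F|)
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)                       # closed-set confidence and prediction
    loss = F.cross_entropy(logits, pred)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x_adv = x + eps * grad.sign()                   # FGSM perturbation
        conf_adv = F.softmax(model(x_adv), dim=1).max(dim=1).values
    return (conf - conf_adv).detach()                   # how strongly the prediction reacts
```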
D