id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2504.05855 | Zhang Dong | Xingzu Liu, Songhang deng, Mingbang Wang, Zhang Dong, Le Dai, Jiyuan
Li, Ruilin Nong | Enhancing Coreference Resolution with Pretrained Language Models:
Bridging the Gap Between Syntax and Semantics | acl submission | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models have made significant advancements in various natural
language processing tasks, including coreference resolution. However,
traditional methods often fall short in effectively distinguishing referential
relationships due to a lack of integration between syntactic and semantic
information. This study introduces an innovative framework aimed at enhancing
coreference resolution by utilizing pretrained language models. Our approach
combines syntax parsing with semantic role labeling to accurately capture finer
distinctions in referential relationships. By employing state-of-the-art
pretrained models to gather contextual embeddings and applying an attention
mechanism for fine-tuning, we improve the performance of coreference tasks.
Experimental results across diverse datasets show that our method surpasses
conventional coreference resolution systems, achieving notable accuracy in
disambiguating references. This development not only improves coreference
resolution outcomes but also positively impacts other natural language
processing tasks that depend on precise referential understanding.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:33:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Xingzu",
""
],
[
"deng",
"Songhang",
""
],
[
"Wang",
"Mingbang",
""
],
[
"Dong",
"Zhang",
""
],
[
"Dai",
"Le",
""
],
[
"Li",
"Jiyuan",
""
],
[
"Nong",
"Ruilin",
""
]
] | TITLE: Enhancing Coreference Resolution with Pretrained Language Models:
Bridging the Gap Between Syntax and Semantics
ABSTRACT: Large language models have made significant advancements in various natural
language processing tasks, including coreference resolution. However,
traditional methods often fall short in effectively distinguishing referential
relationships due to a lack of integration between syntactic and semantic
information. This study introduces an innovative framework aimed at enhancing
coreference resolution by utilizing pretrained language models. Our approach
combines syntax parsing with semantic role labeling to accurately capture finer
distinctions in referential relationships. By employing state-of-the-art
pretrained models to gather contextual embeddings and applying an attention
mechanism for fine-tuning, we improve the performance of coreference tasks.
Experimental results across diverse datasets show that our method surpasses
conventional coreference resolution systems, achieving notable accuracy in
disambiguating references. This development not only improves coreference
resolution outcomes but also positively impacts other natural language
processing tasks that depend on precise referential understanding.
| no_new_dataset | 0.946051 |
2504.05866 | Sofia Della Penna | Sofia Della Penna, Roberto Natella, Vittorio Orbinato, Lorenzo
Parracino, Luciano Pianese | CTI-HAL: A Human-Annotated Dataset for Cyber Threat Intelligence
Analysis | Accepted for publication in the Workshop on Attackers and Cybercrime
Operations (WACCO 2025), co-located with IEEE European Symposium on Security
and Privacy 2025 | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Organizations are increasingly targeted by Advanced Persistent Threats
(APTs), which involve complex, multi-stage tactics and diverse techniques.
Cyber Threat Intelligence (CTI) sources, such as incident reports and security
blogs, provide valuable insights, but are often unstructured and in natural
language, making it difficult to automatically extract information. Recent
studies have explored the use of AI to perform automatic extraction from CTI
data, leveraging existing CTI datasets for performance evaluation and
fine-tuning. However, they present challenges and limitations that impact their
effectiveness. To overcome these issues, we introduce a novel dataset manually
constructed from CTI reports and structured according to the MITRE ATT&CK
framework. To assess its quality, we conducted an inter-annotator agreement
study using Krippendorff alpha, confirming its reliability. Furthermore, the
dataset was used to evaluate a Large Language Model (LLM) in a real-world
business context, showing promising generalizability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:47:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Della Penna",
"Sofia",
""
],
[
"Natella",
"Roberto",
""
],
[
"Orbinato",
"Vittorio",
""
],
[
"Parracino",
"Lorenzo",
""
],
[
"Pianese",
"Luciano",
""
]
] | TITLE: CTI-HAL: A Human-Annotated Dataset for Cyber Threat Intelligence
Analysis
ABSTRACT: Organizations are increasingly targeted by Advanced Persistent Threats
(APTs), which involve complex, multi-stage tactics and diverse techniques.
Cyber Threat Intelligence (CTI) sources, such as incident reports and security
blogs, provide valuable insights, but are often unstructured and in natural
language, making it difficult to automatically extract information. Recent
studies have explored the use of AI to perform automatic extraction from CTI
data, leveraging existing CTI datasets for performance evaluation and
fine-tuning. However, they present challenges and limitations that impact their
effectiveness. To overcome these issues, we introduce a novel dataset manually
constructed from CTI reports and structured according to the MITRE ATT&CK
framework. To assess its quality, we conducted an inter-annotator agreement
study using Krippendorff alpha, confirming its reliability. Furthermore, the
dataset was used to evaluate a Large Language Model (LLM) in a real-world
business context, showing promising generalizability.
| new_dataset | 0.962214 |
2504.05878 | Xingyuan Li | Xingyuan Li, Ruichao Hou, Tongwei Ren, Gangshan Wu | KAN-SAM: Kolmogorov-Arnold Network Guided Segment Anything Model for
RGB-T Salient Object Detection | This paper is accepted by ICME2025 | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing RGB-thermal salient object detection (RGB-T SOD) methods aim to
identify visually significant objects by leveraging both RGB and thermal
modalities to enable robust performance in complex scenarios, but they often
suffer from limited generalization due to the constrained diversity of
available datasets and the inefficiencies in constructing multi-modal
representations. In this paper, we propose a novel prompt learning-based RGB-T
SOD method, named KAN-SAM, which reveals the potential of visual foundational
models for RGB-T SOD tasks. Specifically, we extend Segment Anything Model 2
(SAM2) for RGB-T SOD by introducing thermal features as guiding prompts through
efficient and accurate Kolmogorov-Arnold Network (KAN) adapters, which
effectively enhance RGB representations and improve robustness. Furthermore, we
introduce a mutually exclusive random masking strategy to reduce reliance on
RGB data and improve generalization. Experimental results on benchmarks
demonstrate superior performance over the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 10:07:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Xingyuan",
""
],
[
"Hou",
"Ruichao",
""
],
[
"Ren",
"Tongwei",
""
],
[
"Wu",
"Gangshan",
""
]
] | TITLE: KAN-SAM: Kolmogorov-Arnold Network Guided Segment Anything Model for
RGB-T Salient Object Detection
ABSTRACT: Existing RGB-thermal salient object detection (RGB-T SOD) methods aim to
identify visually significant objects by leveraging both RGB and thermal
modalities to enable robust performance in complex scenarios, but they often
suffer from limited generalization due to the constrained diversity of
available datasets and the inefficiencies in constructing multi-modal
representations. In this paper, we propose a novel prompt learning-based RGB-T
SOD method, named KAN-SAM, which reveals the potential of visual foundational
models for RGB-T SOD tasks. Specifically, we extend Segment Anything Model 2
(SAM2) for RGB-T SOD by introducing thermal features as guiding prompts through
efficient and accurate Kolmogorov-Arnold Network (KAN) adapters, which
effectively enhance RGB representations and improve robustness. Furthermore, we
introduce a mutually exclusive random masking strategy to reduce reliance on
RGB data and improve generalization. Experimental results on benchmarks
demonstrate superior performance over the state-of-the-art methods.
| no_new_dataset | 0.946151 |
2504.05882 | Luca Barco | Luca Barco, Giacomo Blanco, Gaetano Chiriaco, Alessia Intini, Luigi La
Riccia, Vittorio Scolamiero, Piero Boccardo, Paolo Garza, Fabrizio Dominici | Turin3D: Evaluating Adaptation Strategies under Label Scarcity in Urban
LiDAR Segmentation with Semi-Supervised Techniques | Accepted at CVPRW2025 - USM3D | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D semantic segmentation plays a critical role in urban modelling, enabling
detailed understanding and mapping of city environments. In this paper, we
introduce Turin3D: a new aerial LiDAR dataset for point cloud semantic
segmentation covering an area of around 1.43 km2 in the city centre of Turin
with almost 70M points. We describe the data collection process and compare
Turin3D with others previously proposed in the literature. We did not fully
annotate the dataset due to the complexity and time-consuming nature of the
process; however, a manual annotation process was performed on the validation
and test sets, to enable a reliable evaluation of the proposed techniques. We
first benchmark the performances of several point cloud semantic segmentation
models, trained on the existing datasets, when tested on Turin3D, and then
improve their performances by applying a semi-supervised learning technique
leveraging the unlabelled training set. The dataset will be publicly available
to support research in outdoor point cloud segmentation, with particular
relevance for self-supervised and semi-supervised learning approaches given the
absence of ground truth annotations for the training set.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 10:17:14 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Barco",
"Luca",
""
],
[
"Blanco",
"Giacomo",
""
],
[
"Chiriaco",
"Gaetano",
""
],
[
"Intini",
"Alessia",
""
],
[
"La Riccia",
"Luigi",
""
],
[
"Scolamiero",
"Vittorio",
""
],
[
"Boccardo",
"Piero",
""
],
[
"Garza",
"Paolo",
""
],
[
"Dominici",
"Fabrizio",
""
]
] | TITLE: Turin3D: Evaluating Adaptation Strategies under Label Scarcity in Urban
LiDAR Segmentation with Semi-Supervised Techniques
ABSTRACT: 3D semantic segmentation plays a critical role in urban modelling, enabling
detailed understanding and mapping of city environments. In this paper, we
introduce Turin3D: a new aerial LiDAR dataset for point cloud semantic
segmentation covering an area of around 1.43 km2 in the city centre of Turin
with almost 70M points. We describe the data collection process and compare
Turin3D with others previously proposed in the literature. We did not fully
annotate the dataset due to the complexity and time-consuming nature of the
process; however, a manual annotation process was performed on the validation
and test sets, to enable a reliable evaluation of the proposed techniques. We
first benchmark the performances of several point cloud semantic segmentation
models, trained on the existing datasets, when tested on Turin3D, and then
improve their performances by applying a semi-supervised learning technique
leveraging the unlabelled training set. The dataset will be publicly available
to support research in outdoor point cloud segmentation, with particular
relevance for self-supervised and semi-supervised learning approaches given the
absence of ground truth annotations for the training set.
| new_dataset | 0.963575 |
2504.05888 | Guillaume Gautier | Guillaume Gautier, Alexandre Mercat, Louis Fr\'eneau, Mikko
Pitk\"anen, and Jarno Vanne | UVG-VPC: Voxelized Point Cloud Dataset for Visual Volumetric Video-based
Coding | Point cloud compression;Geometry;Visualization;Three-dimensional
displays;Video sequences;Transform coding;Media;Open dataset;point
cloud;Visual Volumetric Video-based Coding (V3C);Video-based Point Cloud
Compression (V-PCC);Extended Reality (XR) | 2023 15th International Conference on Quality of Multimedia
Experience (QoMEX), Ghent, Belgium, 2023, pp. 244-247 | 10.1109/QoMEX58391.2023.10178589 | null | cs.MM cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Point cloud compression has become a crucial factor in immersive visual media
processing and streaming. This paper presents a new open dataset called UVG-VPC
for the development, evaluation, and validation of MPEG Visual Volumetric
Video-based Coding (V3C) technology. The dataset is distributed under its own
non-commercial license. It consists of 12 point cloud test video sequences of
diverse characteristics with respect to the motion, RGB texture, 3D geometry,
and surface occlusion of the points. Each sequence is 10 seconds long and
comprises 250 frames captured at 25 frames per second. The sequences are
voxelized with a geometry precision of 9 to 12 bits, and the voxel color
attributes are represented as 8-bit RGB values. The dataset also includes
associated normals that make it more suitable for evaluating point cloud
compression solutions. The main objective of releasing the UVG-VPC dataset is
to foster the development of V3C technologies and thereby shape the future in
this field.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 10:27:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gautier",
"Guillaume",
""
],
[
"Mercat",
"Alexandre",
""
],
[
"Fréneau",
"Louis",
""
],
[
"Pitkänen",
"Mikko",
""
],
[
"Vanne",
"Jarno",
""
]
] | TITLE: UVG-VPC: Voxelized Point Cloud Dataset for Visual Volumetric Video-based
Coding
ABSTRACT: Point cloud compression has become a crucial factor in immersive visual media
processing and streaming. This paper presents a new open dataset called UVG-VPC
for the development, evaluation, and validation of MPEG Visual Volumetric
Video-based Coding (V3C) technology. The dataset is distributed under its own
non-commercial license. It consists of 12 point cloud test video sequences of
diverse characteristics with respect to the motion, RGB texture, 3D geometry,
and surface occlusion of the points. Each sequence is 10 seconds long and
comprises 250 frames captured at 25 frames per second. The sequences are
voxelized with a geometry precision of 9 to 12 bits, and the voxel color
attributes are represented as 8-bit RGB values. The dataset also includes
associated normals that make it more suitable for evaluating point cloud
compression solutions. The main objective of releasing the UVG-VPC dataset is
to foster the development of V3C technologies and thereby shape the future in
this field.
| new_dataset | 0.961534 |
2504.05894 | Ivan Svetunkov | Ivan Svetunkov and Anna Sroginis | Why do zeroes happen? A model-based approach for demand classification | 39 pages, 11 figures, 3 tables | null | null | null | cs.LG stat.ME | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Effective demand forecasting is critical for inventory management, production
planning, and decision making across industries. Selecting the appropriate
model and suitable features to efficiently capture patterns in the data is one
of the main challenges in demand forecasting. In reality, this becomes even
more complicated when the recorded sales have zeroes, which can happen
naturally or due to some anomalies, such as stockouts and recording errors.
Mistreating the zeroes can lead to the application of inappropriate forecasting
methods, and thus leading to poor decision making. Furthermore, the demand
itself can have different fundamental characteristics, and being able to
distinguish one type from another might bring substantial benefits in terms of
accuracy and thus decision making. We propose a two-stage model-based
classification framework that in the first step, identifies artificially
occurring zeroes, and then classifies demand to one of the possible types:
regular/intermittent, intermittent smooth/lumpy, fractional/count. The
framework utilises statistical modelling and information criteria to detect
anomalous zeroes and then classify demand into those categories. We then argue
that different types of demand need different features, and show empirically
that they tend to increase the accuracy of the forecasting methods compared to
those applied directly to the dataset without the generated features and the
two-stage framework. Our general practical recommendation based on that is to
use the mixture approach for intermittent demand, capturing the demand sizes
and demand probability separately, as it seems to improve the accuracy of
different forecasting approaches.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 10:45:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Svetunkov",
"Ivan",
""
],
[
"Sroginis",
"Anna",
""
]
] | TITLE: Why do zeroes happen? A model-based approach for demand classification
ABSTRACT: Effective demand forecasting is critical for inventory management, production
planning, and decision making across industries. Selecting the appropriate
model and suitable features to efficiently capture patterns in the data is one
of the main challenges in demand forecasting. In reality, this becomes even
more complicated when the recorded sales have zeroes, which can happen
naturally or due to some anomalies, such as stockouts and recording errors.
Mistreating the zeroes can lead to the application of inappropriate forecasting
methods, and thus leading to poor decision making. Furthermore, the demand
itself can have different fundamental characteristics, and being able to
distinguish one type from another might bring substantial benefits in terms of
accuracy and thus decision making. We propose a two-stage model-based
classification framework that in the first step, identifies artificially
occurring zeroes, and then classifies demand to one of the possible types:
regular/intermittent, intermittent smooth/lumpy, fractional/count. The
framework utilises statistical modelling and information criteria to detect
anomalous zeroes and then classify demand into those categories. We then argue
that different types of demand need different features, and show empirically
that they tend to increase the accuracy of the forecasting methods compared to
those applied directly to the dataset without the generated features and the
two-stage framework. Our general practical recommendation based on that is to
use the mixture approach for intermittent demand, capturing the demand sizes
and demand probability separately, as it seems to improve the accuracy of
different forecasting approaches.
| no_new_dataset | 0.952442 |
2504.05904 | Xiangyu Zheng | Xiangyu Zheng, Wanyun Li, Songcheng He, Xiaoqiang Li, We Zhang | Intrinsic Saliency Guided Trunk-Collateral Network for Unsupervised
Video Object Segmentation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent unsupervised video object segmentation (UVOS) methods predominantly
adopt the motion-appearance paradigm. Mainstream motion-appearance approaches
use either the two-encoder structure to separately encode motion and appearance
features, or the single-encoder structure for joint encoding. However, these
methods fail to properly balance the motion-appearance relationship.
Consequently, even with complex fusion modules for motion-appearance
integration, the extracted suboptimal features degrade the models' overall
performance. Moreover, the quality of optical flow varies across scenarios,
making it insufficient to rely solely on optical flow to achieve high-quality
segmentation results. To address these challenges, we propose the Intrinsic
Saliency guided Trunk-Collateral Network (ISTC-Net), which better balances the
motion-appearance relationship and incorporates model's intrinsic saliency
information to enhance segmentation performance. Specifically, considering that
optical flow maps are derived from RGB images, they share both commonalities
and differences. We propose a novel Trunk-Collateral structure. The shared
trunk backbone captures the motion-appearance commonality, while the collateral
branch learns the uniqueness of motion features. Furthermore, an Intrinsic
Saliency guided Refinement Module (ISRM) is devised to efficiently leverage the
model's intrinsic saliency information to refine high-level features, and
provide pixel-level guidance for motion-appearance fusion, thereby enhancing
performance without additional input. Experimental results show that ISTC-Net
achieved state-of-the-art performance on three UVOS datasets (89.2% J&F on
DAVIS-16, 76% J on YouTube-Objects, 86.4% J on FBMS) and four standard video
salient object detection (VSOD) benchmarks with the notable increase,
demonstrating its effectiveness and superiority over previous methods.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 11:02:14 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zheng",
"Xiangyu",
""
],
[
"Li",
"Wanyun",
""
],
[
"He",
"Songcheng",
""
],
[
"Li",
"Xiaoqiang",
""
],
[
"Zhang",
"We",
""
]
] | TITLE: Intrinsic Saliency Guided Trunk-Collateral Network for Unsupervised
Video Object Segmentation
ABSTRACT: Recent unsupervised video object segmentation (UVOS) methods predominantly
adopt the motion-appearance paradigm. Mainstream motion-appearance approaches
use either the two-encoder structure to separately encode motion and appearance
features, or the single-encoder structure for joint encoding. However, these
methods fail to properly balance the motion-appearance relationship.
Consequently, even with complex fusion modules for motion-appearance
integration, the extracted suboptimal features degrade the models' overall
performance. Moreover, the quality of optical flow varies across scenarios,
making it insufficient to rely solely on optical flow to achieve high-quality
segmentation results. To address these challenges, we propose the Intrinsic
Saliency guided Trunk-Collateral Network (ISTC-Net), which better balances the
motion-appearance relationship and incorporates model's intrinsic saliency
information to enhance segmentation performance. Specifically, considering that
optical flow maps are derived from RGB images, they share both commonalities
and differences. We propose a novel Trunk-Collateral structure. The shared
trunk backbone captures the motion-appearance commonality, while the collateral
branch learns the uniqueness of motion features. Furthermore, an Intrinsic
Saliency guided Refinement Module (ISRM) is devised to efficiently leverage the
model's intrinsic saliency information to refine high-level features, and
provide pixel-level guidance for motion-appearance fusion, thereby enhancing
performance without additional input. Experimental results show that ISTC-Net
achieved state-of-the-art performance on three UVOS datasets (89.2% J&F on
DAVIS-16, 76% J on YouTube-Objects, 86.4% J on FBMS) and four standard video
salient object detection (VSOD) benchmarks with the notable increase,
demonstrating its effectiveness and superiority over previous methods.
| no_new_dataset | 0.953751 |
2504.05908 | Sriram Mandalika | Sriram Mandalika, Lalitha V, Athira Nambiar | PRIMEDrive-CoT: A Precognitive Chain-of-Thought Framework for
Uncertainty-Aware Object Interaction in Driving Scene Scenario | Accepted at The IEEE/CVF Conference on Computer Vision and Pattern
Recognition 2025 - CVPRW | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Driving scene understanding is a critical real-world problem that involves
interpreting and associating various elements of a driving environment, such as
vehicles, pedestrians, and traffic signals. Despite advancements in autonomous
driving, traditional pipelines rely on deterministic models that fail to
capture the probabilistic nature and inherent uncertainty of real-world
driving. To address this, we propose PRIMEDrive-CoT, a novel uncertainty-aware
model for object interaction and Chain-of-Thought (CoT) reasoning in driving
scenarios. In particular, our approach combines LiDAR-based 3D object detection
with multi-view RGB references to ensure interpretable and reliable scene
understanding. Uncertainty and risk assessment, along with object interactions,
are modelled using Bayesian Graph Neural Networks (BGNNs) for probabilistic
reasoning under ambiguous conditions. Interpretable decisions are facilitated
through CoT reasoning, leveraging object dynamics and contextual cues, while
Grad-CAM visualizations highlight attention regions. Extensive evaluations on
the DriveCoT dataset demonstrate that PRIMEDrive-CoT outperforms
state-of-the-art CoT and risk-aware models.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 11:06:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mandalika",
"Sriram",
""
],
[
"V",
"Lalitha",
""
],
[
"Nambiar",
"Athira",
""
]
] | TITLE: PRIMEDrive-CoT: A Precognitive Chain-of-Thought Framework for
Uncertainty-Aware Object Interaction in Driving Scene Scenario
ABSTRACT: Driving scene understanding is a critical real-world problem that involves
interpreting and associating various elements of a driving environment, such as
vehicles, pedestrians, and traffic signals. Despite advancements in autonomous
driving, traditional pipelines rely on deterministic models that fail to
capture the probabilistic nature and inherent uncertainty of real-world
driving. To address this, we propose PRIMEDrive-CoT, a novel uncertainty-aware
model for object interaction and Chain-of-Thought (CoT) reasoning in driving
scenarios. In particular, our approach combines LiDAR-based 3D object detection
with multi-view RGB references to ensure interpretable and reliable scene
understanding. Uncertainty and risk assessment, along with object interactions,
are modelled using Bayesian Graph Neural Networks (BGNNs) for probabilistic
reasoning under ambiguous conditions. Interpretable decisions are facilitated
through CoT reasoning, leveraging object dynamics and contextual cues, while
Grad-CAM visualizations highlight attention regions. Extensive evaluations on
the DriveCoT dataset demonstrate that PRIMEDrive-CoT outperforms
state-of-the-art CoT and risk-aware models.
| no_new_dataset | 0.937153 |
2504.05914 | Abhiram Reddy Yanampally | Abhiram Reddy Yanampally | High-Resource Translation: Turning Abundance into Accessibility | 6 pages, 2 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel approach to constructing an English-to-Telugu
translation model by leveraging transfer learning techniques and addressing the
challenges associated with low-resource languages. Utilizing the Bharat
Parallel Corpus Collection (BPCC) as the primary dataset, the model
incorporates iterative backtranslation to generate synthetic parallel data,
effectively augmenting the training dataset and enhancing the model's
translation capabilities. The research focuses on a comprehensive strategy for
improving model performance through data augmentation, optimization of training
parameters, and the effective use of pre-trained models. These methodologies
aim to create a robust translation system that can handle diverse sentence
structures and linguistic nuances in both English and Telugu. This work
highlights the significance of innovative data handling techniques and the
potential of transfer learning in overcoming limitations posed by sparse
datasets in low-resource languages. The study contributes to the field of
machine translation and seeks to improve communication between English and
Telugu speakers in practical contexts.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 11:09:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yanampally",
"Abhiram Reddy",
""
]
] | TITLE: High-Resource Translation: Turning Abundance into Accessibility
ABSTRACT: This paper presents a novel approach to constructing an English-to-Telugu
translation model by leveraging transfer learning techniques and addressing the
challenges associated with low-resource languages. Utilizing the Bharat
Parallel Corpus Collection (BPCC) as the primary dataset, the model
incorporates iterative backtranslation to generate synthetic parallel data,
effectively augmenting the training dataset and enhancing the model's
translation capabilities. The research focuses on a comprehensive strategy for
improving model performance through data augmentation, optimization of training
parameters, and the effective use of pre-trained models. These methodologies
aim to create a robust translation system that can handle diverse sentence
structures and linguistic nuances in both English and Telugu. This work
highlights the significance of innovative data handling techniques and the
potential of transfer learning in overcoming limitations posed by sparse
datasets in low-resource languages. The study contributes to the field of
machine translation and seeks to improve communication between English and
Telugu speakers in practical contexts.
| no_new_dataset | 0.943504 |
2504.05917 | Solon Pissis | Giulia Bernardini and Huiping Chen and Alessio Conte and Roberto
Grossi and Veronica Guerrini and Grigorios Loukides and Nadia Pisanti and
Solon P. Pissis | Indexing Strings with Utilities | ICDE 2025 (abstract abridged to satisfy arXiv requirements) | null | null | null | cs.DS cs.DB | http://creativecommons.org/licenses/by/4.0/ | Applications in domains ranging from bioinformatics to advertising feature
strings that come with numerical scores (utilities). The utilities quantify the
importance, interest, profit, or risk of the letters occurring at every
position of a string. Motivated by the ever-increasing rate of generating such
data, as well as by their importance in several domains, we introduce Useful
String Indexing (USI), a natural generalization of the classic String Indexing
problem. Given a string $S$ (the text) of length $n$, USI asks for
preprocessing $S$ into a compact data structure supporting the following
queries efficiently: given a shorter string $P$ (the pattern), return the
global utility $U(P)$ of $P$ in $S$, where $U$ is a function that maps any
string $P$ to a utility score based on the utilities of the letters of every
occurrence of $P$ in $S$. Our work also makes the following contributions: (1)
We propose a novel and efficient data structure for USI based on finding the
top-$K$ frequent substrings of $S$. (2) We propose a linear-space data
structure that can be used to mine the top-$K$ frequent substrings of $S$ or to
tune the parameters of the USI data structure. (3) We propose a novel
space-efficient algorithm for estimating the set of the top-$K$ frequent
substrings of $S$, thus improving the construction space of the data structure
for USI. (4) We show that popular space-efficient top-$K$ frequent item mining
strategies employed by state-of-the-art algorithms do not smoothly translate
from items to substrings. (5) Using billion-letter datasets, we experimentally
demonstrate that: (i) our top-$K$ frequent substring mining algorithms are
accurate and scalable, unlike two state-of-the-art methods; and (ii) our USI
data structures are up to $15$ times faster in querying than $4$ nontrivial
baselines while occupying the same space with them.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 11:13:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bernardini",
"Giulia",
""
],
[
"Chen",
"Huiping",
""
],
[
"Conte",
"Alessio",
""
],
[
"Grossi",
"Roberto",
""
],
[
"Guerrini",
"Veronica",
""
],
[
"Loukides",
"Grigorios",
""
],
[
"Pisanti",
"Nadia",
""
],
[
"Pissis",
"and Solon P.",
""
]
] | TITLE: Indexing Strings with Utilities
ABSTRACT: Applications in domains ranging from bioinformatics to advertising feature
strings that come with numerical scores (utilities). The utilities quantify the
importance, interest, profit, or risk of the letters occurring at every
position of a string. Motivated by the ever-increasing rate of generating such
data, as well as by their importance in several domains, we introduce Useful
String Indexing (USI), a natural generalization of the classic String Indexing
problem. Given a string $S$ (the text) of length $n$, USI asks for
preprocessing $S$ into a compact data structure supporting the following
queries efficiently: given a shorter string $P$ (the pattern), return the
global utility $U(P)$ of $P$ in $S$, where $U$ is a function that maps any
string $P$ to a utility score based on the utilities of the letters of every
occurrence of $P$ in $S$. Our work also makes the following contributions: (1)
We propose a novel and efficient data structure for USI based on finding the
top-$K$ frequent substrings of $S$. (2) We propose a linear-space data
structure that can be used to mine the top-$K$ frequent substrings of $S$ or to
tune the parameters of the USI data structure. (3) We propose a novel
space-efficient algorithm for estimating the set of the top-$K$ frequent
substrings of $S$, thus improving the construction space of the data structure
for USI. (4) We show that popular space-efficient top-$K$ frequent item mining
strategies employed by state-of-the-art algorithms do not smoothly translate
from items to substrings. (5) Using billion-letter datasets, we experimentally
demonstrate that: (i) our top-$K$ frequent substring mining algorithms are
accurate and scalable, unlike two state-of-the-art methods; and (ii) our USI
data structures are up to $15$ times faster in querying than $4$ nontrivial
baselines while occupying the same space with them.
| no_new_dataset | 0.948585 |
2504.05923 | Juliett Su\'arez Ferreira | Juliett Su\'arez Ferreira, Marija Slavkovik, Jorge Casillas | Uncovering Fairness through Data Complexity as an Early Indicator | null | null | null | null | cs.LG cs.AI cs.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fairness constitutes a concern within machine learning (ML) applications.
Currently, there is no study on how disparities in classification complexity
between privileged and unprivileged groups could influence the fairness of
solutions, which serves as a preliminary indicator of potential unfairness. In
this work, we investigate this gap, specifically, we focus on synthetic
datasets designed to capture a variety of biases ranging from historical bias
to measurement and representational bias to evaluate how various complexity
metrics differences correlate with group fairness metrics. We then apply
association rule mining to identify patterns that link disproportionate
complexity differences between groups with fairness-related outcomes, offering
data-centric indicators to guide bias mitigation. Our findings are also
validated by their application in real-world problems, providing evidence that
quantifying group-wise classification complexity can uncover early indicators
of potential fairness challenges. This investigation helps practitioners to
proactively address bias in classification tasks.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 11:28:40 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ferreira",
"Juliett Suárez",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"Casillas",
"Jorge",
""
]
] | TITLE: Uncovering Fairness through Data Complexity as an Early Indicator
ABSTRACT: Fairness constitutes a concern within machine learning (ML) applications.
Currently, there is no study on how disparities in classification complexity
between privileged and unprivileged groups could influence the fairness of
solutions, which serves as a preliminary indicator of potential unfairness. In
this work, we investigate this gap, specifically, we focus on synthetic
datasets designed to capture a variety of biases ranging from historical bias
to measurement and representational bias to evaluate how various complexity
metrics differences correlate with group fairness metrics. We then apply
association rule mining to identify patterns that link disproportionate
complexity differences between groups with fairness-related outcomes, offering
data-centric indicators to guide bias mitigation. Our findings are also
validated by their application in real-world problems, providing evidence that
quantifying group-wise classification complexity can uncover early indicators
of potential fairness challenges. This investigation helps practitioners to
proactively address bias in classification tasks.
| new_dataset | 0.949902 |
2504.05957 | Julian Agudelo | Julian Agudelo and Vincent Guigue and Cristina Manfredotti and Hadrien
Piot | Drought forecasting using a hybrid neural architecture for integrating
time series and static data | 10 pages, 3 figures, published as a workshop paper at Tackling
Climate Change with Machine Learning at ICLR 2025, Tackling Climate Change
with Machine Learning is a non-archival workshop | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reliable forecasting is critical for early warning systems and adaptive
drought management. Most previous deep learning approaches focus solely on
homogeneous regions and rely on single-structured data. This paper presents a
hybrid neural architecture that integrates time series and static data,
achieving state-of-the-art performance on the DroughtED dataset. Our results
illustrate the potential of designing neural models for the treatment of
heterogeneous data in climate related tasks and present reliable prediction of
USDM categories, an expert-informed drought metric. Furthermore, this work
validates the potential of DroughtED for enabling location-agnostic training of
deep learning models.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 12:11:34 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Agudelo",
"Julian",
""
],
[
"Guigue",
"Vincent",
""
],
[
"Manfredotti",
"Cristina",
""
],
[
"Piot",
"Hadrien",
""
]
] | TITLE: Drought forecasting using a hybrid neural architecture for integrating
time series and static data
ABSTRACT: Reliable forecasting is critical for early warning systems and adaptive
drought management. Most previous deep learning approaches focus solely on
homogeneous regions and rely on single-structured data. This paper presents a
hybrid neural architecture that integrates time series and static data,
achieving state-of-the-art performance on the DroughtED dataset. Our results
illustrate the potential of designing neural models for the treatment of
heterogeneous data in climate related tasks and present reliable prediction of
USDM categories, an expert-informed drought metric. Furthermore, this work
validates the potential of DroughtED for enabling location-agnostic training of
deep learning models.
| no_new_dataset | 0.943191 |
2504.05966 | Xiaolin Fan | Xiaolin Fan, Yan Wang, Yingying Zhang, Mingkun Bao, Bosen Jia, Dong
Lu, Yifan Gu, Jian Cheng, and Haogang Zhu | AVP-AP: Self-supervised Automatic View Positioning in 3D cardiac CT via
Atlas Prompting | 12 pages, 8 figures, published to TMI | IEEE TRANSACTIONS ON MEDICAL IMAGING, March 2025 | 10.1109/TMI.2025.3554785 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic view positioning is crucial for cardiac computed tomography (CT)
examinations, including disease diagnosis and surgical planning. However, it is
highly challenging due to individual variability and large 3D search space.
Existing work needs labor-intensive and time-consuming manual annotations to
train view-specific models, which are limited to predicting only a fixed set of
planes. However, in real clinical scenarios, the challenge of positioning
semantic 2D slices with any orientation into varying coordinate space in
arbitrary 3D volume remains unsolved. We thus introduce a novel framework,
AVP-AP, the first to use Atlas Prompting for self-supervised Automatic View
Positioning in the 3D CT volume. Specifically, this paper first proposes an
atlas prompting method, which generates a 3D canonical atlas and trains a
network to map slices into their corresponding positions in the atlas space via
a self-supervised manner. Then, guided by atlas prompts corresponding to the
given query images in a reference CT, we identify the coarse positions of
slices in the target CT volume using rigid transformation between the 3D atlas
and target CT volume, effectively reducing the search space. Finally, we refine
the coarse positions by maximizing the similarity between the predicted slices
and the query images in the feature space of a given foundation model. Our
framework is flexible and efficient compared to other methods, outperforming
other methods by 19.8% average structural similarity (SSIM) in arbitrary view
positioning and achieving 9% SSIM in two-chamber view compared to four
radiologists. Meanwhile, experiments on a public dataset validate our
framework's generalizability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 12:24:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Fan",
"Xiaolin",
""
],
[
"Wang",
"Yan",
""
],
[
"Zhang",
"Yingying",
""
],
[
"Bao",
"Mingkun",
""
],
[
"Jia",
"Bosen",
""
],
[
"Lu",
"Dong",
""
],
[
"Gu",
"Yifan",
""
],
[
"Cheng",
"Jian",
""
],
[
"Zhu",
"Haogang",
""
]
] | TITLE: AVP-AP: Self-supervised Automatic View Positioning in 3D cardiac CT via
Atlas Prompting
ABSTRACT: Automatic view positioning is crucial for cardiac computed tomography (CT)
examinations, including disease diagnosis and surgical planning. However, it is
highly challenging due to individual variability and large 3D search space.
Existing work needs labor-intensive and time-consuming manual annotations to
train view-specific models, which are limited to predicting only a fixed set of
planes. However, in real clinical scenarios, the challenge of positioning
semantic 2D slices with any orientation into varying coordinate space in
arbitrary 3D volume remains unsolved. We thus introduce a novel framework,
AVP-AP, the first to use Atlas Prompting for self-supervised Automatic View
Positioning in the 3D CT volume. Specifically, this paper first proposes an
atlas prompting method, which generates a 3D canonical atlas and trains a
network to map slices into their corresponding positions in the atlas space via
a self-supervised manner. Then, guided by atlas prompts corresponding to the
given query images in a reference CT, we identify the coarse positions of
slices in the target CT volume using rigid transformation between the 3D atlas
and target CT volume, effectively reducing the search space. Finally, we refine
the coarse positions by maximizing the similarity between the predicted slices
and the query images in the feature space of a given foundation model. Our
framework is flexible and efficient compared to other methods, outperforming
other methods by 19.8% average structural similarity (SSIM) in arbitrary view
positioning and achieving 9% SSIM in two-chamber view compared to four
radiologists. Meanwhile, experiments on a public dataset validate our
framework's generalizability.
| no_new_dataset | 0.950732 |
2504.05977 | Jakob Christensen | Jakob L{\o}nborg Christensen, Morten Rieger Hannemose, Anders Bjorholm
Dahl, Vedrana Andersen Dahl | Diffusion Based Ambiguous Image Segmentation | Accepted at SCIA25 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation often involves inherent uncertainty due to
variations in expert annotations. Capturing this uncertainty is an important
goal and previous works have used various generative image models for the
purpose of representing the full distribution of plausible expert ground
truths. In this work, we explore the design space of diffusion models for
generative segmentation, investigating the impact of noise schedules,
prediction types, and loss weightings. Notably, we find that making the noise
schedule harder with input scaling significantly improves performance. We
conclude that x- and v-prediction outperform epsilon-prediction, likely because
the diffusion process is in the discrete segmentation domain. Many loss
weightings achieve similar performance as long as they give enough weight to
the end of the diffusion process. We base our experiments on the LIDC-IDRI lung
lesion dataset and obtain state-of-the-art (SOTA) performance. Additionally, we
introduce a randomly cropped variant of the LIDC-IDRI dataset that is better
suited for uncertainty in image segmentation. Our model also achieves SOTA in
this harder setting.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 12:33:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Christensen",
"Jakob Lønborg",
""
],
[
"Hannemose",
"Morten Rieger",
""
],
[
"Dahl",
"Anders Bjorholm",
""
],
[
"Dahl",
"Vedrana Andersen",
""
]
] | TITLE: Diffusion Based Ambiguous Image Segmentation
ABSTRACT: Medical image segmentation often involves inherent uncertainty due to
variations in expert annotations. Capturing this uncertainty is an important
goal and previous works have used various generative image models for the
purpose of representing the full distribution of plausible expert ground
truths. In this work, we explore the design space of diffusion models for
generative segmentation, investigating the impact of noise schedules,
prediction types, and loss weightings. Notably, we find that making the noise
schedule harder with input scaling significantly improves performance. We
conclude that x- and v-prediction outperform epsilon-prediction, likely because
the diffusion process is in the discrete segmentation domain. Many loss
weightings achieve similar performance as long as they give enough weight to
the end of the diffusion process. We base our experiments on the LIDC-IDRI lung
lesion dataset and obtain state-of-the-art (SOTA) performance. Additionally, we
introduce a randomly cropped variant of the LIDC-IDRI dataset that is better
suited for uncertainty in image segmentation. Our model also achieves SOTA in
this harder setting.
| no_new_dataset | 0.913291 |
2504.05992 | Jie Yang | Jie Yang, Chang Su, Yuhan Zhang, Jianjun Zhu and Jianli Wang | Under-Sampled High-Dimensional Data Recovery via Symbiotic Multi-Prior
Tensor Reconstruction | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | The advancement of sensing technology has driven the widespread application
of high-dimensional data. However, issues such as missing entries during
acquisition and transmission negatively impact the accuracy of subsequent
tasks. Tensor reconstruction aims to recover the underlying complete data from
under-sampled observed data by exploring prior information in high-dimensional
data. However, due to insufficient exploration, reconstruction methods still
face challenges when sampling rate is extremely low. This work proposes a
tensor reconstruction method integrating multiple priors to comprehensively
exploit the inherent structure of the data. Specifically, the method combines
learnable tensor decomposition to enforce low-rank constraints of the
reconstructed data, a pre-trained convolutional neural network for smoothing
and denoising, and block-matching and 3D filtering regularization to enhance
the non-local similarity in the reconstructed data. An alternating direction
method of the multipliers algorithm is designed to decompose the resulting
optimization problem into three subproblems for efficient resolution. Extensive
experiments on color images, hyperspectral images, and grayscale videos
datasets demonstrate the superiority of our method in extreme cases as compared
with state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 12:55:18 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yang",
"Jie",
""
],
[
"Su",
"Chang",
""
],
[
"Zhang",
"Yuhan",
""
],
[
"Zhu",
"Jianjun",
""
],
[
"Wang",
"Jianli",
""
]
] | TITLE: Under-Sampled High-Dimensional Data Recovery via Symbiotic Multi-Prior
Tensor Reconstruction
ABSTRACT: The advancement of sensing technology has driven the widespread application
of high-dimensional data. However, issues such as missing entries during
acquisition and transmission negatively impact the accuracy of subsequent
tasks. Tensor reconstruction aims to recover the underlying complete data from
under-sampled observed data by exploring prior information in high-dimensional
data. However, due to insufficient exploration, reconstruction methods still
face challenges when sampling rate is extremely low. This work proposes a
tensor reconstruction method integrating multiple priors to comprehensively
exploit the inherent structure of the data. Specifically, the method combines
learnable tensor decomposition to enforce low-rank constraints of the
reconstructed data, a pre-trained convolutional neural network for smoothing
and denoising, and block-matching and 3D filtering regularization to enhance
the non-local similarity in the reconstructed data. An alternating direction
method of the multipliers algorithm is designed to decompose the resulting
optimization problem into three subproblems for efficient resolution. Extensive
experiments on color images, hyperspectral images, and grayscale videos
datasets demonstrate the superiority of our method in extreme cases as compared
with state-of-the-art methods.
| no_new_dataset | 0.946498 |
2504.05995 | Firoj Alam | Firoj Alam, Md Arid Hasan, Sahinur Rahman Laskar, Mucahid Kutlu,
Shammur Absar Chowdhury | NativQA Framework: Enabling LLMs with Native, Local, and Everyday
Knowledge | LLMs, Native, Multilingual, Language Diversity, Contextual
Understanding, Minority Languages, Culturally Informed, Foundation Models,
Large Language Models | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid advancement of large language models (LLMs) has raised concerns
about cultural bias, fairness, and their applicability in diverse linguistic
and underrepresented regional contexts. To enhance and benchmark the
capabilities of LLMs, there is a need to develop large-scale resources focused
on multilingual, local, and cultural contexts. In this study, we propose a
framework, NativQA, that can seamlessly construct large-scale, culturally and
regionally aligned QA datasets in native languages. The framework utilizes
user-defined seed queries and leverages search engines to collect
location-specific, everyday information. It has been evaluated across 39
locations in 24 countries and in 7 languages, ranging from extremely
low-resource to high-resource languages, which resulted over 300K Question
Answer (QA) pairs. The developed resources can be used for LLM benchmarking and
further fine-tuning. The framework has been made publicly available for the
community (https://gitlab.com/nativqa/nativqa-framework).
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:01:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Alam",
"Firoj",
""
],
[
"Hasan",
"Md Arid",
""
],
[
"Laskar",
"Sahinur Rahman",
""
],
[
"Kutlu",
"Mucahid",
""
],
[
"Chowdhury",
"Shammur Absar",
""
]
] | TITLE: NativQA Framework: Enabling LLMs with Native, Local, and Everyday
Knowledge
ABSTRACT: The rapid advancement of large language models (LLMs) has raised concerns
about cultural bias, fairness, and their applicability in diverse linguistic
and underrepresented regional contexts. To enhance and benchmark the
capabilities of LLMs, there is a need to develop large-scale resources focused
on multilingual, local, and cultural contexts. In this study, we propose a
framework, NativQA, that can seamlessly construct large-scale, culturally and
regionally aligned QA datasets in native languages. The framework utilizes
user-defined seed queries and leverages search engines to collect
location-specific, everyday information. It has been evaluated across 39
locations in 24 countries and in 7 languages, ranging from extremely
low-resource to high-resource languages, which resulted over 300K Question
Answer (QA) pairs. The developed resources can be used for LLM benchmarking and
further fine-tuning. The framework has been made publicly available for the
community (https://gitlab.com/nativqa/nativqa-framework).
| new_dataset | 0.690976 |
2504.06003 | Can Zhang | Can Zhang and Gim Hee Lee | econSG: Efficient and Multi-view Consistent Open-Vocabulary 3D Semantic
Gaussians | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary focus of most recent works on open-vocabulary neural fields is
extracting precise semantic features from the VLMs and then consolidating them
efficiently into a multi-view consistent 3D neural fields representation.
However, most existing works over-trusted SAM to regularize image-level CLIP
without any further refinement. Moreover, several existing works improved
efficiency by dimensionality reduction of semantic features from 2D VLMs before
fusing with 3DGS semantic fields, which inevitably leads to multi-view
inconsistency. In this work, we propose econSG for open-vocabulary semantic
segmentation with 3DGS. Our econSG consists of: 1) A Confidence-region Guided
Regularization (CRR) that mutually refines SAM and CLIP to get the best of both
worlds for precise semantic features with complete and precise boundaries. 2) A
low dimensional contextual space to enforce 3D multi-view consistency while
improving computational efficiency by fusing backprojected multi-view 2D
features and follow by dimensional reduction directly on the fused 3D features
instead of operating on each 2D view separately. Our econSG shows
state-of-the-art performance on four benchmark datasets compared to the
existing methods. Furthermore, we are also the most efficient training among
all the methods.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:12:31 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Can",
""
],
[
"Lee",
"Gim Hee",
""
]
] | TITLE: econSG: Efficient and Multi-view Consistent Open-Vocabulary 3D Semantic
Gaussians
ABSTRACT: The primary focus of most recent works on open-vocabulary neural fields is
extracting precise semantic features from the VLMs and then consolidating them
efficiently into a multi-view consistent 3D neural fields representation.
However, most existing works over-trusted SAM to regularize image-level CLIP
without any further refinement. Moreover, several existing works improved
efficiency by dimensionality reduction of semantic features from 2D VLMs before
fusing with 3DGS semantic fields, which inevitably leads to multi-view
inconsistency. In this work, we propose econSG for open-vocabulary semantic
segmentation with 3DGS. Our econSG consists of: 1) A Confidence-region Guided
Regularization (CRR) that mutually refines SAM and CLIP to get the best of both
worlds for precise semantic features with complete and precise boundaries. 2) A
low dimensional contextual space to enforce 3D multi-view consistency while
improving computational efficiency by fusing backprojected multi-view 2D
features and follow by dimensional reduction directly on the fused 3D features
instead of operating on each 2D view separately. Our econSG shows
state-of-the-art performance on four benchmark datasets compared to the
existing methods. Furthermore, we are also the most efficient training among
all the methods.
| no_new_dataset | 0.948202 |
2504.06004 | Mrityunjoy Gain | Mrityunjoy Gain, Kitae Kim, Avi Deb Raha, Apurba Adhikary, Eui-Nam
Huh, Zhu Han, and Choong Seon Hong | FedFeat+: A Robust Federated Learning Framework Through Federated
Aggregation and Differentially Private Feature-Based Classifier Retraining | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we propose the FedFeat+ framework, which distinctively
separates feature extraction from classification. We develop a two-tiered model
training process: following local training, clients transmit their weights and
some features extracted from the feature extractor from the final local epochs
to the server. The server aggregates these models using the FedAvg method and
subsequently retrains the global classifier utilizing the shared features. The
classifier retraining process enhances the model's understanding of the
holistic view of the data distribution, ensuring better generalization across
diverse datasets. This improved generalization enables the classifier to
adaptively influence the feature extractor during subsequent local training
epochs. We establish a balance between enhancing model accuracy and
safeguarding individual privacy through the implementation of differential
privacy mechanisms. By incorporating noise into the feature vectors shared with
the server, we ensure that sensitive data remains confidential. We present a
comprehensive convergence analysis, along with theoretical reasoning regarding
performance enhancement and privacy preservation. We validate our approach
through empirical evaluations conducted on benchmark datasets, including
CIFAR-10, CIFAR-100, MNIST, and FMNIST, achieving high accuracy while adhering
to stringent privacy guarantees. The experimental results demonstrate that the
FedFeat+ framework, despite using only a lightweight two-layer CNN classifier,
outperforms the FedAvg method in both IID and non-IID scenarios, achieving
accuracy improvements ranging from 3.92 % to 12.34 % across CIFAR-10,
CIFAR-100, and Fashion-MNIST datasets.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:12:38 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gain",
"Mrityunjoy",
""
],
[
"Kim",
"Kitae",
""
],
[
"Raha",
"Avi Deb",
""
],
[
"Adhikary",
"Apurba",
""
],
[
"Huh",
"Eui-Nam",
""
],
[
"Han",
"Zhu",
""
],
[
"Hong",
"Choong Seon",
""
]
] | TITLE: FedFeat+: A Robust Federated Learning Framework Through Federated
Aggregation and Differentially Private Feature-Based Classifier Retraining
ABSTRACT: In this paper, we propose the FedFeat+ framework, which distinctively
separates feature extraction from classification. We develop a two-tiered model
training process: following local training, clients transmit their weights and
some features, extracted by the feature extractor during the final local epochs,
to the server. The server aggregates these models using the FedAvg method and
subsequently retrains the global classifier utilizing the shared features. The
classifier retraining process enhances the model's understanding of the
holistic view of the data distribution, ensuring better generalization across
diverse datasets. This improved generalization enables the classifier to
adaptively influence the feature extractor during subsequent local training
epochs. We establish a balance between enhancing model accuracy and
safeguarding individual privacy through the implementation of differential
privacy mechanisms. By incorporating noise into the feature vectors shared with
the server, we ensure that sensitive data remains confidential. We present a
comprehensive convergence analysis, along with theoretical reasoning regarding
performance enhancement and privacy preservation. We validate our approach
through empirical evaluations conducted on benchmark datasets, including
CIFAR-10, CIFAR-100, MNIST, and FMNIST, achieving high accuracy while adhering
to stringent privacy guarantees. The experimental results demonstrate that the
FedFeat+ framework, despite using only a lightweight two-layer CNN classifier,
outperforms the FedAvg method in both IID and non-IID scenarios, achieving
accuracy improvements ranging from 3.92 % to 12.34 % across CIFAR-10,
CIFAR-100, and Fashion-MNIST datasets.
| no_new_dataset | 0.947914 |
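A minimal sketch of the FedFeat+ flow described in the record above, assuming PyTorch: clients share weights plus Gaussian-noised feature vectors, the server averages the weights (FedAvg) and retrains only the classifier head on the shared features. The clipping bound, noise scale, shapes and training loop are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch
import torch.nn as nn

def dp_noise(features: torch.Tensor, clip: float = 1.0, sigma: float = 0.5) -> torch.Tensor:
    """Gaussian mechanism on shared features: clip per-sample norms, then add noise."""
    norms = features.norm(dim=1, keepdim=True).clamp_min(1e-8)
    clipped = features * (clip / norms).clamp(max=1.0)
    return clipped + torch.randn_like(clipped) * sigma * clip

def fedavg(state_dicts):
    """Uniform FedAvg: element-wise mean of the clients' state dicts."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def retrain_classifier(classifier: nn.Module, feats: torch.Tensor, labels: torch.Tensor,
                       epochs: int = 5, lr: float = 0.01) -> nn.Module:
    """Server-side retraining of the global classifier on the (noised) shared features."""
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(classifier(feats), labels)
        loss.backward()
        opt.step()
    return classifier

# Toy round with two clients sharing 64-dim features for a 10-class problem.
clients = [{"fc.weight": torch.randn(10, 64), "fc.bias": torch.randn(10)} for _ in range(2)]
global_weights = fedavg(clients)
shared_feats = dp_noise(torch.randn(32, 64))
shared_labels = torch.randint(0, 10, (32,))
head = retrain_classifier(nn.Linear(64, 10), shared_feats, shared_labels)
```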
2504.06006 | Roman Kochnev | Roman Kochnev, Arash Torabi Goodarzi, Zofia Antonina Bentyn, Dmitry
Ignatov, Radu Timofte | Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning? | null | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal hyperparameter selection is critical for maximizing neural network
performance, especially as models grow in complexity. This work investigates
the viability of using large language models (LLMs) for hyperparameter
optimization by employing a fine-tuned version of Code Llama. Through
parameter-efficient fine-tuning using LoRA, we adapt the LLM to generate
accurate and efficient hyperparameter recommendations tailored to diverse
neural network architectures. Unlike traditional methods such as Optuna, which
rely on exhaustive trials, the proposed approach achieves competitive or
superior results in terms of Root Mean Square Error (RMSE) while significantly
reducing computational overhead. Our approach highlights that LLM-based
optimization not only matches state-of-the-art methods like Tree-structured
Parzen Estimators but also accelerates the tuning process. This positions LLMs
as a promising alternative to conventional optimization techniques,
particularly for rapid experimentation. Furthermore, the ability to generate
hyperparameters in a single inference step makes this method particularly
well-suited for resource-constrained environments such as edge devices and
mobile applications, where computational efficiency is paramount. The results
confirm that LLMs, beyond their efficiency, offer substantial time savings and
comparable stability, underscoring their value in advancing machine learning
workflows. All generated hyperparameters are included in the LEMUR Neural
Network (NN) Dataset, which is publicly available and serves as an open-source
benchmark for hyperparameter optimization research.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:15:47 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kochnev",
"Roman",
""
],
[
"Goodarzi",
"Arash Torabi",
""
],
[
"Bentyn",
"Zofia Antonina",
""
],
[
"Ignatov",
"Dmitry",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning?
ABSTRACT: Optimal hyperparameter selection is critical for maximizing neural network
performance, especially as models grow in complexity. This work investigates
the viability of using large language models (LLMs) for hyperparameter
optimization by employing a fine-tuned version of Code Llama. Through
parameter-efficient fine-tuning using LoRA, we adapt the LLM to generate
accurate and efficient hyperparameter recommendations tailored to diverse
neural network architectures. Unlike traditional methods such as Optuna, which
rely on exhaustive trials, the proposed approach achieves competitive or
superior results in terms of Root Mean Square Error (RMSE) while significantly
reducing computational overhead. Our approach highlights that LLM-based
optimization not only matches state-of-the-art methods like Tree-structured
Parzen Estimators but also accelerates the tuning process. This positions LLMs
as a promising alternative to conventional optimization techniques,
particularly for rapid experimentation. Furthermore, the ability to generate
hyperparameters in a single inference step makes this method particularly
well-suited for resource-constrained environments such as edge devices and
mobile applications, where computational efficiency is paramount. The results
confirm that LLMs, beyond their efficiency, offer substantial time savings and
comparable stability, underscoring their value in advancing machine learning
workflows. All generated hyperparameters are included in the LEMUR Neural
Network (NN) Dataset, which is publicly available and serves as an open-source
benchmark for hyperparameter optimization research.
| no_new_dataset | 0.951639 |
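The record above describes parameter-efficient LoRA fine-tuning of Code Llama so that hyperparameters can be produced in a single inference step. The sketch below illustrates that setup with the Hugging Face transformers and peft libraries; the checkpoint name, LoRA rank, target modules and prompt format are assumptions rather than the paper's exact configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "codellama/CodeLlama-7b-hf"                     # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)                # only the adapters are trainable
model.print_trainable_parameters()

# After fine-tuning on (architecture description -> hyperparameters) pairs,
# a single generation step yields a recommendation:
prompt = ("Architecture: 3-layer CNN for CIFAR-10 classification.\n"
          "Recommend learning rate, batch size and optimizer:")
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```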
2504.06010 | Stefanos-Iordanis Papadopoulos | Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos,
Panagiotis C. Petrantonakis | Latent Multimodal Reconstruction for Misinformation Detection | null | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by-sa/4.0/ | Multimodal misinformation, such as miscaptioned images, where captions
misrepresent an image's origin, context, or meaning, poses a growing challenge
in the digital age. To support fact-checkers, researchers have been focusing on
creating datasets and developing methods for multimodal misinformation
detection (MMD). Due to the scarcity of large-scale annotated MMD datasets,
recent studies leverage synthetic training data via out-of-context
image-caption pairs or named entity manipulations; altering names, dates, and
locations. However, these approaches often produce simplistic misinformation
that fails to reflect real-world complexity, limiting the robustness of
detection models trained on them. Meanwhile, despite recent advancements, Large
Vision-Language Models (LVLMs) remain underutilized for generating diverse,
realistic synthetic training data for MMD. To address this gap, we introduce
"MisCaption This!", a training dataset comprising LVLM-generated miscaptioned
images. Additionally, we introduce "Latent Multimodal Reconstruction" (LAMAR),
a network trained to reconstruct the embeddings of truthful captions, providing
a strong auxiliary signal to the detection process. To optimize LAMAR, we
explore different training strategies (end-to-end training and large-scale
pre-training) and integration approaches (direct, mask, gate, and attention).
Extensive experiments show that models trained on "MisCaption This!" generalize
better on real-world misinformation, while LAMAR sets new state-of-the-art on
both NewsCLIPpings and VERITE benchmarks; highlighting the potential of
LVLM-generated data and reconstruction-based approaches for advancing MMD. We
release our code at:
https://github.com/stevejpapad/miscaptioned-image-reconstruction
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:16:48 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Papadopoulos",
"Stefanos-Iordanis",
""
],
[
"Koutlis",
"Christos",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Petrantonakis",
"Panagiotis C.",
""
]
] | TITLE: Latent Multimodal Reconstruction for Misinformation Detection
ABSTRACT: Multimodal misinformation, such as miscaptioned images, where captions
misrepresent an image's origin, context, or meaning, poses a growing challenge
in the digital age. To support fact-checkers, researchers have been focusing on
creating datasets and developing methods for multimodal misinformation
detection (MMD). Due to the scarcity of large-scale annotated MMD datasets,
recent studies leverage synthetic training data via out-of-context
image-caption pairs or named entity manipulations; altering names, dates, and
locations. However, these approaches often produce simplistic misinformation
that fails to reflect real-world complexity, limiting the robustness of
detection models trained on them. Meanwhile, despite recent advancements, Large
Vision-Language Models (LVLMs) remain underutilized for generating diverse,
realistic synthetic training data for MMD. To address this gap, we introduce
"MisCaption This!", a training dataset comprising LVLM-generated miscaptioned
images. Additionally, we introduce "Latent Multimodal Reconstruction" (LAMAR),
a network trained to reconstruct the embeddings of truthful captions, providing
a strong auxiliary signal to the detection process. To optimize LAMAR, we
explore different training strategies (end-to-end training and large-scale
pre-training) and integration approaches (direct, mask, gate, and attention).
Extensive experiments show that models trained on "MisCaption This!" generalize
better on real-world misinformation, while LAMAR sets new state-of-the-art on
both NewsCLIPpings and VERITE benchmarks; highlighting the potential of
LVLM-generated data and reconstruction-based approaches for advancing MMD. We
release our code at:
https://github.com/stevejpapad/miscaptioned-image-reconstruction
| new_dataset | 0.961316 |
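A simplified reading of the reconstruction idea in the LAMAR record above: a small network rebuilds the truthful caption embedding from the image-caption pair, and the reconstruction signal feeds the detector. The sketch assumes PyTorch with 512-dimensional CLIP-style embeddings; the architecture, loss and integration variant are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    """Tries to rebuild the truthful caption embedding from the image-caption pair."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_emb: torch.Tensor, cap_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([img_emb, cap_emb], dim=-1))

recon = Reconstructor()
img, cap, truthful = torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512)
pred = recon(img, cap)

# Training-time reconstruction loss, e.g. 1 - cosine similarity to the truthful embedding.
recon_loss = (1 - nn.functional.cosine_similarity(pred, truthful, dim=-1)).mean()

# At detection time, the distance between the reconstruction and the observed
# caption embedding can serve as an auxiliary input to the classifier (direct variant).
detector = nn.Linear(2 * 512 + 1, 2)
error = (pred - cap).norm(dim=-1, keepdim=True)
logits = detector(torch.cat([img, cap, error], dim=-1))
```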
2504.06022 | Luis Denninger | Luis Denninger, Sina Mokhtarzadeh Azar, Juergen Gall | CamContextI2V: Context-aware Controllable Video Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, image-to-video (I2V) diffusion models have demonstrated impressive
scene understanding and generative quality, incorporating image conditions to
guide generation. However, these models primarily animate static images without
extending beyond their provided context. Introducing additional constraints,
such as camera trajectories, can enhance diversity but often degrades visual
quality, limiting their applicability for tasks requiring faithful scene
representation. We propose CamContextI2V, an I2V model that integrates multiple
image conditions with 3D constraints alongside camera control to enrich both
global semantics and fine-grained visual details. This enables more coherent
and context-aware video generation. Moreover, we motivate the necessity of
temporal awareness for an effective context representation. Our comprehensive
study on the RealEstate10K dataset demonstrates improvements in visual quality
and camera controllability. We make our code and models publicly available at:
https://github.com/LDenninger/CamContextI2V.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:26:59 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Denninger",
"Luis",
""
],
[
"Azar",
"Sina Mokhtarzadeh",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: CamContextI2V: Context-aware Controllable Video Generation
ABSTRACT: Recently, image-to-video (I2V) diffusion models have demonstrated impressive
scene understanding and generative quality, incorporating image conditions to
guide generation. However, these models primarily animate static images without
extending beyond their provided context. Introducing additional constraints,
such as camera trajectories, can enhance diversity but often degrades visual
quality, limiting their applicability for tasks requiring faithful scene
representation. We propose CamContextI2V, an I2V model that integrates multiple
image conditions with 3D constraints alongside camera control to enrich both
global semantics and fine-grained visual details. This enables more coherent
and context-aware video generation. Moreover, we motivate the necessity of
temporal awareness for an effective context representation. Our comprehensive
study on the RealEstate10K dataset demonstrates improvements in visual quality
and camera controllability. We make our code and models publicly available at:
https://github.com/LDenninger/CamContextI2V.
| no_new_dataset | 0.950915 |
2504.06039 | Julia Werner | Julia Werner, Christoph Gerum, Jorg Nick, Maxime Le Floch, Franz
Brinkmann, Jochen Hampe, and Oliver Bringmann | Enhanced Anomaly Detection for Capsule Endoscopy Using Ensemble Learning
Strategies | Accepted at the 47th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBS EMBC) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Capsule endoscopy is a method to capture images of the gastrointestinal tract
and screen for diseases which might remain hidden if investigated with standard
endoscopes. Due to the limited size of a video capsule, embedding AI models
directly into the capsule demands careful consideration of the model size and
thus complicates anomaly detection in this field. Furthermore, the scarcity of
available data in this domain poses an ongoing challenge to achieving effective
anomaly detection. Thus, this work introduces an ensemble strategy to address
this challenge in anomaly detection tasks in video capsule endoscopies,
requiring only a small number of individual neural networks during both the
training and inference phases. Ensemble learning combines the predictions of
multiple independently trained neural networks. This has been shown to be highly
effective in enhancing both the accuracy and robustness of machine learning
models. However, this comes at the cost of higher memory usage and increased
computational effort, which quickly becomes prohibitive in many real-world
applications. Instead of applying the same training algorithm to each
individual network, we propose using various loss functions, drawn from the
anomaly detection field, to train each network. The methods are validated on
the two largest publicly available datasets for video capsule endoscopy images,
the Galar and the Kvasir-Capsule dataset. We achieve an AUC score of 76.86% on
the Kvasir-Capsule and an AUC score of 76.98% on the Galar dataset. Our
approach outperforms current baselines with significantly fewer parameters
across all models, which is a crucial step towards incorporating artificial
intelligence into capsule endoscopies.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 13:39:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Werner",
"Julia",
""
],
[
"Gerum",
"Christoph",
""
],
[
"Nick",
"Jorg",
""
],
[
"Floch",
"Maxime Le",
""
],
[
"Brinkmann",
"Franz",
""
],
[
"Hampe",
"Jochen",
""
],
[
"Bringmann",
"Oliver",
""
]
] | TITLE: Enhanced Anomaly Detection for Capsule Endoscopy Using Ensemble Learning
Strategies
ABSTRACT: Capsule endoscopy is a method to capture images of the gastrointestinal tract
and screen for diseases which might remain hidden if investigated with standard
endoscopes. Due to the limited size of a video capsule, embedding AI models
directly into the capsule demands careful consideration of the model size and
thus complicates anomaly detection in this field. Furthermore, the scarcity of
available data in this domain poses an ongoing challenge to achieving effective
anomaly detection. Thus, this work introduces an ensemble strategy to address
this challenge in anomaly detection tasks in video capsule endoscopies,
requiring only a small number of individual neural networks during both the
training and inference phases. Ensemble learning combines the predictions of
multiple independently trained neural networks. This has been shown to be highly
effective in enhancing both the accuracy and robustness of machine learning
models. However, this comes at the cost of higher memory usage and increased
computational effort, which quickly becomes prohibitive in many real-world
applications. Instead of applying the same training algorithm to each
individual network, we propose using various loss functions, drawn from the
anomaly detection field, to train each network. The methods are validated on
the two largest publicly available datasets for video capsule endoscopy images,
the Galar and the Kvasir-Capsule dataset. We achieve an AUC score of 76.86% on
the Kvasir-Capsule and an AUC score of 76.98% on the Galar dataset. Our
approach outperforms current baselines with significantly fewer parameters
across all models, which is a crucial step towards incorporating artificial
intelligence into capsule endoscopies.
| no_new_dataset | 0.945901 |
2504.06055 | Panagiota Rempi | Panagiota Rempi, Sotiris Pelekis, Alexandros Menelaos Tzortzis,
Evangelos Karakolis, Christos Ntanos, Dimitris Askounis | Explainable AI for building energy retrofitting under data scarcity | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Enhancing energy efficiency in residential buildings is a crucial step toward
mitigating climate change and reducing greenhouse gas emissions. Retrofitting
existing buildings, which account for a significant portion of energy
consumption, is critical particularly in regions with outdated and inefficient
building stocks. This study presents an Artificial Intelligence (AI) and
Machine Learning (ML)-based framework to recommend energy efficiency measures
for residential buildings, leveraging accessible building characteristics to
achieve energy class targets. Using Latvia as a case study, the methodology
addresses challenges associated with limited datasets, class imbalance and data
scarcity. The proposed approach integrates Conditional Tabular Generative
Adversarial Networks (CTGAN) to generate synthetic data, enriching and
balancing the dataset. A Multi-Layer Perceptron (MLP) model serves as the
predictive model performing multi-label classification to predict appropriate
retrofit strategies. Explainable Artificial Intelligence (XAI), specifically
SHapley Additive exPlanations (SHAP), ensures transparency and trust by
identifying key features that influence recommendations and guiding feature
engineering choices for improved reliability and performance. The evaluation of
the approach shows that it notably overcomes data limitations, achieving
improvements up to 54% in precision, recall and F1 score. Although this study
focuses on Latvia, the methodology is adaptable to other regions, underscoring
the potential of AI in reducing the complexity and cost of building energy
retrofitting while overcoming data limitations. By facilitating decision-making
processes and promoting stakeholder engagement, this work supports the global
transition toward sustainable energy use in the residential building sector.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:00:08 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Rempi",
"Panagiota",
""
],
[
"Pelekis",
"Sotiris",
""
],
[
"Tzortzis",
"Alexandros Menelaos",
""
],
[
"Karakolis",
"Evangelos",
""
],
[
"Ntanos",
"Christos",
""
],
[
"Askounis",
"Dimitris",
""
]
] | TITLE: Explainable AI for building energy retrofitting under data scarcity
ABSTRACT: Enhancing energy efficiency in residential buildings is a crucial step toward
mitigating climate change and reducing greenhouse gas emissions. Retrofitting
existing buildings, which account for a significant portion of energy
consumption, is critical particularly in regions with outdated and inefficient
building stocks. This study presents an Artificial Intelligence (AI) and
Machine Learning (ML)-based framework to recommend energy efficiency measures
for residential buildings, leveraging accessible building characteristics to
achieve energy class targets. Using Latvia as a case study, the methodology
addresses challenges associated with limited datasets, class imbalance and data
scarcity. The proposed approach integrates Conditional Tabular Generative
Adversarial Networks (CTGAN) to generate synthetic data, enriching and
balancing the dataset. A Multi-Layer Perceptron (MLP) model serves as the
predictive model performing multi-label classification to predict appropriate
retrofit strategies. Explainable Artificial Intelligence (XAI), specifically
SHapley Additive exPlanations (SHAP), ensures transparency and trust by
identifying key features that influence recommendations and guiding feature
engineering choices for improved reliability and performance. The evaluation of
the approach shows that it notably overcomes data limitations, achieving
improvements up to 54% in precision, recall and F1 score. Although this study
focuses on Latvia, the methodology is adaptable to other regions, underscoring
the potential of AI in reducing the complexity and cost of building energy
retrofitting while overcoming data limitations. By facilitating decision-making
processes and promoting stakeholder engagement, this work supports the global
transition toward sustainable energy use in the residential building sector.
| no_new_dataset | 0.947284 |
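A minimal sketch of the pipeline in the record above: augment a scarce tabular dataset with CTGAN-generated rows, then train an MLP for multi-label retrofit recommendations. It assumes the ctgan and scikit-learn packages; the columns, measure labels and hyperparameters are placeholders, and the SHAP explanation step is omitted for brevity.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN
from sklearn.neural_network import MLPClassifier

# Placeholder building-stock table (real features would include area, year built, etc.).
real = pd.DataFrame({
    "heated_area_m2": np.random.uniform(40, 200, 300),
    "construction_year": np.random.randint(1950, 2010, 300),
    "energy_class": np.random.choice(["D", "E", "F"], 300),
})

# Fit CTGAN and enrich the dataset with synthetic rows to ease class imbalance.
synth_model = CTGAN(epochs=100)
synth_model.fit(real, discrete_columns=["energy_class"])
augmented = pd.concat([real, synth_model.sample(300)], ignore_index=True)

# Multi-label targets: which retrofit measures to recommend (placeholder labels).
X = pd.get_dummies(augmented, columns=["energy_class"])
Y = np.random.randint(0, 2, (len(X), 3))   # e.g. [insulation, windows, heat pump]
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, Y)
print(clf.predict(X[:5]))
```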
2504.06069 | Reza Masoudian Saadabad | Hanieh Masoudian Saadabad, Lingraj Kumar, Reza Masoudian Saadabad, and
Maja Colautti | Physics-Constrained Neural Network for Metasurface Optical Response
Prediction | null | null | null | null | physics.optics | http://creativecommons.org/licenses/by/4.0/ | A physics-constrained neural network is presented for predicting the optical
response of metasurfaces. Our approach incorporates physical laws directly into
the neural network architecture and loss function, addressing critical
challenges in the modeling of metasurfaces. Unlike methods that require
specialized weighting strategies or separate architectural branches to handle
different data regimes and phase wrapping discontinuities, this unified
approach effectively addresses phase discontinuities, energy conservation
constraints, and complex gap-dependent behavior. We implement sine-cosine phase
representation with Euclidean normalization as a non-trainable layer within the
network, enabling the model to account for the periodic nature of phase while
enforcing the mathematical constraint $\sin^2 \phi + \cos^2 \phi = 1$. A
Euclidean distance-based loss function in the sine-cosine space ensures a
physically meaningful error metric while preventing discontinuity issues. The
model achieves good, consistent performance with small, imbalanced datasets of
580 and 1075 data points, compared to several thousand typically required by
alternative approaches. This physics-informed approach preserves physical
interpretability while reducing reliance on large datasets and could be
extended to other photonic structures by incorporating additional physical
constraints tailored to specific applications.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:10:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Saadabad",
"Hanieh Masoudian",
""
],
[
"Kumar",
"Lingraj",
""
],
[
"Saadabad",
"Reza Masoudian",
""
],
[
"Colautti",
"Maja",
""
]
] | TITLE: Physics-Constrained Neural Network for Metasurface Optical Response
Prediction
ABSTRACT: A physics-constrained neural network is presented for predicting the optical
response of metasurfaces. Our approach incorporates physical laws directly into
the neural network architecture and loss function, addressing critical
challenges in the modeling of metasurfaces. Unlike methods that require
specialized weighting strategies or separate architectural branches to handle
different data regimes and phase wrapping discontinuities, this unified
approach effectively addresses phase discontinuities, energy conservation
constraints, and complex gap-dependent behavior. We implement sine-cosine phase
representation with Euclidean normalization as a non-trainable layer within the
network, enabling the model to account for the periodic nature of phase while
enforcing the mathematical constraint $\sin^2 \phi + \cos^2 \phi = 1$. A
Euclidean distance-based loss function in the sine-cosine space ensures a
physically meaningful error metric while preventing discontinuity issues. The
model achieves good, consistent performance with small, imbalanced datasets of
580 and 1075 data points, compared to several thousand typically required by
alternative approaches. This physics-informed approach preserves physical
interpretability while reducing reliance on large datasets and could be
extended to other photonic structures by incorporating additional physical
constraints tailored to specific applications.
| no_new_dataset | 0.950041 |
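Two ingredients in the record above are easy to illustrate: a non-trainable sine-cosine output layer normalized so that sin^2(phi) + cos^2(phi) = 1 holds by construction, and a Euclidean-distance loss in (sin, cos) space that sidesteps phase wrapping. A minimal PyTorch sketch follows; the input features and network size are assumptions.

```python
import torch
import torch.nn as nn

class SinCosHead(nn.Module):
    """Non-trainable head: normalizes two raw outputs onto the unit circle,
    so sin^2(phi) + cos^2(phi) = 1 holds by construction."""
    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        return raw / raw.norm(dim=-1, keepdim=True).clamp_min(1e-8)

def sincos_loss(pred: torch.Tensor, phi_true: torch.Tensor) -> torch.Tensor:
    """Euclidean distance in (sin, cos) space; avoids the 2*pi wrap-around
    discontinuity of a direct phase regression loss."""
    target = torch.stack([torch.sin(phi_true), torch.cos(phi_true)], dim=-1)
    return (pred - target).norm(dim=-1).mean()

# Small MLP from (assumed) geometric parameters to the phase head.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), SinCosHead())
x = torch.randn(8, 4)                 # e.g. gap, period, width, thickness (assumed inputs)
phi = torch.rand(8) * 2 * torch.pi    # ground-truth phases in [0, 2*pi)
loss = sincos_loss(model(x), phi)
loss.backward()
```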
2504.06084 | Alexey Gavryushin | Alexey Gavryushin, Xi Wang, Robert J. S. Malate, Chenyu Yang, Xiangyi
Jia, Shubh Goel, Davide Liconti, Ren\'e Zurbr\"ugg, Robert K. Katzschmann,
Marc Pollefeys | MAPLE: Encoding Dexterous Robotic Manipulation Priors Learned From
Egocentric Videos | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale egocentric video datasets capture diverse human activities across
a wide range of scenarios, offering rich and detailed insights into how humans
interact with objects, especially those that require fine-grained dexterous
control. Such complex, dexterous skills with precise controls are crucial for
many robotic manipulation tasks, yet are often insufficiently addressed by
traditional data-driven approaches to robotic manipulation. To address this
gap, we leverage manipulation priors learned from large-scale egocentric video
datasets to improve policy learning for dexterous robotic manipulation tasks.
We present MAPLE, a novel method for dexterous robotic manipulation that
exploits rich manipulation priors to enable efficient policy learning and
better performance on diverse, complex manipulation tasks. Specifically, we
predict hand-object contact points and detailed hand poses at the moment of
hand-object contact and use the learned features to train policies for
downstream manipulation tasks. Experimental results demonstrate the
effectiveness of MAPLE across existing simulation benchmarks, as well as a
newly designed set of challenging simulation tasks, which require fine-grained
object control and complex dexterous skills. The benefits of MAPLE are further
highlighted in real-world experiments using a dexterous robotic hand, whereas
simultaneous evaluation across both simulation and real-world experiments has
remained underexplored in prior work.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:25:25 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gavryushin",
"Alexey",
""
],
[
"Wang",
"Xi",
""
],
[
"Malate",
"Robert J. S.",
""
],
[
"Yang",
"Chenyu",
""
],
[
"Jia",
"Xiangyi",
""
],
[
"Goel",
"Shubh",
""
],
[
"Liconti",
"Davide",
""
],
[
"Zurbrügg",
"René",
""
],
[
"Katzschmann",
"Robert K.",
""
],
[
"Pollefeys",
"Marc",
""
]
] | TITLE: MAPLE: Encoding Dexterous Robotic Manipulation Priors Learned From
Egocentric Videos
ABSTRACT: Large-scale egocentric video datasets capture diverse human activities across
a wide range of scenarios, offering rich and detailed insights into how humans
interact with objects, especially those that require fine-grained dexterous
control. Such complex, dexterous skills with precise controls are crucial for
many robotic manipulation tasks, yet are often insufficiently addressed by
traditional data-driven approaches to robotic manipulation. To address this
gap, we leverage manipulation priors learned from large-scale egocentric video
datasets to improve policy learning for dexterous robotic manipulation tasks.
We present MAPLE, a novel method for dexterous robotic manipulation that
exploits rich manipulation priors to enable efficient policy learning and
better performance on diverse, complex manipulation tasks. Specifically, we
predict hand-object contact points and detailed hand poses at the moment of
hand-object contact and use the learned features to train policies for
downstream manipulation tasks. Experimental results demonstrate the
effectiveness of MAPLE across existing simulation benchmarks, as well as a
newly designed set of challenging simulation tasks, which require fine-grained
object control and complex dexterous skills. The benefits of MAPLE are further
highlighted in real-world experiments using a dexterous robotic hand, whereas
simultaneous evaluation across both simulation and real-world experiments has
remained underexplored in prior work.
| no_new_dataset | 0.936518 |
2504.06088 | Pramit Saha | Divyanshu Mishra, Pramit Saha, He Zhao, Netzahualcoyotl
Hernandez-Cruz, Olga Patey, Aris Papageorghiou, J. Alison Noble | MCAT: Visual Query-Based Localization of Standard Anatomical Clips in
Fetal Ultrasound Videos Using Multi-Tier Class-Aware Token Transformer | Accepted in AAAI 2025 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate standard plane acquisition in fetal ultrasound (US) videos is
crucial for fetal growth assessment, anomaly detection, and adherence to
clinical guidelines. However, manually selecting standard frames is
time-consuming and prone to intra- and inter-sonographer variability. Existing
methods primarily rely on image-based approaches that capture standard frames
and then classify the input frames across different anatomies. This ignores the
dynamic nature of video acquisition and its interpretation. To address these
challenges, we introduce Multi-Tier Class-Aware Token Transformer (MCAT), a
visual query-based video clip localization (VQ-VCL) method, to assist
sonographers by enabling them to capture a quick US sweep. By then providing a
visual query of the anatomy they wish to analyze, MCAT returns the video clip
containing the standard frames for that anatomy, facilitating thorough
screening for potential anomalies. We evaluate MCAT on two ultrasound video
datasets and a natural image VQ-VCL dataset based on Ego4D. Our model
outperforms state-of-the-art methods by 10% and 13% mIoU on the ultrasound
datasets and by 5.35% mIoU on the Ego4D dataset, using 96% fewer tokens. MCAT's
efficiency and accuracy have significant potential implications for public
health, especially in low- and middle-income countries (LMICs), where it may
enhance prenatal care by streamlining standard plane acquisition, simplifying
US-based screening, diagnosis and allowing sonographers to examine more
patients.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:29:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mishra",
"Divyanshu",
""
],
[
"Saha",
"Pramit",
""
],
[
"Zhao",
"He",
""
],
[
"Hernandez-Cruz",
"Netzahualcoyotl",
""
],
[
"Patey",
"Olga",
""
],
[
"Papageorghiou",
"Aris",
""
],
[
"Noble",
"J. Alison",
""
]
] | TITLE: MCAT: Visual Query-Based Localization of Standard Anatomical Clips in
Fetal Ultrasound Videos Using Multi-Tier Class-Aware Token Transformer
ABSTRACT: Accurate standard plane acquisition in fetal ultrasound (US) videos is
crucial for fetal growth assessment, anomaly detection, and adherence to
clinical guidelines. However, manually selecting standard frames is
time-consuming and prone to intra- and inter-sonographer variability. Existing
methods primarily rely on image-based approaches that capture standard frames
and then classify the input frames across different anatomies. This ignores the
dynamic nature of video acquisition and its interpretation. To address these
challenges, we introduce Multi-Tier Class-Aware Token Transformer (MCAT), a
visual query-based video clip localization (VQ-VCL) method, to assist
sonographers by enabling them to capture a quick US sweep. By then providing a
visual query of the anatomy they wish to analyze, MCAT returns the video clip
containing the standard frames for that anatomy, facilitating thorough
screening for potential anomalies. We evaluate MCAT on two ultrasound video
datasets and a natural image VQ-VCL dataset based on Ego4D. Our model
outperforms state-of-the-art methods by 10% and 13% mIoU on the ultrasound
datasets and by 5.35% mIoU on the Ego4D dataset, using 96% fewer tokens. MCAT's
efficiency and accuracy have significant potential implications for public
health, especially in low- and middle-income countries (LMICs), where it may
enhance prenatal care by streamlining standard plane acquisition, simplifying
US-based screening, diagnosis and allowing sonographers to examine more
patients.
| no_new_dataset | 0.949295 |
2504.06099 | \v{S}imon Bil\'ik | Samuel Bielik, Simon Bilik | Towards Varroa destructor mite detection using a narrow spectra
illumination | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on the development and modification of a beehive
monitoring device and Varroa destructor detection on the bees with the help of
hyperspectral imagery while utilizing a U-net semantic segmentation
architecture and conventional computer vision methods. The main objectives
were to collect a dataset of bees and mites, and propose the computer vision
model which can achieve the detection between bees and mites.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:41:42 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bielik",
"Samuel",
""
],
[
"Bilik",
"Simon",
""
]
] | TITLE: Towards Varroa destructor mite detection using a narrow spectra
illumination
ABSTRACT: This paper focuses on the development and modification of a beehive
monitoring device and Varroa destructor detection on the bees with the help of
hyperspectral imagery while utilizing a U-net, semantic segmentation
architecture, and conventional computer vision methods. The main objectives
were to collect a dataset of bees and mites, and propose the computer vision
model which can achieve the detection between bees and mites.
| no_new_dataset | 0.839668 |
2504.06102 | Eric Wagner | Eric Wagner and Lennart Bader and Konrad Wolsing and Martin Serror | Sherlock: A Dataset for Process-aware Intrusion Detection Research on
Power Grid Networks | accepted at CODASPY'25 | null | 10.1145/3714393.3726006 | null | cs.CR cs.NI | http://creativecommons.org/licenses/by/4.0/ | Physically distributed components and legacy protocols make the protection of
power grids against increasing cyberattack threats challenging. Infamously, the
2015 and 2016 blackouts in Ukraine were caused by cyberattacks, and the German
Federal Office for Information Security (BSI) recorded over 200 cyber incidents
against the German energy sector between 2023 and 2024. Intrusion detection
promises to quickly detect such attacks and mitigate the worst consequences.
However, public datasets of realistic scenarios are vital to evaluate these
systems. This paper introduces Sherlock, a dataset generated with the
co-simulator Wattson. In total, Sherlock covers three scenarios with various
attacks manipulating the process state by injecting malicious commands or
manipulating measurement values. We additionally test five recently-published
intrusion detection systems on Sherlock, highlighting specific challenges for
intrusion detection in power grids. Dataset and documentation are available at
https://sherlock.wattson.it/.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:46:35 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wagner",
"Eric",
""
],
[
"Bader",
"Lennart",
""
],
[
"Wolsing",
"Konrad",
""
],
[
"Serror",
"Martin",
""
]
] | TITLE: Sherlock: A Dataset for Process-aware Intrusion Detection Research on
Power Grid Networks
ABSTRACT: Physically distributed components and legacy protocols make the protection of
power grids against increasing cyberattack threats challenging. Infamously, the
2015 and 2016 blackouts in Ukraine were caused by cyberattacks, and the German
Federal Office for Information Security (BSI) recorded over 200 cyber incidents
against the German energy sector between 2023 and 2024. Intrusion detection
promises to quickly detect such attacks and mitigate the worst consequences.
However, public datasets of realistic scenarios are vital to evaluate these
systems. This paper introduces Sherlock, a dataset generated with the
co-simulator Wattson. In total, Sherlock covers three scenarios with various
attacks manipulating the process state by injecting malicious commands or
manipulating measurement values. We additionally test five recently-published
intrusion detection systems on Sherlock, highlighting specific challenges for
intrusion detection in power grids. Dataset and documentation are available at
https://sherlock.wattson.it/.
| new_dataset | 0.955068 |
2504.06105 | Abinav Kalyanasundaram | Abinav Kalyanasundaram, Karthikeyan Chandra Sekaran, Philipp Stauber,
Michael Lange, Wolfgang Utschick and Michael Botsch | Uncertainty-Aware Hybrid Machine Learning in Virtual Sensors for Vehicle
Sideslip Angle Estimation | Accepted at the 2025 IEEE Intelligent Vehicles Symposium (IV) | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Precise vehicle state estimation is crucial for safe and reliable autonomous
driving. The number of measurable states and their precision offered by the
onboard vehicle sensor system are often constrained by cost. For instance,
measuring critical quantities such as the Vehicle Sideslip Angle (VSA) poses
significant commercial challenges using current optical sensors. This paper
addresses these limitations by focusing on the development of high-performance
virtual sensors to enhance vehicle state estimation for active safety. The
proposed Uncertainty-Aware Hybrid Learning (UAHL) architecture integrates a
machine learning model with vehicle motion models to estimate VSA directly from
onboard sensor data. A key aspect of the UAHL architecture is its focus on
uncertainty quantification for individual model estimates and hybrid fusion.
These mechanisms enable the dynamic weighting of uncertainty-aware predictions
from machine learning and vehicle motion models to produce accurate and
reliable hybrid VSA estimates. This work also presents a novel dataset named
Real-world Vehicle State Estimation Dataset (ReV-StED), comprising synchronized
measurements from advanced vehicle dynamic sensors. The experimental results
demonstrate the superior performance of the proposed method for VSA estimation,
highlighting UAHL as a promising architecture for advancing virtual sensors and
enhancing active safety in autonomous vehicles.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 14:49:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kalyanasundaram",
"Abinav",
""
],
[
"Sekaran",
"Karthikeyan Chandra",
""
],
[
"Stauber",
"Philipp",
""
],
[
"Lange",
"Michael",
""
],
[
"Utschick",
"Wolfgang",
""
],
[
"Botsch",
"Michael",
""
]
] | TITLE: Uncertainty-Aware Hybrid Machine Learning in Virtual Sensors for Vehicle
Sideslip Angle Estimation
ABSTRACT: Precise vehicle state estimation is crucial for safe and reliable autonomous
driving. The number of measurable states and their precision offered by the
onboard vehicle sensor system are often constrained by cost. For instance,
measuring critical quantities such as the Vehicle Sideslip Angle (VSA) poses
significant commercial challenges using current optical sensors. This paper
addresses these limitations by focusing on the development of high-performance
virtual sensors to enhance vehicle state estimation for active safety. The
proposed Uncertainty-Aware Hybrid Learning (UAHL) architecture integrates a
machine learning model with vehicle motion models to estimate VSA directly from
onboard sensor data. A key aspect of the UAHL architecture is its focus on
uncertainty quantification for individual model estimates and hybrid fusion.
These mechanisms enable the dynamic weighting of uncertainty-aware predictions
from machine learning and vehicle motion models to produce accurate and
reliable hybrid VSA estimates. This work also presents a novel dataset named
Real-world Vehicle State Estimation Dataset (ReV-StED), comprising synchronized
measurements from advanced vehicle dynamic sensors. The experimental results
demonstrate the superior performance of the proposed method for VSA estimation,
highlighting UAHL as a promising architecture for advancing virtual sensors and
enhancing active safety in autonomous vehicles.
| new_dataset | 0.955569 |
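One simple way to realize the dynamic, uncertainty-aware weighting described in the record above is inverse-variance fusion of the machine-learning and vehicle-motion-model estimates; the sketch below uses placeholder numbers and may differ from the actual UAHL fusion rule.

```python
def fuse(est_ml: float, var_ml: float, est_model: float, var_model: float) -> float:
    """Inverse-variance weighted combination of two sideslip-angle estimates."""
    w_ml, w_model = 1.0 / var_ml, 1.0 / var_model
    return (w_ml * est_ml + w_model * est_model) / (w_ml + w_model)

# Placeholder values in radians: the lower-variance estimate receives more weight.
vsa = fuse(est_ml=0.031, var_ml=4e-4, est_model=0.025, var_model=1e-4)
print(f"fused VSA estimate: {vsa:.4f} rad")
```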
2504.06116 | Davide Sferrazza | Davide Sferrazza, Gabriele Berton, Gabriele Trivigno, Carlo Masone | To Match or Not to Match: Revisiting Image Matching for Reliable Visual
Place Recognition | CVPRW 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Place Recognition (VPR) is a critical task in computer vision,
traditionally enhanced by re-ranking retrieval results with image matching.
However, recent advancements in VPR methods have significantly improved
performance, challenging the necessity of re-ranking. In this work, we show
that modern retrieval systems often reach a point where re-ranking can degrade
results, as current VPR datasets are largely saturated. We propose using image
matching as a verification step to assess retrieval confidence, demonstrating
that inlier counts can reliably predict when re-ranking is beneficial. Our
findings shift the paradigm of retrieval pipelines, offering insights for more
robust and adaptive VPR systems.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:10:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sferrazza",
"Davide",
""
],
[
"Berton",
"Gabriele",
""
],
[
"Trivigno",
"Gabriele",
""
],
[
"Masone",
"Carlo",
""
]
] | TITLE: To Match or Not to Match: Revisiting Image Matching for Reliable Visual
Place Recognition
ABSTRACT: Visual Place Recognition (VPR) is a critical task in computer vision,
traditionally enhanced by re-ranking retrieval results with image matching.
However, recent advancements in VPR methods have significantly improved
performance, challenging the necessity of re-ranking. In this work, we show
that modern retrieval systems often reach a point where re-ranking can degrade
results, as current VPR datasets are largely saturated. We propose using image
matching as a verification step to assess retrieval confidence, demonstrating
that inlier counts can reliably predict when re-ranking is beneficial. Our
findings shift the paradigm of retrieval pipelines, offering insights for more
robust and adaptive VPR systems.
| no_new_dataset | 0.951278 |
2504.06120 | Yuanpei Liu | Yuanpei Liu, Zhenqi He, Kai Han | Hyperbolic Category Discovery | Accepted as a conference paper at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generalized Category Discovery (GCD) is an intriguing open-world problem that
has garnered increasing attention. Given a dataset that includes both labelled
and unlabelled images, GCD aims to categorize all images in the unlabelled
subset, regardless of whether they belong to known or unknown classes. In GCD,
the common practice typically involves applying a spherical projection operator
at the end of the self-supervised pretrained backbone, operating within
Euclidean or spherical space. However, both of these spaces have been shown to
be suboptimal for encoding samples that possess hierarchical structures. In
contrast, hyperbolic space exhibits exponential volume growth relative to
radius, making it inherently strong at capturing the hierarchical structure of
samples from both seen and unseen categories. Therefore, we propose to tackle
the category discovery challenge in the hyperbolic space. We introduce HypCD, a
simple \underline{Hyp}erbolic framework for learning hierarchy-aware
representations and classifiers for generalized \underline{C}ategory
\underline{D}iscovery. HypCD first transforms the Euclidean embedding space of
the backbone network into hyperbolic space, facilitating subsequent
representation and classification learning by considering both hyperbolic
distance and the angle between samples. This approach is particularly helpful
for knowledge transfer from known to unknown categories in GCD. We thoroughly
evaluate HypCD on public GCD benchmarks, by applying it to various baseline and
state-of-the-art methods, consistently achieving significant improvements.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:12:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Yuanpei",
""
],
[
"He",
"Zhenqi",
""
],
[
"Han",
"Kai",
""
]
] | TITLE: Hyperbolic Category Discovery
ABSTRACT: Generalized Category Discovery (GCD) is an intriguing open-world problem that
has garnered increasing attention. Given a dataset that includes both labelled
and unlabelled images, GCD aims to categorize all images in the unlabelled
subset, regardless of whether they belong to known or unknown classes. In GCD,
the common practice typically involves applying a spherical projection operator
at the end of the self-supervised pretrained backbone, operating within
Euclidean or spherical space. However, both of these spaces have been shown to
be suboptimal for encoding samples that possess hierarchical structures. In
contrast, hyperbolic space exhibits exponential volume growth relative to
radius, making it inherently strong at capturing the hierarchical structure of
samples from both seen and unseen categories. Therefore, we propose to tackle
the category discovery challenge in the hyperbolic space. We introduce HypCD, a
simple \underline{Hyp}erbolic framework for learning hierarchy-aware
representations and classifiers for generalized \underline{C}ategory
\underline{D}iscovery. HypCD first transforms the Euclidean embedding space of
the backbone network into hyperbolic space, facilitating subsequent
representation and classification learning by considering both hyperbolic
distance and the angle between samples. This approach is particularly helpful
for knowledge transfer from known to unknown categories in GCD. We thoroughly
evaluate HypCD on public GCD benchmarks, by applying it to various baseline and
state-of-the-art methods, consistently achieving significant improvements.
| no_new_dataset | 0.944177 |
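The HypCD record above replaces the usual spherical projection with a hyperbolic one and reasons over hyperbolic distances and angles. The sketch below maps Euclidean backbone embeddings into a Poincare ball via the exponential map at the origin and computes geodesic distances; the curvature, scaling and feature dimension are assumptions.

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin of a Poincare ball with curvature c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def mobius_add(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Mobius addition, the ball's analogue of vector addition."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den.clamp_min(1e-6)

def hyp_dist(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Geodesic distance between points in the Poincare ball."""
    sqrt_c = c ** 0.5
    diff = mobius_add(-x, y, c).norm(dim=-1).clamp(max=1.0 - 1e-5)
    return (2.0 / sqrt_c) * torch.atanh(sqrt_c * diff)

# Map (assumed) ViT backbone features into hyperbolic space and compare samples.
feats = torch.nn.functional.normalize(torch.randn(16, 768), dim=-1)
z = expmap0(0.1 * feats)      # scale down so points stay well inside the ball
d = hyp_dist(z[:1], z)        # distances from the first sample to all others
```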
2504.06121 | Yuhang Ma | Ronghui Zhang, Yuhang Ma, Tengfei Li, Ziyu Lin, Yueying Wu, Junzhou
Chen, Lin Zhang, Jia Hu, Tony Z. Qiu and Konghui Guo | A Robust Real-Time Lane Detection Method with Fog-Enhanced Feature
Fusion for Foggy Conditions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lane detection is a critical component of Advanced Driver Assistance Systems
(ADAS). Existing lane detection algorithms generally perform well under
favorable weather conditions. However, their performance degrades significantly
in adverse conditions, such as fog, which increases the risk of traffic
accidents. This challenge is compounded by the lack of specialized datasets and
methods designed for foggy environments. To address this, we introduce the
FoggyLane dataset, captured in real-world foggy scenarios, and synthesize two
additional datasets, FoggyCULane and FoggyTusimple, from existing popular lane
detection datasets. Furthermore, we propose a robust Fog-Enhanced Network for
lane detection, incorporating a Global Feature Fusion Module (GFFM) to capture
global relationships in foggy images, a Kernel Feature Fusion Module (KFFM) to
model the structural and positional relationships of lane instances, and a
Low-level Edge Enhanced Module (LEEM) to address missing edge details in foggy
conditions. Comprehensive experiments demonstrate that our method achieves
state-of-the-art performance, with F1-scores of 95.04 on FoggyLane, 79.85 on
FoggyCULane, and 96.95 on FoggyTusimple. Additionally, with TensorRT
acceleration, the method reaches a processing speed of 38.4 FPS on the NVIDIA
Jetson AGX Orin, confirming its real-time capabilities and robustness in foggy
environments.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:13:01 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Ronghui",
""
],
[
"Ma",
"Yuhang",
""
],
[
"Li",
"Tengfei",
""
],
[
"Lin",
"Ziyu",
""
],
[
"Wu",
"Yueying",
""
],
[
"Chen",
"Junzhou",
""
],
[
"Zhang",
"Lin",
""
],
[
"Hu",
"Jia",
""
],
[
"Qiu",
"Tony Z.",
""
],
[
"Guo",
"Konghui",
""
]
] | TITLE: A Robust Real-Time Lane Detection Method with Fog-Enhanced Feature
Fusion for Foggy Conditions
ABSTRACT: Lane detection is a critical component of Advanced Driver Assistance Systems
(ADAS). Existing lane detection algorithms generally perform well under
favorable weather conditions. However, their performance degrades significantly
in adverse conditions, such as fog, which increases the risk of traffic
accidents. This challenge is compounded by the lack of specialized datasets and
methods designed for foggy environments. To address this, we introduce the
FoggyLane dataset, captured in real-world foggy scenarios, and synthesize two
additional datasets, FoggyCULane and FoggyTusimple, from existing popular lane
detection datasets. Furthermore, we propose a robust Fog-Enhanced Network for
lane detection, incorporating a Global Feature Fusion Module (GFFM) to capture
global relationships in foggy images, a Kernel Feature Fusion Module (KFFM) to
model the structural and positional relationships of lane instances, and a
Low-level Edge Enhanced Module (LEEM) to address missing edge details in foggy
conditions. Comprehensive experiments demonstrate that our method achieves
state-of-the-art performance, with F1-scores of 95.04 on FoggyLane, 79.85 on
FoggyCULane, and 96.95 on FoggyTusimple. Additionally, with TensorRT
acceleration, the method reaches a processing speed of 38.4 FPS on the NVIDIA
Jetson AGX Orin, confirming its real-time capabilities and robustness in foggy
environments.
| new_dataset | 0.974166 |
2504.06136 | Movina Moses | Movina Moses, Mohab Elkaref, James Barry, Shinnosuke Tanaka, Vishnudev
Kuruvanthodi, Nathan Herr, Campbell D Watson, Geeth De Mel | QGen Studio: An Adaptive Question-Answer Generation, Training and
Evaluation Platform | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present QGen Studio: an adaptive question-answer generation, training, and
evaluation platform. QGen Studio enables users to leverage large language
models (LLMs) to create custom question-answer datasets and fine-tune models on
this synthetic data. It features a dataset viewer and model explorer to
streamline this process. The dataset viewer provides key metrics and visualizes
the context from which the QA pairs are generated, offering insights into data
quality. The model explorer supports model comparison, allowing users to
contrast the performance of their trained LLMs against other models, supporting
performance benchmarking and refinement. QGen Studio delivers an interactive,
end-to-end solution for generating QA datasets and training scalable,
domain-adaptable models. The studio will be open-sourced soon, allowing users
to deploy it locally.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:32:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Moses",
"Movina",
""
],
[
"Elkaref",
"Mohab",
""
],
[
"Barry",
"James",
""
],
[
"Tanaka",
"Shinnosuke",
""
],
[
"Kuruvanthodi",
"Vishnudev",
""
],
[
"Herr",
"Nathan",
""
],
[
"Watson",
"Campbell D",
""
],
[
"De Mel",
"Geeth",
""
]
] | TITLE: QGen Studio: An Adaptive Question-Answer Generation, Training and
Evaluation Platform
ABSTRACT: We present QGen Studio: an adaptive question-answer generation, training, and
evaluation platform. QGen Studio enables users to leverage large language
models (LLMs) to create custom question-answer datasets and fine-tune models on
this synthetic data. It features a dataset viewer and model explorer to
streamline this process. The dataset viewer provides key metrics and visualizes
the context from which the QA pairs are generated, offering insights into data
quality. The model explorer supports model comparison, allowing users to
contrast the performance of their trained LLMs against other models, supporting
performance benchmarking and refinement. QGen Studio delivers an interactive,
end-to-end solution for generating QA datasets and training scalable,
domain-adaptable models. The studio will be open-sourced soon, allowing users
to deploy it locally.
| no_new_dataset | 0.914596 |
2504.06148 | Xiangxi Zheng | Xiangxi Zheng, Linjie Li, Zhengyuan Yang, Ping Yu, Alex Jinpeng Wang,
Rui Yan, Yuan Yao, Lijuan Wang | V-MAGE: A Game Evaluation Framework for Assessing Visual-Centric
Capabilities in Multimodal Large Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Multimodal Large Language Models (MLLMs) have led to
significant improvements across various multimodal benchmarks. However, as
evaluations shift from static datasets to open-world, dynamic environments,
current game-based benchmarks remain inadequate because they lack
visual-centric tasks and fail to assess the diverse reasoning skills required
for real-world decision-making. To address this, we introduce Visual-centric
Multiple Abilities Game Evaluation (V-MAGE), a game-based evaluation framework
designed to assess visual reasoning capabilities of MLLMs. V-MAGE features five
diverse games with 30+ handcrafted levels, testing models on core visual skills
such as positioning, trajectory tracking, timing, and visual memory, alongside
higher-level reasoning like long-term planning and deliberation. We use V-MAGE
to evaluate leading MLLMs, revealing significant challenges in their visual
perception and reasoning. In all game environments, the top-performing MLLMs,
as determined by Elo rating comparisons, exhibit a substantial performance gap
compared to humans. Our findings highlight critical limitations, including
various types of perceptual errors made by the models, and suggest potential
avenues for improvement from an agent-centric perspective, such as refining
agent strategies and addressing perceptual inaccuracies. Code is available at
https://github.com/CSU-JPG/V-MAGE.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:43:01 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zheng",
"Xiangxi",
""
],
[
"Li",
"Linjie",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Yu",
"Ping",
""
],
[
"Wang",
"Alex Jinpeng",
""
],
[
"Yan",
"Rui",
""
],
[
"Yao",
"Yuan",
""
],
[
"Wang",
"Lijuan",
""
]
] | TITLE: V-MAGE: A Game Evaluation Framework for Assessing Visual-Centric
Capabilities in Multimodal Large Language Models
ABSTRACT: Recent advancements in Multimodal Large Language Models (MLLMs) have led to
significant improvements across various multimodal benchmarks. However, as
evaluations shift from static datasets to open-world, dynamic environments,
current game-based benchmarks remain inadequate because they lack
visual-centric tasks and fail to assess the diverse reasoning skills required
for real-world decision-making. To address this, we introduce Visual-centric
Multiple Abilities Game Evaluation (V-MAGE), a game-based evaluation framework
designed to assess visual reasoning capabilities of MLLMs. V-MAGE features five
diverse games with 30+ handcrafted levels, testing models on core visual skills
such as positioning, trajectory tracking, timing, and visual memory, alongside
higher-level reasoning like long-term planning and deliberation. We use V-MAGE
to evaluate leading MLLMs, revealing significant challenges in their visual
perception and reasoning. In all game environments, the top-performing MLLMs,
as determined by Elo rating comparisons, exhibit a substantial performance gap
compared to humans. Our findings highlight critical limitations, including
various types of perceptual errors made by the models, and suggest potential
avenues for improvement from an agent-centric perspective, such as refining
agent strategies and addressing perceptual inaccuracies. Code is available at
https://github.com/CSU-JPG/V-MAGE.
| no_new_dataset | 0.943243 |
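Aside on the V-MAGE record above: it ranks MLLMs via Elo rating comparisons against humans. For readers unfamiliar with the mechanism, here is a minimal Python sketch of a standard pairwise Elo update; the K-factor, starting ratings, and outcome values are illustrative assumptions, not settings taken from the paper.

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Standard Elo update after one head-to-head comparison.

    score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a draw.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b


# Illustrative usage: both models start at 1000; model A wins one game level.
ra, rb = elo_update(1000.0, 1000.0, 1.0)
print(round(ra, 1), round(rb, 1))  # 1016.0 984.0
```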
2504.06153 | Akash Kumar | Akash Kumar, Ashlesha Kumar, Vibhav Vineet, Yogesh S Rawat | A Large-Scale Analysis on Contextual Self-Supervised Video
Representation Learning | CVPR'25 Workshop: 6th Data-Efficient Workshop | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Self-supervised learning has emerged as a powerful paradigm for label-free
model pretraining, particularly in the video domain, where manual annotation is
costly and time-intensive. However, existing self-supervised approaches employ
diverse experimental setups, making direct comparisons challenging due to the
absence of a standardized benchmark. In this work, we establish a unified
benchmark that enables fair comparisons across different methods. Additionally,
we systematically investigate five critical aspects of self-supervised learning
in videos: (1) dataset size, (2) model complexity, (3) data distribution, (4)
data noise, and (5) feature representations. To facilitate this study, we
evaluate six self-supervised learning methods across six network architectures,
conducting extensive experiments on five benchmark datasets and assessing
performance on two distinct downstream tasks. Our analysis reveals key insights
into the interplay between pretraining strategies, dataset characteristics,
pretext tasks, and model architectures. Furthermore, we extend these findings
to Video Foundation Models (ViFMs), demonstrating their relevance in
large-scale video representation learning. Finally, leveraging these insights,
we propose a novel approach that significantly reduces training data
requirements while surpassing state-of-the-art methods that rely on 10% more
pretraining data. We believe this work will guide future research toward a
deeper understanding of self-supervised video representation learning and its
broader implications.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:47:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kumar",
"Akash",
""
],
[
"Kumar",
"Ashlesha",
""
],
[
"Vineet",
"Vibhav",
""
],
[
"Rawat",
"Yogesh S",
""
]
] | TITLE: A Large-Scale Analysis on Contextual Self-Supervised Video
Representation Learning
ABSTRACT: Self-supervised learning has emerged as a powerful paradigm for label-free
model pretraining, particularly in the video domain, where manual annotation is
costly and time-intensive. However, existing self-supervised approaches employ
diverse experimental setups, making direct comparisons challenging due to the
absence of a standardized benchmark. In this work, we establish a unified
benchmark that enables fair comparisons across different methods. Additionally,
we systematically investigate five critical aspects of self-supervised learning
in videos: (1) dataset size, (2) model complexity, (3) data distribution, (4)
data noise, and (5) feature representations. To facilitate this study, we
evaluate six self-supervised learning methods across six network architectures,
conducting extensive experiments on five benchmark datasets and assessing
performance on two distinct downstream tasks. Our analysis reveals key insights
into the interplay between pretraining strategies, dataset characteristics,
pretext tasks, and model architectures. Furthermore, we extend these findings
to Video Foundation Models (ViFMs), demonstrating their relevance in
large-scale video representation learning. Finally, leveraging these insights,
we propose a novel approach that significantly reduces training data
requirements while surpassing state-of-the-art methods that rely on 10% more
pretraining data. We believe this work will guide future research toward a
deeper understanding of self-supervised video representation learning and its
broader implications.
| no_new_dataset | 0.946646 |
2504.06156 | Chuanyu Li | Fangchen Liu, Chuanyu Li, Yihua Qin, Ankit Shaw, Jing Xu, Pieter
Abbeel, Rui Chen | ViTaMIn: Learning Contact-Rich Tasks Through Robot-Free Visuo-Tactile
Manipulation Interface | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Tactile information plays a crucial role for humans and robots to interact
effectively with their environment, particularly for tasks requiring the
understanding of contact properties. Solving such dexterous manipulation tasks
often relies on imitation learning from demonstration datasets, which are
typically collected via teleoperation systems and often demand substantial time
and effort. To address these challenges, we present ViTaMIn, an embodiment-free
manipulation interface that seamlessly integrates visual and tactile sensing
into a hand-held gripper, enabling data collection without the need for
teleoperation. Our design employs a compliant Fin Ray gripper with tactile
sensing, allowing operators to perceive force feedback during manipulation for
more intuitive operation. Additionally, we propose a multimodal representation
learning strategy to obtain pre-trained tactile representations, improving data
efficiency and policy robustness. Experiments on seven contact-rich
manipulation tasks demonstrate that ViTaMIn significantly outperforms baseline
methods, demonstrating its effectiveness for complex manipulation tasks.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:51:18 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Fangchen",
""
],
[
"Li",
"Chuanyu",
""
],
[
"Qin",
"Yihua",
""
],
[
"Shaw",
"Ankit",
""
],
[
"Xu",
"Jing",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Chen",
"Rui",
""
]
] | TITLE: ViTaMIn: Learning Contact-Rich Tasks Through Robot-Free Visuo-Tactile
Manipulation Interface
ABSTRACT: Tactile information plays a crucial role for humans and robots to interact
effectively with their environment, particularly for tasks requiring the
understanding of contact properties. Solving such dexterous manipulation tasks
often relies on imitation learning from demonstration datasets, which are
typically collected via teleoperation systems and often demand substantial time
and effort. To address these challenges, we present ViTaMIn, an embodiment-free
manipulation interface that seamlessly integrates visual and tactile sensing
into a hand-held gripper, enabling data collection without the need for
teleoperation. Our design employs a compliant Fin Ray gripper with tactile
sensing, allowing operators to perceive force feedback during manipulation for
more intuitive operation. Additionally, we propose a multimodal representation
learning strategy to obtain pre-trained tactile representations, improving data
efficiency and policy robustness. Experiments on seven contact-rich
manipulation tasks demonstrate that ViTaMIn significantly outperforms baseline
methods, demonstrating its effectiveness for complex manipulation tasks.
| no_new_dataset | 0.951051 |
2504.06158 | Saad Wazir | Saad Wazir, Daeyoung Kim | Rethinking the Nested U-Net Approach: Enhancing Biomarker Segmentation
with Attention Mechanisms and Multiscale Feature Fusion | Published in the Proceedings of the 2024 International Conference on
Medical Imaging and Computer-Aided Diagnosis (MICAD 2024), Lecture Notes in
Electrical Engineering (LNEE), Volume 1372, Springer Nature, Singapore | Lecture Notes in Electrical Engineering, vol. 1372, pp. 175-186,
Springer Nature, Singapore, 2025 | 10.1007/978-981-96-3863-5_17 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Identifying biomarkers in medical images is vital for a wide range of biotech
applications. However, recent Transformer and CNN based methods often struggle
with variations in morphology and staining, which limits their feature
extraction capabilities. In medical image segmentation, where data samples are
often limited, state-of-the-art (SOTA) methods improve accuracy by using
pre-trained encoders, while end-to-end approaches typically fall short due to
difficulties in transferring multiscale features effectively between encoders
and decoders. To handle these challenges, we introduce a nested UNet
architecture that captures both local and global context through Multiscale
Feature Fusion and Attention Mechanisms. This design improves feature
integration from encoders, highlights key channels and regions, and restores
spatial details to enhance segmentation performance. Our method surpasses SOTA
approaches, as evidenced by experiments across four datasets and detailed
ablation studies. Code: https://github.com/saadwazir/ReN-UNet
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 15:53:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wazir",
"Saad",
""
],
[
"Kim",
"Daeyoung",
""
]
] | TITLE: Rethinking the Nested U-Net Approach: Enhancing Biomarker Segmentation
with Attention Mechanisms and Multiscale Feature Fusion
ABSTRACT: Identifying biomarkers in medical images is vital for a wide range of biotech
applications. However, recent Transformer and CNN based methods often struggle
with variations in morphology and staining, which limits their feature
extraction capabilities. In medical image segmentation, where data samples are
often limited, state-of-the-art (SOTA) methods improve accuracy by using
pre-trained encoders, while end-to-end approaches typically fall short due to
difficulties in transferring multiscale features effectively between encoders
and decoders. To handle these challenges, we introduce a nested UNet
architecture that captures both local and global context through Multiscale
Feature Fusion and Attention Mechanisms. This design improves feature
integration from encoders, highlights key channels and regions, and restores
spatial details to enhance segmentation performance. Our method surpasses SOTA
approaches, as evidenced by experiments across four datasets and detailed
ablation studies. Code: https://github.com/saadwazir/ReN-UNet
| no_new_dataset | 0.951953 |
2504.06166 | Montgomery Gole | Montgomery Gole and Andriy Miranskyy | Assessing how hyperparameters impact Large Language Models' sarcasm
detection performance | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sarcasm detection is challenging for both humans and machines. This work
explores how model characteristics impact sarcasm detection in OpenAI's GPT,
and Meta's Llama-2 models, given their strong natural language understanding,
and popularity. We evaluate fine-tuned and zero-shot models across various
sizes, releases, and hyperparameters. Experiments were conducted on the
political and balanced (pol-bal) portion of the popular Self-Annotated Reddit
Corpus (SARC2.0) sarcasm dataset. Fine-tuned performance improves monotonically
with model size within a model family, while hyperparameter tuning also impacts
performance. In the fine-tuning scenario, full precision Llama-2-13b achieves
state-of-the-art accuracy and $F_1$-score, both measured at 0.83, comparable to
average human performance. In the zero-shot setting, one GPT-4 model achieves
competitive performance to prior attempts, yielding an accuracy of 0.70 and an
$F_1$-score of 0.75. Furthermore, a model's performance may increase or decline
with each release, highlighting the need to reassess performance after each
release.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:05:25 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gole",
"Montgomery",
""
],
[
"Miranskyy",
"Andriy",
""
]
] | TITLE: Assessing how hyperparameters impact Large Language Models' sarcasm
detection performance
ABSTRACT: Sarcasm detection is challenging for both humans and machines. This work
explores how model characteristics impact sarcasm detection in OpenAI's GPT,
and Meta's Llama-2 models, given their strong natural language understanding,
and popularity. We evaluate fine-tuned and zero-shot models across various
sizes, releases, and hyperparameters. Experiments were conducted on the
political and balanced (pol-bal) portion of the popular Self-Annotated Reddit
Corpus (SARC2.0) sarcasm dataset. Fine-tuned performance improves monotonically
with model size within a model family, while hyperparameter tuning also impacts
performance. In the fine-tuning scenario, full precision Llama-2-13b achieves
state-of-the-art accuracy and $F_1$-score, both measured at 0.83, comparable to
average human performance. In the zero-shot setting, one GPT-4 model achieves
competitive performance to prior attempts, yielding an accuracy of 0.70 and an
$F_1$-score of 0.75. Furthermore, a model's performance may increase or decline
with each release, highlighting the need to reassess performance after each
release.
| no_new_dataset | 0.931525 |
2504.06176 | Ian Groves | Ian Groves, Andrew Campbell, James Fernandes, Diego Rodriguez, Paul
Murray, Massimiliano Vasile, Victoria Nockles | A Self-Supervised Framework for Space Object Behaviour Characterisation | 15 pages, 10 figures | null | null | null | cs.LG cs.AI physics.space-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Foundation Models, pre-trained on large unlabelled datasets before
task-specific fine-tuning, are increasingly being applied to specialised
domains. Recent examples include ClimaX for climate and Clay for satellite
Earth observation, but a Foundation Model for Space Object Behavioural Analysis
has not yet been developed. As orbital populations grow, automated methods for
characterising space object behaviour are crucial for space safety. We present
a Space Safety and Sustainability Foundation Model focusing on space object
behavioural analysis using light curves (LCs). We implemented a
Perceiver-Variational Autoencoder (VAE) architecture, pre-trained with
self-supervised reconstruction and masked reconstruction on 227,000 LCs from
the MMT-9 observatory. The VAE enables anomaly detection, motion prediction,
and LC generation. We fine-tuned the model for anomaly detection & motion
prediction using two independent LC simulators (CASSANDRA and GRIAL
respectively), using CAD models of boxwing, Sentinel-3, SMOS, and Starlink
platforms. Our pre-trained model achieved a reconstruction error of 0.01%,
identifying potentially anomalous light curves through reconstruction
difficulty. After fine-tuning, the model scored 88% and 82% accuracy, with 0.90
and 0.95 ROC AUC scores respectively in both anomaly detection and motion mode
prediction (sun-pointing, spin, etc.). Analysis of high-confidence anomaly
predictions on real data revealed distinct patterns including characteristic
object profiles and satellite glinting. Here, we demonstrate how
self-supervised learning can simultaneously enable anomaly detection, motion
prediction, and synthetic data generation from rich representations learned in
pre-training. Our work therefore supports space safety and sustainability
through automated monitoring and simulation capabilities.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:19:19 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Groves",
"Ian",
""
],
[
"Campbell",
"Andrew",
""
],
[
"Fernandes",
"James",
""
],
[
"Rodriguez",
"Diego",
""
],
[
"Murray",
"Paul",
""
],
[
"Vasile",
"Massimiliano",
""
],
[
"Nockles",
"Victoria",
""
]
] | TITLE: A Self-Supervised Framework for Space Object Behaviour Characterisation
ABSTRACT: Foundation Models, pre-trained on large unlabelled datasets before
task-specific fine-tuning, are increasingly being applied to specialised
domains. Recent examples include ClimaX for climate and Clay for satellite
Earth observation, but a Foundation Model for Space Object Behavioural Analysis
has not yet been developed. As orbital populations grow, automated methods for
characterising space object behaviour are crucial for space safety. We present
a Space Safety and Sustainability Foundation Model focusing on space object
behavioural analysis using light curves (LCs). We implemented a
Perceiver-Variational Autoencoder (VAE) architecture, pre-trained with
self-supervised reconstruction and masked reconstruction on 227,000 LCs from
the MMT-9 observatory. The VAE enables anomaly detection, motion prediction,
and LC generation. We fine-tuned the model for anomaly detection & motion
prediction using two independent LC simulators (CASSANDRA and GRIAL
respectively), using CAD models of boxwing, Sentinel-3, SMOS, and Starlink
platforms. Our pre-trained model achieved a reconstruction error of 0.01%,
identifying potentially anomalous light curves through reconstruction
difficulty. After fine-tuning, the model scored 88% and 82% accuracy, with 0.90
and 0.95 ROC AUC scores respectively in both anomaly detection and motion mode
prediction (sun-pointing, spin, etc.). Analysis of high-confidence anomaly
predictions on real data revealed distinct patterns including characteristic
object profiles and satellite glinting. Here, we demonstrate how
self-supervised learning can simultaneously enable anomaly detection, motion
prediction, and synthetic data generation from rich representations learned in
pre-training. Our work therefore supports space safety and sustainability
through automated monitoring and simulation capabilities.
| no_new_dataset | 0.956145 |
2504.06185 | Vanessa Borst | Vanessa Borst, Timo Dittus, Tassilo Dege, Astrid Schmieder, and Samuel
Kounev | WoundAmbit: Bridging State-of-the-Art Semantic Segmentation and
Real-World Wound Care | Main paper: 17 pages; supplementary material: 16 pages; paper
submitted to the application track of the European Conference on Machine
Learning and Principles and Practice of Knowledge Discovery in Databases
(ECML PKDD 2025) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Chronic wounds affect a large population, particularly the elderly and
diabetic patients, who often exhibit limited mobility and co-existing health
conditions. Automated wound monitoring via mobile image capture can reduce
in-person physician visits by enabling remote tracking of wound size. Semantic
segmentation is key to this process, yet wound segmentation remains
underrepresented in medical imaging research. To address this, we benchmark
state-of-the-art deep learning models from general-purpose vision, medical
imaging, and top methods from public wound challenges. For fair comparison, we
standardize training, data augmentation, and evaluation, conducting
cross-validation to minimize partitioning bias. We also assess real-world
deployment aspects, including generalization to an out-of-distribution wound
dataset, computational efficiency, and interpretability. Additionally, we
propose a reference object-based approach to convert AI-generated masks into
clinically relevant wound size estimates, and evaluate this, along with mask
quality, for the best models based on physician assessments. Overall, the
transformer-based TransNeXt showed the highest levels of generalizability.
Despite variations in inference times, all models processed at least one image
per second on the CPU, which is deemed adequate for the intended application.
Interpretability analysis typically revealed prominent activations in wound
regions, emphasizing focus on clinically relevant features. Expert evaluation
showed high mask approval for all analyzed models, with VWFormer and ConvNeXtS
backbone performing the best. Size retrieval accuracy was similar across
models, and predictions closely matched expert annotations. Finally, we
demonstrate how our AI-driven wound size estimation framework, WoundAmbit, can
be integrated into a custom telehealth system. Our code will be made available
on GitHub upon publication.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:25:59 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Borst",
"Vanessa",
""
],
[
"Dittus",
"Timo",
""
],
[
"Dege",
"Tassilo",
""
],
[
"Schmieder",
"Astrid",
""
],
[
"Kounev",
"Samuel",
""
]
] | TITLE: WoundAmbit: Bridging State-of-the-Art Semantic Segmentation and
Real-World Wound Care
ABSTRACT: Chronic wounds affect a large population, particularly the elderly and
diabetic patients, who often exhibit limited mobility and co-existing health
conditions. Automated wound monitoring via mobile image capture can reduce
in-person physician visits by enabling remote tracking of wound size. Semantic
segmentation is key to this process, yet wound segmentation remains
underrepresented in medical imaging research. To address this, we benchmark
state-of-the-art deep learning models from general-purpose vision, medical
imaging, and top methods from public wound challenges. For fair comparison, we
standardize training, data augmentation, and evaluation, conducting
cross-validation to minimize partitioning bias. We also assess real-world
deployment aspects, including generalization to an out-of-distribution wound
dataset, computational efficiency, and interpretability. Additionally, we
propose a reference object-based approach to convert AI-generated masks into
clinically relevant wound size estimates, and evaluate this, along with mask
quality, for the best models based on physician assessments. Overall, the
transformer-based TransNeXt showed the highest levels of generalizability.
Despite variations in inference times, all models processed at least one image
per second on the CPU, which is deemed adequate for the intended application.
Interpretability analysis typically revealed prominent activations in wound
regions, emphasizing focus on clinically relevant features. Expert evaluation
showed high mask approval for all analyzed models, with VWFormer and ConvNeXtS
backbone performing the best. Size retrieval accuracy was similar across
models, and predictions closely matched expert annotations. Finally, we
demonstrate how our AI-driven wound size estimation framework, WoundAmbit, can
be integrated into a custom telehealth system. Our code will be made available
on GitHub upon publication.
| no_new_dataset | 0.950824 |
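Aside on the WoundAmbit record above: the abstract mentions a reference object-based approach for turning segmentation masks into physical wound sizes. The snippet below is a minimal sketch of the generic pixel-to-centimetre conversion such an approach relies on; the function name, reference-object dimensions, and example numbers are assumptions for illustration and do not reproduce the paper's pipeline.

```python
def wound_area_cm2(mask_pixel_count, ref_length_px, ref_length_cm):
    """Convert a wound mask's pixel count to cm^2 using a reference object.

    ref_length_px / ref_length_cm give the image scale from an object of
    known physical length visible in the same photo.
    """
    cm_per_px = ref_length_cm / ref_length_px
    return mask_pixel_count * cm_per_px ** 2


# Illustrative numbers: a 2 cm marker spans 100 px; the wound mask covers 12,500 px.
print(wound_area_cm2(12_500, ref_length_px=100, ref_length_cm=2.0))  # 5.0 cm^2
```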
2504.06193 | Zongyue Qin | Zongyue Qin, Shichang Zhang, Mingxuan Ju, Tong Zhao, Neil Shah, Yizhou
Sun | Heuristic Methods are Good Teachers to Distill MLPs for Graph Link
Prediction | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Link prediction is a crucial graph-learning task with applications including
citation prediction and product recommendation. Distilling Graph Neural
Networks (GNNs) teachers into Multi-Layer Perceptrons (MLPs) students has
emerged as an effective approach to achieve strong performance and reduce
computational cost by removing graph dependency. However, existing distillation
methods only use standard GNNs and overlook alternative teachers such as
specialized models for link prediction (GNN4LP) and heuristic methods (e.g.,
common neighbors). This paper first explores the impact of different teachers
in GNN-to-MLP distillation. Surprisingly, we find that stronger teachers do not
always produce stronger students: MLPs distilled from GNN4LP can underperform
those distilled from simpler GNNs, while weaker heuristic methods can teach
MLPs to near-GNN performance with drastically reduced training costs. Building
on these insights, we propose Ensemble Heuristic-Distilled MLPs (EHDM), which
eliminates graph dependencies while effectively integrating complementary
signals via a gating mechanism. Experiments on ten datasets show an average
7.93% improvement over previous GNN-to-MLP approaches with 1.95-3.32 times less
training time, indicating EHDM is an efficient and effective link prediction
method.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:35:11 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qin",
"Zongyue",
""
],
[
"Zhang",
"Shichang",
""
],
[
"Ju",
"Mingxuan",
""
],
[
"Zhao",
"Tong",
""
],
[
"Shah",
"Neil",
""
],
[
"Sun",
"Yizhou",
""
]
] | TITLE: Heuristic Methods are Good Teachers to Distill MLPs for Graph Link
Prediction
ABSTRACT: Link prediction is a crucial graph-learning task with applications including
citation prediction and product recommendation. Distilling Graph Neural
Networks (GNNs) teachers into Multi-Layer Perceptrons (MLPs) students has
emerged as an effective approach to achieve strong performance and reduce
computational cost by removing graph dependency. However, existing distillation
methods only use standard GNNs and overlook alternative teachers such as
specialized models for link prediction (GNN4LP) and heuristic methods (e.g.,
common neighbors). This paper first explores the impact of different teachers
in GNN-to-MLP distillation. Surprisingly, we find that stronger teachers do not
always produce stronger students: MLPs distilled from GNN4LP can underperform
those distilled from simpler GNNs, while weaker heuristic methods can teach
MLPs to near-GNN performance with drastically reduced training costs. Building
on these insights, we propose Ensemble Heuristic-Distilled MLPs (EHDM), which
eliminates graph dependencies while effectively integrating complementary
signals via a gating mechanism. Experiments on ten datasets show an average
7.93% improvement over previous GNN-to-MLP approaches with 1.95-3.32 times less
training time, indicating EHDM is an efficient and effective link prediction
method.
| no_new_dataset | 0.946498 |
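Aside on the EHDM record above: it combines heuristic-distilled MLPs through a gating mechanism. The sketch below shows one generic way a learned gate could blend an MLP link score with a common-neighbor heuristic; the module layout, feature choices, and dimensions are assumptions for illustration and are not the paper's architecture.

```python
import torch
import torch.nn as nn


class GatedLinkScorer(nn.Module):
    """Blend an MLP pairwise score with a structural heuristic via a learned gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.gate = nn.Sequential(nn.Linear(2 * dim + 1, 1), nn.Sigmoid())

    def forward(self, h_u, h_v, common_neighbors):
        pair = torch.cat([h_u, h_v], dim=-1)           # node-pair features
        mlp_score = self.mlp(pair)                      # learned score
        heur_score = torch.log1p(common_neighbors)      # heuristic score
        g = self.gate(torch.cat([pair, heur_score], dim=-1))
        return g * mlp_score + (1.0 - g) * heur_score   # gated ensemble


scorer = GatedLinkScorer(dim=16)
scores = scorer(torch.randn(4, 16), torch.randn(4, 16),
                torch.tensor([[3.0], [0.0], [7.0], [1.0]]))
print(scores.shape)  # torch.Size([4, 1])
```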
2504.06196 | Shekoofeh Azizi | Eric Wang, Samuel Schmidgall, Paul F. Jaeger, Fan Zhang, Rory Pilgrim,
Yossi Matias, Joelle Barral, David Fleet, Shekoofeh Azizi | TxGemma: Efficient and Agentic LLMs for Therapeutics | null | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Therapeutic development is a costly and high-risk endeavor that is often
plagued by high failure rates. To address this, we introduce TxGemma, a suite
of efficient, generalist large language models (LLMs) capable of therapeutic
property prediction as well as interactive reasoning and explainability. Unlike
task-specific models, TxGemma synthesizes information from diverse sources,
enabling broad application across the therapeutic development pipeline. The
suite includes 2B, 9B, and 27B parameter models, fine-tuned from Gemma-2 on a
comprehensive dataset of small molecules, proteins, nucleic acids, diseases,
and cell lines. Across 66 therapeutic development tasks, TxGemma achieved
superior or comparable performance to the state-of-the-art generalist model on
64 (superior on 45), and against state-of-the-art specialist models on 50
(superior on 26). Fine-tuning TxGemma models on therapeutic downstream tasks,
such as clinical trial adverse event prediction, requires less training data
than fine-tuning base LLMs, making TxGemma suitable for data-limited
applications. Beyond these predictive capabilities, TxGemma features
conversational models that bridge the gap between general LLMs and specialized
property predictors. These allow scientists to interact in natural language,
provide mechanistic reasoning for predictions based on molecular structure, and
engage in scientific discussions. Building on this, we further introduce
Agentic-Tx, a generalist therapeutic agentic system powered by Gemini 2.5 that
reasons, acts, manages diverse workflows, and acquires external domain
knowledge. Agentic-Tx surpasses prior leading models on the Humanity's Last
Exam benchmark (Chemistry & Biology) with 52.3% relative improvement over
o3-mini (high) and 26.7% over o3-mini (high) on GPQA (Chemistry) and excels
with improvements of 6.3% (ChemBench-Preference) and 2.4% (ChemBench-Mini) over
o3-mini (high).
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:39:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Eric",
""
],
[
"Schmidgall",
"Samuel",
""
],
[
"Jaeger",
"Paul F.",
""
],
[
"Zhang",
"Fan",
""
],
[
"Pilgrim",
"Rory",
""
],
[
"Matias",
"Yossi",
""
],
[
"Barral",
"Joelle",
""
],
[
"Fleet",
"David",
""
],
[
"Azizi",
"Shekoofeh",
""
]
] | TITLE: TxGemma: Efficient and Agentic LLMs for Therapeutics
ABSTRACT: Therapeutic development is a costly and high-risk endeavor that is often
plagued by high failure rates. To address this, we introduce TxGemma, a suite
of efficient, generalist large language models (LLMs) capable of therapeutic
property prediction as well as interactive reasoning and explainability. Unlike
task-specific models, TxGemma synthesizes information from diverse sources,
enabling broad application across the therapeutic development pipeline. The
suite includes 2B, 9B, and 27B parameter models, fine-tuned from Gemma-2 on a
comprehensive dataset of small molecules, proteins, nucleic acids, diseases,
and cell lines. Across 66 therapeutic development tasks, TxGemma achieved
superior or comparable performance to the state-of-the-art generalist model on
64 (superior on 45), and against state-of-the-art specialist models on 50
(superior on 26). Fine-tuning TxGemma models on therapeutic downstream tasks,
such as clinical trial adverse event prediction, requires less training data
than fine-tuning base LLMs, making TxGemma suitable for data-limited
applications. Beyond these predictive capabilities, TxGemma features
conversational models that bridge the gap between general LLMs and specialized
property predictors. These allow scientists to interact in natural language,
provide mechanistic reasoning for predictions based on molecular structure, and
engage in scientific discussions. Building on this, we further introduce
Agentic-Tx, a generalist therapeutic agentic system powered by Gemini 2.5 that
reasons, acts, manages diverse workflows, and acquires external domain
knowledge. Agentic-Tx surpasses prior leading models on the Humanity's Last
Exam benchmark (Chemistry & Biology) with 52.3% relative improvement over
o3-mini (high) and 26.7% over o3-mini (high) on GPQA (Chemistry) and excels
with improvements of 6.3% (ChemBench-Preference) and 2.4% (ChemBench-Mini) over
o3-mini (high).
| no_new_dataset | 0.948965 |
2504.06207 | Moncef Garouani | Moncef Garouani | An experimental survey and Perspective View on Meta-Learning for
Automated Algorithms Selection and Parametrization | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Considerable progress has been made in the recent literature studies to
tackle the Algorithms Selection and Parametrization (ASP) problem, which is
diversified in multiple meta-learning setups. Yet there is a lack of surveys
and comparative evaluations that critically analyze, summarize and assess the
performance of existing methods. In this paper, we provide an overview of the
state of the art in this continuously evolving field. The survey sheds light on
the motivational reasons for pursuing classifiers selection through
meta-learning. In this regard, Automated Machine Learning (AutoML) is usually
treated as an ASP problem under the umbrella of the democratization of machine
learning. Accordingly, AutoML makes machine learning techniques accessible to
domain scientists who are interested in applying advanced analytics but lack
the required expertise. It can ease the task of manually selecting ML
algorithms and tuning related hyperparameters. We comprehensively discuss the
different phases of classifiers selection based on a generic framework that is
formed as an outcome of reviewing prior works. Subsequently, we propose a
benchmark knowledge base of 4 million previously learned models and present
extensive comparative evaluations of the prominent methods for classifiers
selection based on 08 classification algorithms and 400 benchmark datasets. The
comparative study quantitatively assesses the performance of algorithm
selection methods while emphasizing the strengths and limitations of
existing studies.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 16:51:22 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Garouani",
"Moncef",
""
]
] | TITLE: An experimental survey and Perspective View on Meta-Learning for
Automated Algorithms Selection and Parametrization
ABSTRACT: Considerable progress has been made in the recent literature studies to
tackle the Algorithms Selection and Parametrization (ASP) problem, which is
diversified in multiple meta-learning setups. Yet there is a lack of surveys
and comparative evaluations that critically analyze, summarize and assess the
performance of existing methods. In this paper, we provide an overview of the
state of the art in this continuously evolving field. The survey sheds light on
the motivational reasons for pursuing classifiers selection through
meta-learning. In this regard, Automated Machine Learning (AutoML) is usually
treated as an ASP problem under the umbrella of the democratization of machine
learning. Accordingly, AutoML makes machine learning techniques accessible to
domain scientists who are interested in applying advanced analytics but lack
the required expertise. It can ease the task of manually selecting ML
algorithms and tuning related hyperparameters. We comprehensively discuss the
different phases of classifiers selection based on a generic framework that is
formed as an outcome of reviewing prior works. Subsequently, we propose a
benchmark knowledge base of 4 million previously learned models and present
extensive comparative evaluations of the prominent methods for classifiers
selection based on 08 classification algorithms and 400 benchmark datasets. The
comparative study quantitatively assesses the performance of algorithm
selection methods while emphasizing the strengths and limitations of
existing studies.
| no_new_dataset | 0.941277 |
2504.06219 | Dongyang Fan | Dongyang Fan, Vinko Sabol\v{c}ec, Matin Ansaripour, Ayush Kumar Tarun,
Martin Jaggi, Antoine Bosselut, Imanol Schlag | Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling
Opt-Outs | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The increasing adoption of web crawling opt-outs by copyright holders of
online content raises critical questions about the impact of data compliance on
large language model (LLM) performance. However, little is known about how
these restrictions (and the resultant filtering of pretraining datasets) affect
the capabilities of models trained using these corpora. In this work, we
conceptualize this effect as the $\textit{data compliance gap}$ (DCG), which
quantifies the performance difference between models trained on datasets that
comply with web crawling opt-outs, and those that do not. We measure the data
compliance gap in two settings: pretraining models from scratch and continual
pretraining from existing compliant models (simulating a setting where
copyrighted data could be integrated later in pretraining). Our experiments
with 1.5B models show that, as of January 2025, compliance with web data
opt-outs does not degrade general knowledge acquisition (close to 0\% DCG).
However, in specialized domains such as biomedical research, excluding major
publishers leads to performance declines. These findings suggest that while
general-purpose LLMs can be trained to perform equally well using fully open
data, performance in specialized domains may benefit from access to
high-quality copyrighted sources later in training. Our study provides
empirical insights into the long-debated trade-off between data compliance and
downstream model performance, informing future discussions on AI training
practices and policy decisions.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:08:06 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Fan",
"Dongyang",
""
],
[
"Sabolčec",
"Vinko",
""
],
[
"Ansaripour",
"Matin",
""
],
[
"Tarun",
"Ayush Kumar",
""
],
[
"Jaggi",
"Martin",
""
],
[
"Bosselut",
"Antoine",
""
],
[
"Schlag",
"Imanol",
""
]
] | TITLE: Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling
Opt-Outs
ABSTRACT: The increasing adoption of web crawling opt-outs by copyright holders of
online content raises critical questions about the impact of data compliance on
large language model (LLM) performance. However, little is known about how
these restrictions (and the resultant filtering of pretraining datasets) affect
the capabilities of models trained using these corpora. In this work, we
conceptualize this effect as the $\textit{data compliance gap}$ (DCG), which
quantifies the performance difference between models trained on datasets that
comply with web crawling opt-outs, and those that do not. We measure the data
compliance gap in two settings: pretraining models from scratch and continual
pretraining from existing compliant models (simulating a setting where
copyrighted data could be integrated later in pretraining). Our experiments
with 1.5B models show that, as of January 2025, compliance with web data
opt-outs does not degrade general knowledge acquisition (close to 0\% DCG).
However, in specialized domains such as biomedical research, excluding major
publishers leads to performance declines. These findings suggest that while
general-purpose LLMs can be trained to perform equally well using fully open
data, performance in specialized domains may benefit from access to
high-quality copyrighted sources later in training. Our study provides
empirical insights into the long-debated trade-off between data compliance and
downstream model performance, informing future discussions on AI training
practices and policy decisions.
| no_new_dataset | 0.944944 |
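Aside on the record above: the data compliance gap (DCG) it defines is simply the performance difference between models trained with and without web-crawling opt-out compliance. A minimal sketch of reporting that gap per benchmark follows; the benchmark names and scores are placeholders, not results from the paper.

```python
def data_compliance_gap(unrestricted_scores, compliant_scores):
    """DCG per task: unrestricted-data score minus opt-out-compliant score."""
    return {
        task: round(unrestricted_scores[task] - compliant_scores[task], 2)
        for task in unrestricted_scores
    }


# Placeholder numbers purely for illustration.
unrestricted = {"general_knowledge": 42.3, "biomedical_qa": 58.4}
compliant = {"general_knowledge": 42.2, "biomedical_qa": 55.1}
print(data_compliance_gap(unrestricted, compliant))
# {'general_knowledge': 0.1, 'biomedical_qa': 3.3}
```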
2504.06227 | Krithi Shailya | Krithi Shailya, Shreya Rajpal, Gokul S Krishnan, Balaraman Ravindran | LExT: Towards Evaluating Trustworthiness of Natural Language
Explanations | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As Large Language Models (LLMs) become increasingly integrated into
high-stakes domains, there have been several approaches proposed toward
generating natural language explanations. These explanations are crucial for
enhancing the interpretability of a model, especially in sensitive domains like
healthcare, where transparency and reliability are key. In light of such
explanations being generated by LLMs and their known concerns, there is a growing
need for robust evaluation frameworks to assess model-generated explanations.
Natural Language Generation metrics like BLEU and ROUGE capture syntactic and
semantic accuracies but overlook other crucial aspects such as factual
accuracy, consistency, and faithfulness. To address this gap, we propose a
general framework for quantifying trustworthiness of natural language
explanations, balancing Plausibility and Faithfulness, to derive a
comprehensive Language Explanation Trustworthiness Score (LExT) (The code and
set up to reproduce our experiments are publicly available at
https://github.com/cerai-iitm/LExT). Applying our domain-agnostic framework to
the healthcare domain using public medical datasets, we evaluate six models,
including domain-specific and general-purpose models. Our findings demonstrate
significant differences in their ability to generate trustworthy explanations.
On comparing these explanations, we make interesting observations such as
inconsistencies in Faithfulness demonstrated by general-purpose models and
their tendency to outperform domain-specific fine-tuned models. This work
further highlights the importance of using a tailored evaluation framework to
assess natural language explanations in sensitive fields, providing a
foundation for improving the trustworthiness and transparency of language
models in healthcare and beyond.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:16:52 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shailya",
"Krithi",
""
],
[
"Rajpal",
"Shreya",
""
],
[
"Krishnan",
"Gokul S",
""
],
[
"Ravindran",
"Balaraman",
""
]
] | TITLE: LExT: Towards Evaluating Trustworthiness of Natural Language
Explanations
ABSTRACT: As Large Language Models (LLMs) become increasingly integrated into
high-stakes domains, there have been several approaches proposed toward
generating natural language explanations. These explanations are crucial for
enhancing the interpretability of a model, especially in sensitive domains like
healthcare, where transparency and reliability are key. In light of such
explanations being generated by LLMs and their known concerns, there is a growing
need for robust evaluation frameworks to assess model-generated explanations.
Natural Language Generation metrics like BLEU and ROUGE capture syntactic and
semantic accuracies but overlook other crucial aspects such as factual
accuracy, consistency, and faithfulness. To address this gap, we propose a
general framework for quantifying trustworthiness of natural language
explanations, balancing Plausibility and Faithfulness, to derive a
comprehensive Language Explanation Trustworthiness Score (LExT) (The code and
set up to reproduce our experiments are publicly available at
https://github.com/cerai-iitm/LExT). Applying our domain-agnostic framework to
the healthcare domain using public medical datasets, we evaluate six models,
including domain-specific and general-purpose models. Our findings demonstrate
significant differences in their ability to generate trustworthy explanations.
On comparing these explanations, we make interesting observations such as
inconsistencies in Faithfulness demonstrated by general-purpose models and
their tendency to outperform domain-specific fine-tuned models. This work
further highlights the importance of using a tailored evaluation framework to
assess natural language explanations in sensitive fields, providing a
foundation for improving the trustworthiness and transparency of language
models in healthcare and beyond.
| no_new_dataset | 0.956391 |
2504.06235 | Shahryar Zehtabi | Shahryar Zehtabi, Dong-Jun Han, Seyyedali Hosseinalipour, Christopher
G. Brinton | Decentralized Federated Domain Generalization with Style Sharing: A
Formal Modeling and Convergence Analysis | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Much of the federated learning (FL) literature focuses on settings where
local dataset statistics remain the same between training and testing time.
Recent advances in domain generalization (DG) aim to use data from source
(training) domains to train a model that generalizes well to data from unseen
target (testing) domains. In this paper, we are motivated by two major gaps in
existing work on FL and DG: (1) the lack of formal mathematical analysis of DG
objectives and training processes; and (2) DG research in FL being limited to
the conventional star-topology architecture. Addressing the second gap, we
develop $\textit{Decentralized Federated Domain Generalization with Style
Sharing}$ ($\texttt{StyleDDG}$), a fully decentralized DG algorithm designed to
allow devices in a peer-to-peer network to achieve DG based on sharing style
information inferred from their datasets. Additionally, we fill the first gap
by providing the first systematic approach to mathematically analyzing
style-based DG training optimization. We cast existing centralized DG
algorithms within our framework, and employ their formalisms to model
$\texttt{StyleDDG}$. Based on this, we obtain analytical conditions under which
a sub-linear convergence rate of $\texttt{StyleDDG}$ can be obtained. Through
experiments on two popular DG datasets, we demonstrate that $\texttt{StyleDDG}$
can obtain significant improvements in accuracy across target domains with
minimal added communication overhead compared to decentralized gradient methods
that do not employ style sharing.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:32:56 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zehtabi",
"Shahryar",
""
],
[
"Han",
"Dong-Jun",
""
],
[
"Hosseinalipour",
"Seyyedali",
""
],
[
"Brinton",
"Christopher G.",
""
]
] | TITLE: Decentralized Federated Domain Generalization with Style Sharing: A
Formal Modeling and Convergence Analysis
ABSTRACT: Much of the federated learning (FL) literature focuses on settings where
local dataset statistics remain the same between training and testing time.
Recent advances in domain generalization (DG) aim to use data from source
(training) domains to train a model that generalizes well to data from unseen
target (testing) domains. In this paper, we are motivated by two major gaps in
existing work on FL and DG: (1) the lack of formal mathematical analysis of DG
objectives and training processes; and (2) DG research in FL being limited to
the conventional star-topology architecture. Addressing the second gap, we
develop $\textit{Decentralized Federated Domain Generalization with Style
Sharing}$ ($\texttt{StyleDDG}$), a fully decentralized DG algorithm designed to
allow devices in a peer-to-peer network to achieve DG based on sharing style
information inferred from their datasets. Additionally, we fill the first gap
by providing the first systematic approach to mathematically analyzing
style-based DG training optimization. We cast existing centralized DG
algorithms within our framework, and employ their formalisms to model
$\texttt{StyleDDG}$. Based on this, we obtain analytical conditions under which
a sub-linear convergence rate of $\texttt{StyleDDG}$ can be obtained. Through
experiments on two popular DG datasets, we demonstrate that $\texttt{StyleDDG}$
can obtain significant improvements in accuracy across target domains with
minimal added communication overhead compared to decentralized gradient methods
that do not employ style sharing.
| no_new_dataset | 0.947769 |
2504.06237 | Mina Bishay | Mina Bishay, Graham Page, Waleed Emad, and Mohammad Mavadati | Monitoring Viewer Attention During Online Ads | Presented at the ECCV 2024 Workshops | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, video ads spread through numerous online platforms, and are being
watched by millions of viewers worldwide. Big brands gauge the liking and
purchase intent of their new ads, by analyzing the facial responses of viewers
recruited online to watch the ads from home or work. Although this approach
captures naturalistic responses, it is susceptible to distractions inherent in
the participants' environments, such as a movie playing on TV, a colleague
speaking, or mobile notifications. Inattentive participants should get flagged
and eliminated to avoid skewing the ad-testing process. In this paper we
introduce an architecture for monitoring viewer attention during online ads.
Leveraging two behavior analysis toolkits, AFFDEX 2.0 and SmartEye SDK, we
extract low-level facial features encompassing facial expressions, head pose,
and gaze direction. These features are then combined to extract high-level
features that include estimated gaze on the screen plane, yawning, speaking,
etc -- this enables the identification of four primary distractors; off-screen
gaze, drowsiness, speaking, and unattended screen. Our architecture tailors the
gaze settings according to the device type (desktop or mobile). We validate our
architecture first on datasets annotated for specific distractors, and then on
a real-world ad testing dataset with various distractors. The proposed
architecture shows promising results in detecting distraction across both
desktop and mobile devices.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:34:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bishay",
"Mina",
""
],
[
"Page",
"Graham",
""
],
[
"Emad",
"Waleed",
""
],
[
"Mavadati",
"Mohammad",
""
]
] | TITLE: Monitoring Viewer Attention During Online Ads
ABSTRACT: Nowadays, video ads spread through numerous online platforms, and are being
watched by millions of viewers worldwide. Big brands gauge the liking and
purchase intent of their new ads, by analyzing the facial responses of viewers
recruited online to watch the ads from home or work. Although this approach
captures naturalistic responses, it is susceptible to distractions inherent in
the participants' environments, such as a movie playing on TV, a colleague
speaking, or mobile notifications. Inattentive participants should get flagged
and eliminated to avoid skewing the ad-testing process. In this paper we
introduce an architecture for monitoring viewer attention during online ads.
Leveraging two behavior analysis toolkits, AFFDEX 2.0 and SmartEye SDK, we
extract low-level facial features encompassing facial expressions, head pose,
and gaze direction. These features are then combined to extract high-level
features that include estimated gaze on the screen plane, yawning, speaking,
etc -- this enables the identification of four primary distractors; off-screen
gaze, drowsiness, speaking, and unattended screen. Our architecture tailors the
gaze settings according to the device type (desktop or mobile). We validate our
architecture first on datasets annotated for specific distractors, and then on
a real-world ad testing dataset with various distractors. The proposed
architecture shows promising results in detecting distraction across both
desktop and mobile devices.
| no_new_dataset | 0.926703 |
2504.06263 | Yiying Yang | Yiying Yang, Wei Cheng, Sijin Chen, Xianfang Zeng, Jiaxu Zhang, Liao
Wang, Gang Yu, Xingjun Ma, Yu-Gang Jiang | OmniSVG: A Unified Scalable Vector Graphics Generation Model | 18 pages; Project Page: https://omnisvg.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Scalable Vector Graphics (SVG) is an important image format widely adopted in
graphic design because of its resolution independence and editability. The
study of generating high-quality SVG has continuously drawn attention from both
designers and researchers in the AIGC community. However, existing methods
either produce unstructured outputs with huge computational cost or are limited
to generating monochrome icons of over-simplified structures. To produce
high-quality and complex SVG, we propose OmniSVG, a unified framework that
leverages pre-trained Vision-Language Models (VLMs) for end-to-end multimodal
SVG generation. By parameterizing SVG commands and coordinates into discrete
tokens, OmniSVG decouples structural logic from low-level geometry for
efficient training while maintaining the expressiveness of complex SVG
structure. To further advance the development of SVG synthesis, we introduce
MMSVG-2M, a multimodal dataset with two million richly annotated SVG assets,
along with a standardized evaluation protocol for conditional SVG generation
tasks. Extensive experiments show that OmniSVG outperforms existing methods and
demonstrates its potential for integration into professional SVG design
workflows.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:59:49 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yang",
"Yiying",
""
],
[
"Cheng",
"Wei",
""
],
[
"Chen",
"Sijin",
""
],
[
"Zeng",
"Xianfang",
""
],
[
"Zhang",
"Jiaxu",
""
],
[
"Wang",
"Liao",
""
],
[
"Yu",
"Gang",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | TITLE: OmniSVG: A Unified Scalable Vector Graphics Generation Model
ABSTRACT: Scalable Vector Graphics (SVG) is an important image format widely adopted in
graphic design because of its resolution independence and editability. The
study of generating high-quality SVG has continuously drawn attention from both
designers and researchers in the AIGC community. However, existing methods
either produce unstructured outputs with huge computational cost or are limited
to generating monochrome icons of over-simplified structures. To produce
high-quality and complex SVG, we propose OmniSVG, a unified framework that
leverages pre-trained Vision-Language Models (VLMs) for end-to-end multimodal
SVG generation. By parameterizing SVG commands and coordinates into discrete
tokens, OmniSVG decouples structural logic from low-level geometry for
efficient training while maintaining the expressiveness of complex SVG
structure. To further advance the development of SVG synthesis, we introduce
MMSVG-2M, a multimodal dataset with two million richly annotated SVG assets,
along with a standardized evaluation protocol for conditional SVG generation
tasks. Extensive experiments show that OmniSVG outperforms existing methods and
demonstrates its potential for integration into professional SVG design
workflows.
| new_dataset | 0.957952 |
2504.06264 | Jisang Han | Jisang Han, Honggyu An, Jaewoo Jung, Takuya Narihira, Junyoung Seo,
Kazumi Fukuda, Chaehyun Kim, Sunghwan Hong, Yuki Mitsufuji, Seungryong Kim | D^2USt3R: Enhancing 3D Reconstruction with 4D Pointmaps for Dynamic
Scenes | project page: https://cvlab-kaist.github.io/DDUSt3R/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We address the task of 3D reconstruction in dynamic scenes, where object
motions degrade the quality of previous 3D pointmap regression methods, such as
DUSt3R, originally designed for static 3D scene reconstruction. Although these
methods provide an elegant and powerful solution in static settings, they
struggle in the presence of dynamic motions that disrupt alignment based solely
on camera poses. To overcome this, we propose D^2USt3R that regresses 4D
pointmaps that simultaneiously capture both static and dynamic 3D scene
geometry in a feed-forward manner. By explicitly incorporating both spatial and
temporal aspects, our approach successfully encapsulates spatio-temporal dense
correspondence to the proposed 4D pointmaps, enhancing downstream tasks.
Extensive experimental evaluations demonstrate that our proposed approach
consistently achieves superior reconstruction performance across various
datasets featuring complex motions.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 17:59:50 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Han",
"Jisang",
""
],
[
"An",
"Honggyu",
""
],
[
"Jung",
"Jaewoo",
""
],
[
"Narihira",
"Takuya",
""
],
[
"Seo",
"Junyoung",
""
],
[
"Fukuda",
"Kazumi",
""
],
[
"Kim",
"Chaehyun",
""
],
[
"Hong",
"Sunghwan",
""
],
[
"Mitsufuji",
"Yuki",
""
],
[
"Kim",
"Seungryong",
""
]
] | TITLE: D^2USt3R: Enhancing 3D Reconstruction with 4D Pointmaps for Dynamic
Scenes
ABSTRACT: We address the task of 3D reconstruction in dynamic scenes, where object
motions degrade the quality of previous 3D pointmap regression methods, such as
DUSt3R, originally designed for static 3D scene reconstruction. Although these
methods provide an elegant and powerful solution in static settings, they
struggle in the presence of dynamic motions that disrupt alignment based solely
on camera poses. To overcome this, we propose D^2USt3R that regresses 4D
pointmaps that simultaneously capture both static and dynamic 3D scene
geometry in a feed-forward manner. By explicitly incorporating both spatial and
temporal aspects, our approach successfully encapsulates spatio-temporal dense
correspondence to the proposed 4D pointmaps, enhancing downstream tasks.
Extensive experimental evaluations demonstrate that our proposed approach
consistently achieves superior reconstruction performance across various
datasets featuring complex motions.
| no_new_dataset | 0.946101 |
2108.11328 | Shibal Ibrahim | Shibal Ibrahim, Peter Radchenko, Emanuel Ben-David, Rahul Mazumder | Predicting Census Survey Response Rates With Parsimonious Additive
Models and Structured Interactions | Published in Annals of Applied Statistics | The Annals of Applied Statistics 2025, Vol. 19, No. 1, 94-120 | 10.1214/24-AOAS1929 | null | stat.ML cs.LG stat.AP stat.CO | http://creativecommons.org/licenses/by/4.0/ | In this paper, we consider the problem of predicting survey response rates
using a family of flexible and interpretable nonparametric models. The study is
motivated by the US Census Bureau's well-known ROAM application, which uses a
linear regression model trained on the US Census Planning Database data to
identify hard-to-survey areas. A crowdsourcing competition (Erdman and Bates,
2016) organized more than ten years ago revealed that machine learning methods
based on ensembles of regression trees led to the best performance in
predicting survey response rates; however, the corresponding models could not
be adopted for the intended application due to their black-box nature. We
consider nonparametric additive models with a small number of main and pairwise
interaction effects using $\ell_0$-based penalization. From a methodological
viewpoint, we study our estimator's computational and statistical aspects and
discuss variants incorporating strong hierarchical interactions. Our algorithms
(open-sourced on GitHub) extend the computational frontiers of existing
algorithms for sparse additive models to be able to handle datasets relevant to
the application we consider. We discuss and interpret findings from our model
on the US Census Planning Database. In addition to being useful from an
interpretability standpoint, our models lead to predictions comparable to
popular black-box machine learning methods based on gradient boosting and
feedforward neural networks - suggesting that it is possible to have models
that have the best of both worlds: good model accuracy and interpretability.
| [
{
"version": "v1",
"created": "Tue, 24 Aug 2021 17:49:55 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 16:09:18 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 17:10:01 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Dec 2023 19:05:08 GMT"
},
{
"version": "v5",
"created": "Sun, 6 Apr 2025 02:27:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ibrahim",
"Shibal",
""
],
[
"Radchenko",
"Peter",
""
],
[
"Ben-David",
"Emanuel",
""
],
[
"Mazumder",
"Rahul",
""
]
] | TITLE: Predicting Census Survey Response Rates With Parsimonious Additive
Models and Structured Interactions
ABSTRACT: In this paper, we consider the problem of predicting survey response rates
using a family of flexible and interpretable nonparametric models. The study is
motivated by the US Census Bureau's well-known ROAM application, which uses a
linear regression model trained on the US Census Planning Database data to
identify hard-to-survey areas. A crowdsourcing competition (Erdman and Bates,
2016) organized more than ten years ago revealed that machine learning methods
based on ensembles of regression trees led to the best performance in
predicting survey response rates; however, the corresponding models could not
be adopted for the intended application due to their black-box nature. We
consider nonparametric additive models with a small number of main and pairwise
interaction effects using $\ell_0$-based penalization. From a methodological
viewpoint, we study our estimator's computational and statistical aspects and
discuss variants incorporating strong hierarchical interactions. Our algorithms
(open-sourced on GitHub) extend the computational frontiers of existing
algorithms for sparse additive models to be able to handle datasets relevant to
the application we consider. We discuss and interpret findings from our model
on the US Census Planning Database. In addition to being useful from an
interpretability standpoint, our models lead to predictions comparable to
popular black-box machine learning methods based on gradient boosting and
feedforward neural networks - suggesting that it is possible to have models
that have the best of both worlds: good model accuracy and interpretability.
| no_new_dataset | 0.944944 |
2201.12577 | John Chiang | John Chiang | Volley Revolver: A Novel Matrix-Encoding Method for Privacy-Preserving
Neural Networks (Inference) | The encoding method we proposed in this work, $\texttt{Volley
Revolver}$, is particularly tailored for privacy-preserving neural networks.
There is a great chance that it can be used to assist the private neural
networks training, in which case for the backpropagation algorithm of the
fully-connected layer the first matrix $A$ is revolved while the second
matrix $B$ is settled to be still | null | null | null | cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a novel matrix-encoding method that is particularly
convenient for neural networks to make predictions in a privacy-preserving
manner using homomorphic encryption. Based on this encoding method, we
implement a convolutional neural network for handwritten image classification
over encryption. For two matrices $A$ and $B$ to perform homomorphic
multiplication, the main idea behind it, in a simple version, is to encrypt
matrix $A$ and the transpose of matrix $B$ into two ciphertexts respectively.
With additional operations, the homomorphic matrix multiplication can be
calculated over encrypted matrices efficiently. For the convolution operation,
we expand each convolution kernel in advance into a matrix of the same size
as the input image so as to generate several ciphertexts, each of which is
later used together with the ciphertext encrypting input images for calculating
some of the final convolution results. We accumulate all these intermediate
results and thus complete the convolution operation.
In a public cloud with 40 vCPUs, our convolutional neural network
implementation on the MNIST testing dataset takes $\sim$ 287 seconds to compute
ten likelihoods of 32 encrypted images of size $28 \times 28$ simultaneously.
The data owner only needs to upload one ciphertext ($\sim 19.8$ MB) encrypting
these 32 images to the public cloud.
| [
{
"version": "v1",
"created": "Sat, 29 Jan 2022 12:40:19 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Aug 2022 06:44:34 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2023 12:14:21 GMT"
},
{
"version": "v4",
"created": "Tue, 9 Jan 2024 00:52:21 GMT"
},
{
"version": "v5",
"created": "Wed, 14 Aug 2024 13:07:13 GMT"
},
{
"version": "v6",
"created": "Thu, 24 Oct 2024 09:05:36 GMT"
},
{
"version": "v7",
"created": "Sun, 6 Apr 2025 11:57:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chiang",
"John",
""
]
] | TITLE: Volley Revolver: A Novel Matrix-Encoding Method for Privacy-Preserving
Neural Networks (Inference)
ABSTRACT: In this work, we present a novel matrix-encoding method that is particularly
convenient for neural networks to make predictions in a privacy-preserving
manner using homomorphic encryption. Based on this encoding method, we
implement a convolutional neural network for handwritten image classification
over encryption. For two matrices $A$ and $B$ to perform homomorphic
multiplication, the main idea behind it, in a simple version, is to encrypt
matrix $A$ and the transpose of matrix $B$ into two ciphertexts respectively.
With additional operations, the homomorphic matrix multiplication can be
calculated over encrypted matrices efficiently. For the convolution operation,
we expand each convolution kernel in advance into a matrix of the same size
as the input image so as to generate several ciphertexts, each of which is
later used together with the ciphertext encrypting input images for calculating
some of the final convolution results. We accumulate all these intermediate
results and thus complete the convolution operation.
In a public cloud with 40 vCPUs, our convolutional neural network
implementation on the MNIST testing dataset takes $\sim$ 287 seconds to compute
ten likelihoods of 32 encrypted images of size $28 \times 28$ simultaneously.
The data owner only needs to upload one ciphertext ($\sim 19.8$ MB) encrypting
these 32 images to the public cloud.
| no_new_dataset | 0.936634 |
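A minimal plaintext NumPy sketch (not the authors' code) of the idea in the record above: the product of A and B is recovered from A and the transpose of B using only element-wise products, row rotations, and row-wise sums, the kinds of operations that map naturally onto ciphertext slots. Shapes and function names are illustrative assumptions.

```python
import numpy as np

def rotate_rows(M, k):
    # Row i of the result holds row (i + k) mod p of M.
    return np.roll(M, -k, axis=0)

def matmul_via_row_rotation(A, B):
    # Plaintext analogue of multiplying encrypted A with encrypted B^T:
    # only element-wise products, row rotations of B^T, and row-wise sums are used.
    m, n = A.shape
    n2, p = B.shape
    assert n == n2 and m == p, "sketch assumes m == p for simplicity"
    Bt = B.T                                  # mirrors encrypting the transpose of B
    C = np.zeros((m, p))
    for k in range(p):
        partial = (A * rotate_rows(Bt, k)).sum(axis=1)   # partial[i] = C[i, (i + k) % p]
        for i in range(m):
            C[i, (i + k) % p] = partial[i]
    return C

A = np.random.randn(4, 3)
B = np.random.randn(3, 4)
assert np.allclose(matmul_via_row_rotation(A, B), A @ B)
```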
2305.12352 | Wenzhi Gao | Yanguang Chen, Wenzhi Gao, Wanyu Zhang, Dongdong Ge, Huikang Liu,
Yinyu Ye | Data-driven Mixed Integer Optimization through Probabilistic
Multi-variable Branching | null | null | null | null | math.OC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a Pre-trained Mixed Integer Optimization framework
(PreMIO) that accelerates online mixed integer program (MIP) solving with
offline datasets and machine learning models. Our method is based on a
data-driven multi-variable cardinality branching procedure that splits the MIP
feasible region using hyperplanes chosen by the concentration inequalities.
Unlike most previous ML+MIP approaches that either require complicated
implementation or suffer from a lack of theoretical justification, our method
is simple, flexible, provable, and explainable. Numerical experiments on both
classical OR benchmark datasets and real-life instances validate the efficiency
of our proposed method.
| [
{
"version": "v1",
"created": "Sun, 21 May 2023 05:11:30 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2024 21:46:50 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 18:09:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Yanguang",
""
],
[
"Gao",
"Wenzhi",
""
],
[
"Zhang",
"Wanyu",
""
],
[
"Ge",
"Dongdong",
""
],
[
"Liu",
"Huikang",
""
],
[
"Ye",
"Yinyu",
""
]
] | TITLE: Data-driven Mixed Integer Optimization through Probabilistic
Multi-variable Branching
ABSTRACT: In this paper, we propose a Pre-trained Mixed Integer Optimization framework
(PreMIO) that accelerates online mixed integer program (MIP) solving with
offline datasets and machine learning models. Our method is based on a
data-driven multi-variable cardinality branching procedure that splits the MIP
feasible region using hyperplanes chosen by the concentration inequalities.
Unlike most previous ML+MIP approaches that either require complicated
implementation or suffer from a lack of theoretical justification, our method
is simple, flexible, provable, and explainable. Numerical experiments on both
classical OR benchmark datasets and real-life instances validate the efficiency
of our proposed method.
| no_new_dataset | 0.949435 |
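The abstract above does not spell out the exact branching rule, but a hedged sketch of how a cardinality hyperplane could be derived from ML-predicted marginals via Hoeffding's inequality is shown below; variable names and the confidence level are assumptions, not the paper's procedure.

```python
import math

def cardinality_branching_bounds(probs, delta=0.05):
    # probs: predicted marginals P(x_i = 1) for a subset S of binary variables.
    # Hoeffding: P(|sum_i x_i - mu| >= t) <= 2 exp(-2 t^2 / n), so with probability
    # at least 1 - delta the optimal cardinality lies in [mu - t, mu + t].
    n = len(probs)
    mu = sum(probs)
    t = math.sqrt(0.5 * n * math.log(2.0 / delta))
    lo = max(0, math.floor(mu - t))
    hi = min(n, math.ceil(mu + t))
    return lo, hi   # add the constraint lo <= sum_{i in S} x_i <= hi before solving

print(cardinality_branching_bounds([0.9, 0.8, 0.1, 0.05, 0.7]))
```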
2307.00976 | Ruitao Xie | Ruimin Ma, Ruitao Xie, Yanlin Wang, Jintao Meng, Yanjie Wei, Wenhui
Xi, Yi Pan | Autism Spectrum Disorder Classification with Interpretability in
Children based on Structural MRI Features Extracted using Contrastive
Variational Autoencoder | null | Big Data Mining and Analytics, 2024, 7(3): 781-793 | 10.26599/BDMA.2024.9020004 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autism spectrum disorder (ASD) is a highly disabling mental disease that
brings significant impairments of social interaction ability to the patients,
making early screening and intervention of ASD critical. With the development
of machine learning and neuroimaging technology, extensive research has
been conducted on machine classification of ASD based on structural Magnetic
Resonance Imaging (s-MRI). However, most studies involve datasets where
participants' ages are above 5 and lack interpretability. In this paper, we
propose a machine learning method for ASD classification in children with age
range from 0.92 to 4.83 years, based on s-MRI features extracted using
contrastive variational autoencoder (CVAE). 78 s-MRIs, collected from Shenzhen
Children's Hospital, are used for training CVAE, which consists of both
ASD-specific feature channel and common shared feature channel. The ASD
participants represented by ASD-specific features can be easily discriminated
from TC participants represented by the common shared features. In case of
degraded predictive accuracy when data size is extremely small, a transfer
learning strategy is proposed here as a potential solution. Finally, we conduct
neuroanatomical interpretation based on the correlation between s-MRI features
extracted from CVAE and surface area of different cortical regions, which
discloses potential biomarkers that could help target treatments of ASD in the
future.
| [
{
"version": "v1",
"created": "Mon, 3 Jul 2023 12:46:19 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:32:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Ruimin",
""
],
[
"Xie",
"Ruitao",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Meng",
"Jintao",
""
],
[
"Wei",
"Yanjie",
""
],
[
"Xi",
"Wenhui",
""
],
[
"Pan",
"Yi",
""
]
] | TITLE: Autism Spectrum Disorder Classification with Interpretability in
Children based on Structural MRI Features Extracted using Contrastive
Variational Autoencoder
ABSTRACT: Autism spectrum disorder (ASD) is a highly disabling mental disease that
brings significant impairments of social interaction ability to the patients,
making early screening and intervention of ASD critical. With the development
of machine learning and neuroimaging technology, extensive research has
been conducted on machine classification of ASD based on structural Magnetic
Resonance Imaging (s-MRI). However, most studies involve datasets where
participants' ages are above 5 and lack interpretability. In this paper, we
propose a machine learning method for ASD classification in children with age
range from 0.92 to 4.83 years, based on s-MRI features extracted using
contrastive variational autoencoder (CVAE). 78 s-MRIs, collected from Shenzhen
Children's Hospital, are used for training CVAE, which consists of both
ASD-specific feature channel and common shared feature channel. The ASD
participants represented by ASD-specific features can be easily discriminated
from TC participants represented by the common shared features. In case of
degraded predictive accuracy when data size is extremely small, a transfer
learning strategy is proposed here as a potential solution. Finally, we conduct
neuroanatomical interpretation based on the correlation between s-MRI features
extracted from CVAE and surface area of different cortical regions, which
discloses potential biomarkers that could help target treatments of ASD in the
future.
| no_new_dataset | 0.947962 |
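A compact PyTorch sketch of the contrastive-VAE idea described in the record above: a salient latent intended for ASD-specific variation and a background latent shared with controls, where control samples are reconstructed from the shared latent only. Layer sizes, latent dimensions, and loss weighting are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ContrastiveVAE(nn.Module):
    def __init__(self, in_dim, z_s=8, z_b=8, hidden=128):
        super().__init__()
        # Salient encoder (target-specific factors) and background encoder (shared factors).
        self.enc_s = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_s))
        self.enc_b = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * z_b))
        self.dec = nn.Sequential(nn.Linear(z_s + z_b, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar), mu, logvar

    def forward(self, x, is_target):
        z_s, mu_s, lv_s = self.reparam(self.enc_s(x))
        z_b, mu_b, lv_b = self.reparam(self.enc_b(x))
        z_s = z_s * is_target.unsqueeze(-1)       # controls use the shared latent only
        recon = self.dec(torch.cat([z_s, z_b], dim=-1))
        kl = -0.5 * torch.sum(1 + lv_s - mu_s ** 2 - lv_s.exp()) \
             - 0.5 * torch.sum(1 + lv_b - mu_b ** 2 - lv_b.exp())
        return recon, kl

model = ContrastiveVAE(in_dim=64)
x = torch.randn(4, 64)                             # toy s-MRI feature vectors
is_target = torch.tensor([1.0, 1.0, 0.0, 0.0])     # 1 = ASD, 0 = typically developing control
recon, kl = model(x, is_target)
loss = ((recon - x) ** 2).mean() + 1e-3 * kl / x.shape[0]
loss.backward()
```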
2307.14591 | Junchao Huang | Junchao Huang, Xiaoqi He, Yebo Wu and Sheng Zhao | The detection and rectification for identity-switch based on unfalsified
control | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of multi-object tracking (MOT) is to continuously track and
identify objects detected in videos. Currently, most methods for multi-object
tracking model the motion information and combine it with appearance
information to determine and track objects. In this paper, unfalsified control
is employed to address the ID-switch problem in multi-object tracking. We
establish sequences of appearance information variations for the trajectories
during the tracking process and design a detection and rectification module
specifically for ID-switch detection and recovery. We also propose a simple and
effective strategy to address the issue of ambiguous matching of appearance
information during the data association process. Experimental results on
publicly available MOT datasets demonstrate that the tracker exhibits excellent
effectiveness and robustness in handling tracking errors caused by occlusions
and rapid movements.
| [
{
"version": "v1",
"created": "Thu, 27 Jul 2023 02:30:12 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 13:11:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Huang",
"Junchao",
""
],
    [
      "He",
      "Xiaoqi",
      ""
    ],
    [
      "Wu",
      "Yebo",
      ""
    ],
[
"Zhao",
"Sheng",
""
]
] | TITLE: The detection and rectification for identity-switch based on unfalsified
control
ABSTRACT: The purpose of multi-object tracking (MOT) is to continuously track and
identify objects detected in videos. Currently, most methods for multi-object
tracking model the motion information and combine it with appearance
information to determine and track objects. In this paper, unfalsified control
is employed to address the ID-switch problem in multi-object tracking. We
establish sequences of appearance information variations for the trajectories
during the tracking process and design a detection and rectification module
specifically for ID-switch detection and recovery. We also propose a simple and
effective strategy to address the issue of ambiguous matching of appearance
information during the data association process. Experimental results on
publicly available MOT datasets demonstrate that the tracker exhibits excellent
effectiveness and robustness in handling tracking errors caused by occlusions
and rapid movements.
| no_new_dataset | 0.948202 |
2307.16082 | Mohammadali Sefidi Esfahani | Mohammadali Sefidi Esfahani, Mohammad Akbari | EnrichEvent: Enriching Social Data with Contextual Information for
Emerging Event Extraction | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Social platforms have emerged as crucial venues for distributing
information and discussing social events, offering researchers an excellent
opportunity to design and implement novel event detection frameworks.
Identifying unspecified events and detecting events without prior knowledge
enables governments, aid agencies, and experts to respond swiftly and
effectively to unfolding situations, such as natural disasters, by assessing
severity and optimizing aid delivery. Social data is characterized by
misspellings, incompleteness, word sense ambiguity, and irregular language.
While discussing an ongoing event, users share different opinions and
perspectives based on their prior experience, background, and knowledge. Prior
works primarily leverage tweets' lexical and structural patterns to capture
users' opinions and views about events. In this study, we propose an end-to-end
novel framework, EnrichEvent, to identify unspecified events from streaming
social data. In addition to lexical and structural patterns, we leverage
contextual knowledge of the tweets to enrich their representation and gain a
better perspective on users' opinions about events. Compared to our baselines,
the EnrichEvent framework achieves the highest values for Consolidation outcome
with an average of 87% vs. 67% and the lowest for Discrimination outcome with
an average of 10% vs. 16%. Moreover, the Trending Data Extraction module in the
EnrichEvent framework improves efficiency by reducing Runtime by up to 50% by
identifying and discarding irrelevant tweets within message blocks, making the
framework highly scalable for processing streaming data. Our source code and
dataset are available in our official replication package.
| [
{
"version": "v1",
"created": "Sat, 29 Jul 2023 21:37:55 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 09:00:25 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Dec 2023 14:27:55 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Dec 2023 09:58:25 GMT"
},
{
"version": "v5",
"created": "Wed, 27 Nov 2024 15:19:51 GMT"
},
{
"version": "v6",
"created": "Tue, 3 Dec 2024 10:18:20 GMT"
},
{
"version": "v7",
"created": "Sat, 5 Apr 2025 18:22:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Esfahani",
"Mohammadali Sefidi",
""
],
[
"Akbari",
"Mohammad",
""
]
] | TITLE: EnrichEvent: Enriching Social Data with Contextual Information for
Emerging Event Extraction
ABSTRACT: Social platforms have emerged as crucial venues for distributing
information and discussing social events, offering researchers an excellent
opportunity to design and implement novel event detection frameworks.
Identifying unspecified events and detecting events without prior knowledge
enables governments, aid agencies, and experts to respond swiftly and
effectively to unfolding situations, such as natural disasters, by assessing
severity and optimizing aid delivery. Social data is characterized by
misspellings, incompleteness, word sense ambiguity, and irregular language.
While discussing an ongoing event, users share different opinions and
perspectives based on their prior experience, background, and knowledge. Prior
works primarily leverage tweets' lexical and structural patterns to capture
users' opinions and views about events. In this study, we propose an end-to-end
novel framework, EnrichEvent, to identify unspecified events from streaming
social data. In addition to lexical and structural patterns, we leverage
contextual knowledge of the tweets to enrich their representation and gain a
better perspective on users' opinions about events. Compared to our baselines,
the EnrichEvent framework achieves the highest values for Consolidation outcome
with an average of 87% vs. 67% and the lowest for Discrimination outcome with
an average of 10% vs. 16%. Moreover, the Trending Data Extraction module in the
EnrichEvent framework improves efficiency by reducing Runtime by up to 50% by
identifying and discarding irrelevant tweets within message blocks, making the
framework highly scalable for processing streaming data. Our source code and
dataset are available in our official replication package.
| no_new_dataset | 0.95018 |
2309.02712 | Amir H Gandomi | Shams Forruque Ahmed, Md. Sakib Bin Alam, Maliha Kabir, Shaila Afrin,
Sabiha Jannat Rafa, Aanushka Mehjabin, Amir H. Gandomi | Unveiling the frontiers of deep learning: innovations shaping diverse
domains | 88 pages, 11 figures, 7 tables | Applied Intelligence, 55(7), 573 (2025) | 10.1007/s10489-025-06259-x | null | cs.LG cs.AI cs.NE | http://creativecommons.org/licenses/by/4.0/ | Deep learning (DL) allows computer models to learn, visualize, optimize,
refine, and predict data. To understand its present state, examining the most
recent advancements and applications of deep learning across various domains is
essential. However, prior reviews focused on DL applications in only one or two
domains. The current review thoroughly investigates the use of DL in four
different broad fields due to the abundance of relevant research literature in
these domains. This wide range of coverage provides a comprehensive and
interconnected understanding of DL's influence and opportunities, which is
lacking in other reviews. The study also discusses DL frameworks and addresses
the benefits and challenges of utilizing DL in each field, which is only
occasionally available in other reviews. DL frameworks like TensorFlow and
PyTorch make it easy to develop innovative DL applications across diverse
domains by providing model development and deployment platforms. This helps
bridge theoretical progress and practical implementation. Deep learning solves
complex problems and advances technology in many fields, demonstrating its
revolutionary potential and adaptability. CNN LSTM models with attention
mechanisms can forecast traffic with 99 percent accuracy. Fungal diseased mango
leaves can be classified with 97.13 percent accuracy by the multi layer CNN
model. However, deep learning requires rigorous data collection to analyze and
process large amounts of data because it is independent of training data. Thus,
large scale medical, research, healthcare, and environmental data compilation
are challenging, reducing deep learning effectiveness. Future research should
address data volume, privacy, domain complexity, and data quality issues in DL
datasets.
| [
{
"version": "v1",
"created": "Wed, 6 Sep 2023 04:50:39 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 01:29:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmed",
"Shams Forruque",
""
],
[
"Alam",
"Md. Sakib Bin",
""
],
[
"Kabir",
"Maliha",
""
],
[
"Afrin",
"Shaila",
""
],
[
"Rafa",
"Sabiha Jannat",
""
],
[
"Mehjabin",
"Aanushka",
""
],
[
"Gandomi",
"Amir H.",
""
]
] | TITLE: Unveiling the frontiers of deep learning: innovations shaping diverse
domains
ABSTRACT: Deep learning (DL) allows computer models to learn, visualize, optimize,
refine, and predict data. To understand its present state, examining the most
recent advancements and applications of deep learning across various domains is
essential. However, prior reviews focused on DL applications in only one or two
domains. The current review thoroughly investigates the use of DL in four
different broad fields due to the abundance of relevant research literature in
these domains. This wide range of coverage provides a comprehensive and
interconnected understanding of DL's influence and opportunities, which is
lacking in other reviews. The study also discusses DL frameworks and addresses
the benefits and challenges of utilizing DL in each field, which is only
occasionally available in other reviews. DL frameworks like TensorFlow and
PyTorch make it easy to develop innovative DL applications across diverse
domains by providing model development and deployment platforms. This helps
bridge theoretical progress and practical implementation. Deep learning solves
complex problems and advances technology in many fields, demonstrating its
revolutionary potential and adaptability. CNN LSTM models with attention
mechanisms can forecast traffic with 99 percent accuracy. Fungal diseased mango
leaves can be classified with 97.13 percent accuracy by the multi layer CNN
model. However, deep learning requires rigorous data collection to analyze and
process large amounts of data because it is independent of training data. Thus,
large scale medical, research, healthcare, and environmental data compilation
are challenging, reducing deep learning effectiveness. Future research should
address data volume, privacy, domain complexity, and data quality issues in DL
datasets.
| no_new_dataset | 0.940463 |
2309.14770 | Haotian Li | Haotian Li, Bin Yu, Yuliang Wei, Kai Wang, Richard Yi Da Xu, Bailing
Wang | KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with
Inverse Transformation | Accepted to Knowledge-Based Systems | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph completion (KGC) revolves around populating missing triples
in a knowledge graph using available information. Text-based methods, which
depend on textual descriptions of triples, often encounter difficulties when
these descriptions lack sufficient information for accurate prediction-an issue
inherent to the datasets and not easily resolved through modeling alone. To
address this and ensure data consistency, we first use large language models
(LLMs) to generate coherent descriptions, bridging the semantic gap between
queries and answers. Secondly, we utilize inverse relations to create a
symmetric graph, thereby providing augmented training samples for KGC.
Additionally, we employ the label information inherent in knowledge graphs
(KGs) to enhance the existing contrastive framework, making it fully
supervised. These efforts have led to significant performance improvements on
the WN18RR and FB15k-237 datasets. According to standard evaluation metrics,
our approach achieves a 4.2% improvement in Hit@1 on WN18RR and a 3.4%
improvement in Hit@3 on FB15k-237, demonstrating superior performance.
| [
{
"version": "v1",
"created": "Tue, 26 Sep 2023 09:03:25 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Aug 2024 13:34:24 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 03:07:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Haotian",
""
],
[
"Yu",
"Bin",
""
],
[
"Wei",
"Yuliang",
""
],
[
"Wang",
"Kai",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Wang",
"Bailing",
""
]
] | TITLE: KERMIT: Knowledge Graph Completion of Enhanced Relation Modeling with
Inverse Transformation
ABSTRACT: Knowledge graph completion (KGC) revolves around populating missing triples
in a knowledge graph using available information. Text-based methods, which
depend on textual descriptions of triples, often encounter difficulties when
these descriptions lack sufficient information for accurate prediction, an issue
inherent to the datasets and not easily resolved through modeling alone. To
address this and ensure data consistency, we first use large language models
(LLMs) to generate coherent descriptions, bridging the semantic gap between
queries and answers. Secondly, we utilize inverse relations to create a
symmetric graph, thereby providing augmented training samples for KGC.
Additionally, we employ the label information inherent in knowledge graphs
(KGs) to enhance the existing contrastive framework, making it fully
supervised. These efforts have led to significant performance improvements on
the WN18RR and FB15k-237 datasets. According to standard evaluation metrics,
our approach achieves a 4.2% improvement in Hit@1 on WN18RR and a 3.4%
improvement in Hit@3 on FB15k-237, demonstrating superior performance.
| no_new_dataset | 0.947672 |
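A tiny sketch of the inverse-relation augmentation step mentioned in the record above: every triple is mirrored with a reversed relation so the resulting graph is symmetric and yields extra training samples. The relation-naming convention is an assumption.

```python
def add_inverse_relations(triples):
    # For every (head, relation, tail), also add (tail, relation + "_inv", head).
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, f"{r}_inv", h))
    return augmented

print(add_inverse_relations([("Paris", "capital_of", "France")]))
```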
2310.08453 | Jian Wu | Jian Wu, Carol Flannagan, Ulrich Sander, and Jonas B\"argman | Modeling Lead-vehicle Kinematics For Rear-end Crash Scenario Generation | null | IEEETrans.Intell.Transp.Syst. 25 (2024) 3176-3186 | 10.1109/TITS.2024.3369097 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of virtual safety assessment as the primary method for evaluating
vehicle safety technologies has emphasized the importance of crash scenario
generation. One of the most common crash types is the rear-end crash, which
involves a lead vehicle and a following vehicle. Most studies have focused on
the following vehicle, assuming that the lead vehicle maintains a constant
acceleration/deceleration before the crash. However, there is no evidence for
this premise in the literature. This study aims to address this knowledge gap
by thoroughly analyzing and modeling the lead vehicle's behavior as a first
step in generating rear-end crash scenarios. Accordingly, the study employed a
piecewise linear model to parameterize the speed profiles of lead vehicles,
utilizing two rear-end pre-crash/near-crash datasets. These datasets were
merged and categorized into multiple sub-datasets; for each one, a multivariate
distribution was constructed to represent the corresponding parameters.
Subsequently, a synthetic dataset was generated using these distribution models
and validated by comparison with the original combined dataset. The results
highlight diverse lead-vehicle speed patterns, indicating that a more accurate
model, such as the proposed piecewise linear model, is required instead of the
conventional constant acceleration/deceleration model. Crashes generated with
the proposed models accurately match crash data across the full severity range,
surpassing existing lead-vehicle kinematics models in both severity range and
accuracy. By providing more realistic speed profiles for the lead vehicle, the
model developed in the study contributes to creating realistic rear-end crash
scenarios and reconstructing real-life crashes.
| [
{
"version": "v1",
"created": "Fri, 22 Sep 2023 10:21:17 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Oct 2023 07:16:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Jian",
""
],
[
"Flannagan",
"Carol",
""
],
[
"Sander",
"Ulrich",
""
],
[
"Bärgman",
"Jonas",
""
]
] | TITLE: Modeling Lead-vehicle Kinematics For Rear-end Crash Scenario Generation
ABSTRACT: The use of virtual safety assessment as the primary method for evaluating
vehicle safety technologies has emphasized the importance of crash scenario
generation. One of the most common crash types is the rear-end crash, which
involves a lead vehicle and a following vehicle. Most studies have focused on
the following vehicle, assuming that the lead vehicle maintains a constant
acceleration/deceleration before the crash. However, there is no evidence for
this premise in the literature. This study aims to address this knowledge gap
by thoroughly analyzing and modeling the lead vehicle's behavior as a first
step in generating rear-end crash scenarios. Accordingly, the study employed a
piecewise linear model to parameterize the speed profiles of lead vehicles,
utilizing two rear-end pre-crash/near-crash datasets. These datasets were
merged and categorized into multiple sub-datasets; for each one, a multivariate
distribution was constructed to represent the corresponding parameters.
Subsequently, a synthetic dataset was generated using these distribution models
and validated by comparison with the original combined dataset. The results
highlight diverse lead-vehicle speed patterns, indicating that a more accurate
model, such as the proposed piecewise linear model, is required instead of the
conventional constant acceleration/deceleration model. Crashes generated with
the proposed models accurately match crash data across the full severity range,
surpassing existing lead-vehicle kinematics models in both severity range and
accuracy. By providing more realistic speed profiles for the lead vehicle, the
model developed in the study contributes to creating realistic rear-end crash
scenarios and reconstructing real-life crashes.
| new_dataset | 0.594169 |
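An illustrative NumPy sketch of the recipe described above: parameterize lead-vehicle speed as a piecewise linear profile, fit a multivariate distribution over the profile parameters, and sample synthetic profiles from it. The knot layout and the toy parameter values are assumptions.

```python
import numpy as np

def speed_profile(t, knot_times, knot_speeds):
    # Piecewise linear lead-vehicle speed: linear interpolation between knots.
    return np.interp(t, knot_times, knot_speeds)

# Hypothetical fitted parameters: knot speeds (m/s) at fixed knot times,
# one row per observed pre-crash/near-crash event.
params = np.array([
    [15.0, 14.0, 8.0, 2.0],
    [20.0, 18.0, 10.0, 0.0],
    [12.0, 12.0, 6.0, 1.0],
])
mean, cov = params.mean(axis=0), np.cov(params, rowvar=False)

rng = np.random.default_rng(0)
synthetic = np.clip(rng.multivariate_normal(mean, cov), 0.0, None)   # one synthetic event
t = np.linspace(0.0, 6.0, 61)
v = speed_profile(t, knot_times=[0.0, 2.0, 4.0, 6.0], knot_speeds=synthetic)
```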
2310.11439 | Quentin Bouniot | Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau,
Oliver Struckmeier, Karol Arndt, Markus Heinonen, Ville Kyrki, Samuel Kaski | From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural
Networks with Affine Optimal Transport | Code available at https://github.com/qbouniot/AffScoreDeep | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last decade, we have witnessed the introduction of several novel deep
neural network (DNN) architectures exhibiting ever-increasing performance
across diverse tasks. Explaining the upward trend of their performance,
however, remains difficult as different DNN architectures of comparable depth
and width -- common factors associated with their expressive power -- may
exhibit a drastically different performance even when trained on the same
dataset. In this paper, we introduce the concept of the non-linearity signature
of DNN, the first theoretically sound solution for approximately measuring the
non-linearity of deep neural networks. Built upon a score derived from
closed-form optimal transport mappings, this signature provides a better
understanding of the inner workings of a wide range of DNN architectures and
learning paradigms, with a particular emphasis on the computer vision task. We
provide extensive experimental results that highlight the practical usefulness
of the proposed non-linearity signature and its potential for long-reaching
implications. The code for our work is available at
https://github.com/qbouniot/AffScoreDeep
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 17:50:22 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jun 2024 09:29:21 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jul 2024 14:39:54 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Apr 2025 16:31:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bouniot",
"Quentin",
""
],
[
"Redko",
"Ievgen",
""
],
[
"Mallasto",
"Anton",
""
],
[
"Laclau",
"Charlotte",
""
],
[
"Struckmeier",
"Oliver",
""
],
[
"Arndt",
"Karol",
""
],
[
"Heinonen",
"Markus",
""
],
[
"Kyrki",
"Ville",
""
],
[
"Kaski",
"Samuel",
""
]
] | TITLE: From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural
Networks with Affine Optimal Transport
ABSTRACT: In the last decade, we have witnessed the introduction of several novel deep
neural network (DNN) architectures exhibiting ever-increasing performance
across diverse tasks. Explaining the upward trend of their performance,
however, remains difficult as different DNN architectures of comparable depth
and width -- common factors associated with their expressive power -- may
exhibit a drastically different performance even when trained on the same
dataset. In this paper, we introduce the concept of the non-linearity signature
of DNN, the first theoretically sound solution for approximately measuring the
non-linearity of deep neural networks. Built upon a score derived from
closed-form optimal transport mappings, this signature provides a better
understanding of the inner workings of a wide range of DNN architectures and
learning paradigms, with a particular emphasis on the computer vision task. We
provide extensive experimental results that highlight the practical usefulness
of the proposed non-linearity signature and its potential for long-reaching
implications. The code for our work is available at
https://github.com/qbouniot/AffScoreDeep
| no_new_dataset | 0.946001 |
2310.14778 | Jinzheng Zhao | Jinzheng Zhao, Yong Xu, Xinyuan Qian, Davide Berghi, Peipei Wu, Meng
Cui, Jianyuan Sun, Philip J.B. Jackson and Wenwu Wang | Audio-Visual Speaker Tracking: Progress, Challenges, and Future
Directions | null | null | null | null | cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Audio-visual speaker tracking has drawn increasing attention over the past
few years due to its academic values and wide applications. Audio and visual
modalities can provide complementary information for localization and tracking.
With audio and visual information, the Bayesian-based filter and deep
learning-based methods can solve the problem of data association, audio-visual
fusion and track management. In this paper, we conduct a comprehensive overview
of audio-visual speaker tracking. To our knowledge, this is the first extensive
survey over the past five years. We introduce the family of Bayesian filters
and summarize the methods for obtaining audio-visual measurements. In addition,
the existing trackers and their performance on the AV16.3 dataset are
summarized. In the past few years, deep learning techniques have thrived, which
have also boosted the development of audio-visual speaker tracking. The influence of
deep learning techniques in terms of measurement extraction and state
estimation is also discussed. Finally, we discuss the connections between
audio-visual speaker tracking and other areas such as speech separation and
distributed speaker tracking.
| [
{
"version": "v1",
"created": "Mon, 23 Oct 2023 10:29:33 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Dec 2023 08:35:04 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Dec 2024 11:49:06 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Apr 2025 03:02:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Jinzheng",
""
],
[
"Xu",
"Yong",
""
],
[
"Qian",
"Xinyuan",
""
],
[
"Berghi",
"Davide",
""
],
[
"Wu",
"Peipei",
""
],
[
"Cui",
"Meng",
""
],
[
"Sun",
"Jianyuan",
""
],
[
"Jackson",
"Philip J. B.",
""
],
[
"Wang",
"Wenwu",
""
]
] | TITLE: Audio-Visual Speaker Tracking: Progress, Challenges, and Future
Directions
ABSTRACT: Audio-visual speaker tracking has drawn increasing attention over the past
few years due to its academic values and wide applications. Audio and visual
modalities can provide complementary information for localization and tracking.
With audio and visual information, the Bayesian-based filter and deep
learning-based methods can solve the problem of data association, audio-visual
fusion and track management. In this paper, we conduct a comprehensive overview
of audio-visual speaker tracking. To our knowledge, this is the first extensive
survey over the past five years. We introduce the family of Bayesian filters
and summarize the methods for obtaining audio-visual measurements. In addition,
the existing trackers and their performance on the AV16.3 dataset are
summarized. In the past few years, deep learning techniques have thrived, which
have also boosted the development of audio-visual speaker tracking. The influence of
deep learning techniques in terms of measurement extraction and state
estimation is also discussed. Finally, we discuss the connections between
audio-visual speaker tracking and other areas such as speech separation and
distributed speaker tracking.
| no_new_dataset | 0.94625 |
2310.18542 | Shibal Ibrahim | Shibal Ibrahim and Kayhan Behdin and Rahul Mazumder | End-to-end Feature Selection Approach for Learning Skinny Trees | Published in AISTATS 2024 | International Conference on Artificial Intelligence and Statistics
(AISTATS) 2024 | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a new optimization-based approach for feature selection in tree
ensembles, an important problem in statistics and machine learning. Popular
tree ensemble toolkits, e.g., Gradient Boosted Trees and Random Forests, support
feature selection post-training based on feature importance scores; while very
popular, they are known to have drawbacks. We propose Skinny Trees: an
end-to-end toolkit for feature selection in tree ensembles where we train a
tree ensemble while controlling the number of selected features. Our
optimization-based approach learns an ensemble of differentiable trees, and
simultaneously performs feature selection using a grouped $\ell_0$-regularizer.
We use first-order methods for optimization and present convergence guarantees
for our approach. We use a dense-to-sparse regularization scheduling scheme
that can lead to more expressive and sparser tree ensembles. On 15 synthetic
and real-world datasets, Skinny Trees can achieve $1.5\!\times\!
-~620~\!\times\!$ feature compression rates, leading up to $10\times$ faster
inference over dense trees, without any loss in performance. Skinny Trees lead
to superior feature selection compared to many existing toolkits, e.g., in terms of AUC
performance for 25\% feature budget, Skinny Trees outperforms LightGBM by
$10.2\%$ (up to $37.7\%$), and Random Forests by $3\%$ (up to $12.5\%$).
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 00:15:10 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Sep 2024 07:34:54 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 03:10:53 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ibrahim",
"Shibal",
""
],
[
"Behdin",
"Kayhan",
""
],
[
"Mazumder",
"Rahul",
""
]
] | TITLE: End-to-end Feature Selection Approach for Learning Skinny Trees
ABSTRACT: We propose a new optimization-based approach for feature selection in tree
ensembles, an important problem in statistics and machine learning. Popular
tree ensemble toolkits, e.g., Gradient Boosted Trees and Random Forests, support
feature selection post-training based on feature importance scores; while very
popular, they are known to have drawbacks. We propose Skinny Trees: an
end-to-end toolkit for feature selection in tree ensembles where we train a
tree ensemble while controlling the number of selected features. Our
optimization-based approach learns an ensemble of differentiable trees, and
simultaneously performs feature selection using a grouped $\ell_0$-regularizer.
We use first-order methods for optimization and present convergence guarantees
for our approach. We use a dense-to-sparse regularization scheduling scheme
that can lead to more expressive and sparser tree ensembles. On 15 synthetic
and real-world datasets, Skinny Trees can achieve $1.5\!\times\!
-~620~\!\times\!$ feature compression rates, leading up to $10\times$ faster
inference over dense trees, without any loss in performance. Skinny Trees lead
to superior feature selection compared to many existing toolkits, e.g., in terms of AUC
performance for 25\% feature budget, Skinny Trees outperforms LightGBM by
$10.2\%$ (up to $37.7\%$), and Random Forests by $3\%$ (up to $12.5\%$).
| no_new_dataset | 0.947235 |
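A small sketch of the dense-to-sparse regularization scheduling idea mentioned in the record above: the grouped sparsity penalty is held at zero early in training and then ramped up, so the ensemble sheds features gradually. The warm-up fraction and the linear ramp are assumptions.

```python
import numpy as np

def dense_to_sparse_schedule(step, total_steps, lam_max, warmup_frac=0.5):
    # Zero penalty during the dense warm-up phase, then a linear ramp to lam_max.
    warmup = int(warmup_frac * total_steps)
    if step < warmup:
        return 0.0
    return lam_max * (step - warmup) / max(1, total_steps - warmup)

def selected_feature_count(gates, tol=1e-6):
    # Grouped L0 term: number of features whose gate vector is not (numerically) zero.
    # gates: (n_features, n_trees) array of per-feature split weights.
    return int((np.abs(gates).max(axis=1) > tol).sum())

for step in (0, 400, 500, 750, 1000):
    print(step, dense_to_sparse_schedule(step, total_steps=1000, lam_max=0.1))
```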
2310.18651 | Ali Javidani | Ali Javidani, Mohammad Amin Sadeghi, Babak Nadjar Araabi | Patch-Wise Self-Supervised Visual Representation Learning: A
Fine-Grained Approach | 15 pages | null | 10.1007/s11760-025-04020-y | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Self-supervised visual representation learning traditionally focuses on
image-level instance discrimination. Our study introduces an innovative,
fine-grained dimension by integrating patch-level discrimination into these
methodologies. This integration allows for the simultaneous analysis of local
and global visual features, thereby enriching the quality of the learned
representations. Initially, the original images undergo spatial augmentation.
Subsequently, we employ a distinctive photometric patch-level augmentation,
where each patch is individually augmented, independent from other patches
within the same view. This approach generates a diverse training dataset with
distinct color variations in each segment. The augmented images are then
processed through a self-distillation learning framework, utilizing the Vision
Transformer (ViT) as its backbone. The proposed method minimizes the
representation distances across both image and patch levels to capture details
from macro to micro perspectives. To this end, we present a simple yet
effective patch-matching algorithm to find the corresponding patches across the
augmented views. Thanks to the efficient structure of the patch-matching
algorithm, our method reduces computational complexity compared to similar
approaches. Consequently, we achieve an advanced understanding of the model
without adding significant computational requirements. We have extensively
pretrained our method on datasets of varied scales, such as Cifar10,
ImageNet-100, and ImageNet-1K. It demonstrates superior performance over
state-of-the-art self-supervised representation learning methods in image
classification and downstream tasks, such as copy detection and image
retrieval. The implementation of our method is accessible on GitHub.
| [
{
"version": "v1",
"created": "Sat, 28 Oct 2023 09:35:30 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2023 07:52:31 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Nov 2023 07:02:59 GMT"
},
{
"version": "v4",
"created": "Sat, 16 Dec 2023 10:50:45 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Jun 2024 13:02:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Javidani",
"Ali",
""
],
[
"Sadeghi",
"Mohammad Amin",
""
],
[
"Araabi",
"Babak Nadjar",
""
]
] | TITLE: Patch-Wise Self-Supervised Visual Representation Learning: A
Fine-Grained Approach
ABSTRACT: Self-supervised visual representation learning traditionally focuses on
image-level instance discrimination. Our study introduces an innovative,
fine-grained dimension by integrating patch-level discrimination into these
methodologies. This integration allows for the simultaneous analysis of local
and global visual features, thereby enriching the quality of the learned
representations. Initially, the original images undergo spatial augmentation.
Subsequently, we employ a distinctive photometric patch-level augmentation,
where each patch is individually augmented, independent from other patches
within the same view. This approach generates a diverse training dataset with
distinct color variations in each segment. The augmented images are then
processed through a self-distillation learning framework, utilizing the Vision
Transformer (ViT) as its backbone. The proposed method minimizes the
representation distances across both image and patch levels to capture details
from macro to micro perspectives. To this end, we present a simple yet
effective patch-matching algorithm to find the corresponding patches across the
augmented views. Thanks to the efficient structure of the patch-matching
algorithm, our method reduces computational complexity compared to similar
approaches. Consequently, we achieve an advanced understanding of the model
without adding significant computational requirements. We have extensively
pretrained our method on datasets of varied scales, such as Cifar10,
ImageNet-100, and ImageNet-1K. It demonstrates superior performance over
state-of-the-art self-supervised representation learning methods in image
classification and downstream tasks, such as copy detection and image
retrieval. The implementation of our method is accessible on GitHub.
| no_new_dataset | 0.946794 |
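A NumPy sketch of the patch-level photometric augmentation described in the record above, where each patch receives its own independent brightness and contrast jitter; the patch size and jitter ranges are assumptions.

```python
import numpy as np

def per_patch_photometric_augment(img, patch=16, rng=None):
    # img: H x W x 3 float array in [0, 1]; every patch is jittered independently.
    rng = rng or np.random.default_rng()
    out = img.copy()
    H, W, _ = img.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            block = out[y:y + patch, x:x + patch]
            contrast = rng.uniform(0.8, 1.2)
            brightness = rng.uniform(-0.2, 0.2)
            out[y:y + patch, x:x + patch] = np.clip(
                (block - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
    return out

img = np.random.rand(64, 64, 3)
aug = per_patch_photometric_augment(img)
```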
2311.00635 | Andrea Giuseppe Di Francesco | Andrea Giuseppe Di Francesco, Giuliano Giampietro, Indro Spinelli and
Danilo Comminiello | GATSY: Graph Attention Network for Music Artist Similarity | Accepted at IJCNN 2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The artist similarity quest has become a crucial subject in social and
scientific contexts, driven by the desire to enhance music discovery according
to user preferences. Modern research solutions facilitate music discovery
according to user tastes. However, defining similarity among artists remains
challenging due to its inherently subjective nature, which can impact
recommendation accuracy. This paper introduces GATSY, a novel recommendation
system built upon graph attention networks and driven by a clusterized
embedding of artists. The proposed framework leverages the graph topology of
the input data to achieve outstanding performance results without relying
heavily on hand-crafted features. This flexibility allows us to include
fictitious artists within a music dataset, facilitating connections between
previously unlinked artists and enabling diverse recommendations from various
and heterogeneous sources. Experimental results prove the effectiveness of the
proposed method with respect to state-of-the-art solutions while maintaining
flexibility. The code to reproduce these experiments is available at
https://anonymous.4open.science/r/GATSY-Music_Artist_Similarity-4807/README.md.
| [
{
"version": "v1",
"created": "Wed, 1 Nov 2023 16:36:19 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 18:14:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Di Francesco",
"Andrea Giuseppe",
""
],
[
"Giampietro",
"Giuliano",
""
],
[
"Spinelli",
"Indro",
""
],
[
"Comminiello",
"Danilo",
""
]
] | TITLE: GATSY: Graph Attention Network for Music Artist Similarity
ABSTRACT: The artist similarity quest has become a crucial subject in social and
scientific contexts, driven by the desire to enhance music discovery according
to user preferences. Modern research solutions facilitate music discovery
according to user tastes. However, defining similarity among artists remains
challenging due to its inherently subjective nature, which can impact
recommendation accuracy. This paper introduces GATSY, a novel recommendation
system built upon graph attention networks and driven by a clusterized
embedding of artists. The proposed framework leverages the graph topology of
the input data to achieve outstanding performance results without relying
heavily on hand-crafted features. This flexibility allows us to include
fictitious artists within a music dataset, facilitating connections between
previously unlinked artists and enabling diverse recommendations from various
and heterogeneous sources. Experimental results prove the effectiveness of the
proposed method with respect to state-of-the-art solutions while maintaining
flexibility. The code to reproduce these experiments is available at
https://anonymous.4open.science/r/GATSY-Music_Artist_Similarity-4807/README.md.
| no_new_dataset | 0.952486 |
2311.08176 | Jingru Fu | Jingru Fu, Daniel Ferreira, \"Orjan Smedby, Rodrigo Moreno | A deformation-based morphometry framework for disentangling Alzheimer's
disease from normal aging using learned normal aging templates | 21 pages, 8 figures | null | 10.1038/s41598-025-96234-w | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Alzheimer's Disease and normal aging are both characterized by brain atrophy.
The question of whether AD-related brain atrophy represents accelerated aging
or a neurodegeneration process distinct from that in normal aging remains
unresolved. Moreover, precisely disentangling AD-related brain atrophy from
normal aging in a clinical context is complex. In this study, we propose a
deformation-based morphometry framework to estimate normal aging and
AD-specific atrophy patterns of subjects from morphological MRI scans. We first
leverage deep-learning-based methods to create age-dependent templates of
cognitively normal (CN) subjects. These templates model the normal aging
atrophy patterns in a CN population. Then, we use the learned diffeomorphic
registration to estimate the one-year normal aging pattern at the voxel level.
We register the testing image to the 60-year-old CN template in the second
step. Finally, normal aging and AD-specific scores are estimated by measuring
the alignment of this registration with the one-year normal aging pattern. The
methodology was developed and evaluated on the OASIS3 dataset with 1,014
T1-weighted MRI scans. Of these, 326 scans were from CN subjects, and 688 scans
were from individuals clinically diagnosed with AD at different stages of
clinical severity defined by clinical dementia rating (CDR) scores. The results
show that ventricles predominantly follow an accelerated normal aging pattern
in subjects with AD. In turn, hippocampi and amygdala regions were affected by
both normal aging and AD-specific factors. Interestingly, hippocampi and
amygdala regions showed more of an accelerated normal aging pattern for
subjects during the early clinical stages of the disease, while the AD-specific
score increases in later clinical stages. Our code is freely available at
https://github.com/Fjr9516/DBM_with_DL.
| [
{
"version": "v1",
"created": "Tue, 14 Nov 2023 14:04:35 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fu",
"Jingru",
""
],
[
"Ferreira",
"Daniel",
""
],
[
"Smedby",
"Örjan",
""
],
[
"Moreno",
"Rodrigo",
""
]
] | TITLE: A deformation-based morphometry framework for disentangling Alzheimer's
disease from normal aging using learned normal aging templates
ABSTRACT: Alzheimer's Disease and normal aging are both characterized by brain atrophy.
The question of whether AD-related brain atrophy represents accelerated aging
or a neurodegeneration process distinct from that in normal aging remains
unresolved. Moreover, precisely disentangling AD-related brain atrophy from
normal aging in a clinical context is complex. In this study, we propose a
deformation-based morphometry framework to estimate normal aging and
AD-specific atrophy patterns of subjects from morphological MRI scans. We first
leverage deep-learning-based methods to create age-dependent templates of
cognitively normal (CN) subjects. These templates model the normal aging
atrophy patterns in a CN population. Then, we use the learned diffeomorphic
registration to estimate the one-year normal aging pattern at the voxel level.
We register the testing image to the 60-year-old CN template in the second
step. Finally, normal aging and AD-specific scores are estimated by measuring
the alignment of this registration with the one-year normal aging pattern. The
methodology was developed and evaluated on the OASIS3 dataset with 1,014
T1-weighted MRI scans. Of these, 326 scans were from CN subjects, and 688 scans
were from individuals clinically diagnosed with AD at different stages of
clinical severity defined by clinical dementia rating (CDR) scores. The results
show that ventricles predominantly follow an accelerated normal aging pattern
in subjects with AD. In turn, hippocampi and amygdala regions were affected by
both normal aging and AD-specific factors. Interestingly, hippocampi and
amygdala regions showed more of an accelerated normal aging pattern for
subjects during the early clinical stages of the disease, while the AD-specific
score increases in later clinical stages. Our code is freely available at
https://github.com/Fjr9516/DBM_with_DL.
| no_new_dataset | 0.947721 |
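The record above scores subjects by how well their registration aligns with the one-year normal-aging pattern. A toy reading of such a decomposition is sketched below: the subject's displacement field is projected onto the normal-aging field, with the projection coefficient acting as a normal-aging score and the relative residual as a disease-specific score. This is an illustrative assumption, not the paper's exact measure.

```python
import numpy as np

def aging_and_specific_scores(u, a):
    # u: subject displacement field, a: one-year normal-aging field (same shape).
    u, a = u.ravel(), a.ravel()
    coef = float(u @ a) / float(a @ a)          # how many "years" of normal aging u contains
    residual = u - coef * a                     # what normal aging cannot explain
    return coef, float(np.linalg.norm(residual) / np.linalg.norm(a))

aging = np.random.randn(1000, 3)                          # toy normal-aging field
subject = 1.8 * aging + 0.3 * np.random.randn(1000, 3)    # accelerated aging plus extra change
print(aging_and_specific_scores(subject, aging))
```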
2312.00502 | Aristotelis Ballas | Aristotelis Ballas, Vasileios Papapanagiotou and Christos Diou | Which Augmentation Should I Use? An Empirical Investigation of
Augmentations for Self-Supervised Phonocardiogram Representation Learning | Accepted in IEEE ACCESS: https://doi.org/10.1109/ACCESS.2024.3519297 | null | 10.1109/ACCESS.2024.3519297 | null | cs.LG cs.SD eess.AS q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Despite recent advancements in deep learning, its application in real-world
medical settings, such as phonocardiogram (PCG) classification, remains
limited. A significant barrier is the lack of high-quality annotated datasets,
which hampers the development of robust, generalizable models that can perform
well on newly collected, out-of-distribution (OOD) data. Self-Supervised
Learning (SSL), in particular contrastive learning, has shown promise in mitigating the issue
of data scarcity by using unlabeled data to enhance model robustness. Even
though SSL methods have been proposed and researched in other domains, works
focusing on the impact of data augmentations on model robustness for PCG
classification are limited. In particular, while augmentations are a key
component in SSL, selecting the most suitable policy during training is highly
challenging. Improper augmentations can lead to substantial performance
degradation and even hinder a network's ability to learn meaningful
representations. Addressing this gap, our research aims to explore and evaluate
a wide range of audio-based augmentations and uncover combinations that enhance
SSL model performance in PCG classification. We conduct a comprehensive
comparative analysis across multiple datasets, assessing the impact of various
augmentations on model performance. Our findings reveal that depending on the
training distribution, augmentation choice significantly influences model
robustness, with fully-supervised models experiencing up to a 32\% drop in
effectiveness when evaluated on unseen data, while SSL models demonstrate
greater resilience, losing only 10\% or even improving in some cases. This
study also highlights the most promising and appropriate augmentations for PCG
signal processing, by calculating their effect size on training. These insights
equip researchers with valuable guidelines for developing reliable models in
PCG signal processing.
| [
{
"version": "v1",
"created": "Fri, 1 Dec 2023 11:06:00 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2024 10:32:01 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Apr 2024 11:19:12 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Dec 2024 09:53:49 GMT"
},
{
"version": "v5",
"created": "Mon, 16 Dec 2024 13:32:52 GMT"
},
{
"version": "v6",
"created": "Sat, 4 Jan 2025 17:36:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ballas",
"Aristotelis",
""
],
[
"Papapanagiotou",
"Vasileios",
""
],
[
"Diou",
"Christos",
""
]
] | TITLE: Which Augmentation Should I Use? An Empirical Investigation of
Augmentations for Self-Supervised Phonocardiogram Representation Learning
ABSTRACT: Despite recent advancements in deep learning, its application in real-world
medical settings, such as phonocardiogram (PCG) classification, remains
limited. A significant barrier is the lack of high-quality annotated datasets,
which hampers the development of robust, generalizable models that can perform
well on newly collected, out-of-distribution (OOD) data. Self-Supervised
Learning (SSL), in particular contrastive learning, has shown promise in mitigating the issue
of data scarcity by using unlabeled data to enhance model robustness. Even
though SSL methods have been proposed and researched in other domains, works
focusing on the impact of data augmentations on model robustness for PCG
classification are limited. In particular, while augmentations are a key
component in SSL, selecting the most suitable policy during training is highly
challenging. Improper augmentations can lead to substantial performance
degradation and even hinder a network's ability to learn meaningful
representations. Addressing this gap, our research aims to explore and evaluate
a wide range of audio-based augmentations and uncover combinations that enhance
SSL model performance in PCG classification. We conduct a comprehensive
comparative analysis across multiple datasets, assessing the impact of various
augmentations on model performance. Our findings reveal that depending on the
training distribution, augmentation choice significantly influences model
robustness, with fully-supervised models experiencing up to a 32\% drop in
effectiveness when evaluated on unseen data, while SSL models demonstrate
greater resilience, losing only 10\% or even improving in some cases. This
study also highlights the most promising and appropriate augmentations for PCG
signal processing, by calculating their effect size on training. These insights
equip researchers with valuable guidelines for developing reliable models in
PCG signal processing.
| no_new_dataset | 0.94474 |
2312.08034 | Mushfiqur Rahman | Mushfiqur Rahman, Runze Liu, Chau-Wai Wong, Huaiyu Dai | Individualized Deepfake Detection Exploiting Traces Due to Double
Neural-Network Operations | null | null | null | null | eess.IV cs.CR cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's digital landscape, journalists urgently require tools to verify
the authenticity of facial images and videos depicting specific public figures
before incorporating them into news stories. Existing deepfake detectors are
not optimized for this detection task when an image is associated with a
specific and identifiable individual. This study focuses on the deepfake
detection of facial images of individual public figures. We propose to
condition the proposed detector on the identity of an identified individual,
given the advantages revealed by our theory-driven simulations. While most
detectors in the literature rely on perceptible or imperceptible artifacts
present in deepfake facial images, we demonstrate that the detection
performance can be improved by exploiting the idempotency property of neural
networks. In our approach, the training process involves double neural-network
operations where we pass an authentic image through a deepfake simulating
network twice. Experimental results show that the proposed method improves the
area under the curve (AUC) from 0.92 to 0.94 and reduces its standard deviation
by 17%. To address the need for evaluating detection performance for individual
public figures, we curated and publicly released a dataset of ~32k images
featuring 45 public figures, as existing deepfake datasets do not meet this
criterion.
| [
{
"version": "v1",
"created": "Wed, 13 Dec 2023 10:21:00 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 21:05:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rahman",
"Mushfiqur",
""
],
[
"Liu",
"Runze",
""
],
[
"Wong",
"Chau-Wai",
""
],
[
"Dai",
"Huaiyu",
""
]
] | TITLE: Individualized Deepfake Detection Exploiting Traces Due to Double
Neural-Network Operations
ABSTRACT: In today's digital landscape, journalists urgently require tools to verify
the authenticity of facial images and videos depicting specific public figures
before incorporating them into news stories. Existing deepfake detectors are
not optimized for this detection task when an image is associated with a
specific and identifiable individual. This study focuses on the deepfake
detection of facial images of individual public figures. We propose to
condition the proposed detector on the identity of an identified individual,
given the advantages revealed by our theory-driven simulations. While most
detectors in the literature rely on perceptible or imperceptible artifacts
present in deepfake facial images, we demonstrate that the detection
performance can be improved by exploiting the idempotency property of neural
networks. In our approach, the training process involves double neural-network
operations where we pass an authentic image through a deepfake simulating
network twice. Experimental results show that the proposed method improves the
area under the curve (AUC) from 0.92 to 0.94 and reduces its standard deviation
by 17%. To address the need for evaluating detection performance for individual
public figures, we curated and publicly released a dataset of ~32k images
featuring 45 public figures, as existing deepfake datasets do not meet this
criterion.
| new_dataset | 0.960063 |
2312.11952 | Collin Leiber | Collin Leiber and Dominik Mautz and Claudia Plant and Christian B\"ohm | Automatic Parameter Selection for Non-Redundant Clustering | null | Proceedings of the 2022 SIAM International Conference on Data
Mining (SDM) (pp. 226-234). Society for Industrial and Applied Mathematics | 10.1137/1.9781611977172.26 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-dimensional datasets often contain multiple meaningful clusterings in
different subspaces. For example, objects can be clustered either by color,
weight, or size, revealing different interpretations of the given dataset. A
variety of approaches are able to identify such non-redundant clusterings.
However, most of these methods require the user to specify the expected number
of subspaces and clusters for each subspace. Stating these values is a
non-trivial problem and usually requires detailed knowledge of the input
dataset. In this paper, we propose a framework that utilizes the Minimum
Description Length Principle (MDL) to detect the number of subspaces and
clusters per subspace automatically. We describe an efficient procedure that
greedily searches the parameter space by splitting and merging subspaces and
clusters within subspaces. Additionally, an encoding strategy is introduced
that allows us to detect outliers in each subspace. Extensive experiments show
that our approach is highly competitive to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 19 Dec 2023 08:53:00 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 07:13:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Leiber",
"Collin",
""
],
[
"Mautz",
"Dominik",
""
],
[
"Plant",
"Claudia",
""
],
[
"Böhm",
"Christian",
""
]
] | TITLE: Automatic Parameter Selection for Non-Redundant Clustering
ABSTRACT: High-dimensional datasets often contain multiple meaningful clusterings in
different subspaces. For example, objects can be clustered either by color,
weight, or size, revealing different interpretations of the given dataset. A
variety of approaches are able to identify such non-redundant clusterings.
However, most of these methods require the user to specify the expected number
of subspaces and clusters for each subspace. Stating these values is a
non-trivial problem and usually requires detailed knowledge of the input
dataset. In this paper, we propose a framework that utilizes the Minimum
Description Length Principle (MDL) to detect the number of subspaces and
clusters per subspace automatically. We describe an efficient procedure that
greedily searches the parameter space by splitting and merging subspaces and
clusters within subspaces. Additionally, an encoding strategy is introduced
that allows us to detect outliers in each subspace. Extensive experiments show
that our approach is highly competitive to state-of-the-art methods.
| no_new_dataset | 0.950227 |
2401.07702 | Christopher Davis | Christopher Davis, Andrew Caines, {\O}istein Andersen, Shiva
Taslimipoor, Helen Yannakoudakis, Zheng Yuan, Christopher Bryant, Marek Rei,
Paula Buttery | Prompting open-source and commercial language models for grammatical
error correction of English learner text | 8 pages with appendices; accepted to ACL Findings 2024 | null | 10.18653/v1/2024.findings-acl.711 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Thanks to recent advances in generative AI, we are able to prompt large
language models (LLMs) to produce texts which are fluent and grammatical. In
addition, it has been shown that we can elicit attempts at grammatical error
correction (GEC) from LLMs when prompted with ungrammatical input sentences. We
evaluate how well LLMs can perform at GEC by measuring their performance on
established benchmark datasets. We go beyond previous studies, which only
examined GPT* models on a selection of English GEC datasets, by evaluating
seven open-source and three commercial LLMs on four established GEC benchmarks.
We investigate model performance and report results against individual error
types. Our results indicate that LLMs do not always outperform supervised
English GEC models except in specific contexts -- namely commercial LLMs on
benchmarks annotated with fluency corrections as opposed to minimal edits. We
find that several open-source models outperform commercial ones on minimal edit
benchmarks, and that in some settings zero-shot prompting is just as
competitive as few-shot prompting.
| [
{
"version": "v1",
"created": "Mon, 15 Jan 2024 14:19:47 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 11:25:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Davis",
"Christopher",
""
],
[
"Caines",
"Andrew",
""
],
[
"Andersen",
"Øistein",
""
],
[
"Taslimipoor",
"Shiva",
""
],
[
"Yannakoudakis",
"Helen",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Bryant",
"Christopher",
""
],
[
"Rei",
"Marek",
""
],
[
"Buttery",
"Paula",
""
]
] | TITLE: Prompting open-source and commercial language models for grammatical
error correction of English learner text
ABSTRACT: Thanks to recent advances in generative AI, we are able to prompt large
language models (LLMs) to produce texts which are fluent and grammatical. In
addition, it has been shown that we can elicit attempts at grammatical error
correction (GEC) from LLMs when prompted with ungrammatical input sentences. We
evaluate how well LLMs can perform at GEC by measuring their performance on
established benchmark datasets. We go beyond previous studies, which only
examined GPT* models on a selection of English GEC datasets, by evaluating
seven open-source and three commercial LLMs on four established GEC benchmarks.
We investigate model performance and report results against individual error
types. Our results indicate that LLMs do not always outperform supervised
English GEC models except in specific contexts -- namely commercial LLMs on
benchmarks annotated with fluency corrections as opposed to minimal edits. We
find that several open-source models outperform commercial ones on minimal edit
benchmarks, and that in some settings zero-shot prompting is just as
competitive as few-shot prompting.
| no_new_dataset | 0.932883 |
2401.09234 | Alfredo Go\~ni Sarriguren | Alfredo Go\~ni Sarriguren | SARRIGUREN: a polynomial-time complete algorithm for random $k$-SAT with
relatively dense clauses | 24 pages, 2 figures, 8 tables, algorithms, results and data in
https://goo.su/zV3Pt6E | null | null | null | cs.DS cs.CC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | SARRIGUREN, a new complete algorithm for SAT based on counting clauses (which
is valid also for Unique-SAT and #SAT) is described, analyzed and tested.
Although existing complete algorithms for SAT perform more slowly on clauses with
many literals, that is an advantage for SARRIGUREN, because the more literals
the clauses contain, the bigger the probability of overlapping among clauses,
a property that makes the clause counting process more efficient. Actually, it
provides a $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of
$n$ variables and $m$ relatively dense clauses, where that density level is
relative to the number of variables $n$, that is, clauses are relatively dense
when $k\geq7\sqrt{n}$. Although theoretically there could be worst-cases with
exponential complexity, the probability of those cases to happen in random
$k$-SAT with relatively dense clauses is practically zero. The algorithm has
been empirically tested and that polynomial time complexity is also maintained for
$k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density
could, for example, be only 0.049 when working with $n=20000$ variables and
$k=989$ literals. In addition, two more complementary algorithms are presented
that provide the solutions to $k$-SAT instances and valuable
information about the number of solutions for each literal. Although this algorithm
does not solve the NP=P problem (it is not a polynomial algorithm for 3-SAT),
it broadens the knowledge about that subject, because $k$-SAT with $k>3$ and
dense clauses is not harder than 3-SAT. Moreover, the Python implementation of
the algorithms, and all the input datasets and obtained results in the
experiments are made available.
| [
{
"version": "v1",
"created": "Wed, 17 Jan 2024 14:23:55 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:42:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sarriguren",
"Alfredo Goñi",
""
]
] | TITLE: SARRIGUREN: a polynomial-time complete algorithm for random $k$-SAT with
relatively dense clauses
ABSTRACT: SARRIGUREN, a new complete algorithm for SAT based on counting clauses (which
is valid also for Unique-SAT and #SAT) is described, analyzed and tested.
Although existing complete algorithms for SAT perform more slowly on clauses with
many literals, that is an advantage for SARRIGUREN, because the more literals
the clauses contain, the bigger the probability of overlapping among clauses,
a property that makes the clause counting process more efficient. Actually, it
provides a $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of
$n$ variables and $m$ relatively dense clauses, where that density level is
relative to the number of variables $n$, that is, clauses are relatively dense
when $k\geq7\sqrt{n}$. Although theoretically there could be worst-cases with
exponential complexity, the probability of those cases to happen in random
$k$-SAT with relatively dense clauses is practically zero. The algorithm has
been empirically tested and that polynomial time complexity is also maintained for
$k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density
could, for example, be only 0.049 when working with $n=20000$ variables and
$k=989$ literals. In addition, two more complementary algorithms are presented
that provide the solutions to $k$-SAT instances and valuable
information about the number of solutions for each literal. Although this algorithm
does not solve the NP=P problem (it is not a polynomial algorithm for 3-SAT),
it broadens the knowledge about that subject, because $k$-SAT with $k>3$ and
dense clauses is not harder than 3-SAT. Moreover, the Python implementation of
the algorithms, and all the input datasets and obtained results in the
experiments are made available.
| no_new_dataset | 0.937812 |
2402.02085 | Long Ma | Long Ma, Zhiyuan Yan, Qinglang Guo, Yong Liao, Haiyang Yu, Pengyuan
Zhou | Detecting AI-Generated Video via Frame Consistency | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The escalating quality of video generated by advanced video generation
methods results in new security challenges, while there have been few relevant
research efforts: 1) There is no open-source dataset for generated video
detection, 2) No generated video detection method has been proposed so far. To
this end, we propose an open-source dataset and a detection method for
generated video for the first time. First, we propose a scalable dataset
consisting of 964 prompts, covering various forgery targets, scenes, behaviors,
and actions, as well as various generation models with different architectures
and generation methods, including the most popular commercial models like
OpenAI's Sora and Google's Veo. Second, we found via probing experiments that
spatial artifact-based detectors lack generalizability. Hence, we propose a
simple yet effective \textbf{de}tection model based on \textbf{f}rame
\textbf{co}nsistency (\textbf{DeCoF}), which focuses on temporal artifacts by
eliminating the impact of spatial artifacts during feature learning. Extensive
experiments demonstrate the efficacy of DeCoF in detecting videos generated by
unseen video generation models and confirm its powerful generalizability across
several commercially proprietary models. Our code and dataset will be released
at https://github.com/wuwuwuyue/DeCoF.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2024 08:52:06 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Feb 2024 02:51:00 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jun 2024 11:00:25 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Jun 2024 03:32:50 GMT"
},
{
"version": "v5",
"created": "Sat, 13 Jul 2024 18:20:32 GMT"
},
{
"version": "v6",
"created": "Tue, 20 Aug 2024 07:17:31 GMT"
},
{
"version": "v7",
"created": "Mon, 7 Apr 2025 02:01:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Long",
""
],
[
"Yan",
"Zhiyuan",
""
],
[
"Guo",
"Qinglang",
""
],
[
"Liao",
"Yong",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Zhou",
"Pengyuan",
""
]
] | TITLE: Detecting AI-Generated Video via Frame Consistency
ABSTRACT: The escalating quality of video generated by advanced video generation
methods results in new security challenges, while there have been few relevant
research efforts: 1) There is no open-source dataset for generated video
detection, 2) No generated video detection method has been proposed so far. To
this end, we propose an open-source dataset and a detection method for
generated video for the first time. First, we propose a scalable dataset
consisting of 964 prompts, covering various forgery targets, scenes, behaviors,
and actions, as well as various generation models with different architectures
and generation methods, including the most popular commercial models like
OpenAI's Sora and Google's Veo. Second, we found via probing experiments that
spatial artifact-based detectors lack generalizability. Hence, we propose a
simple yet effective \textbf{de}tection model based on \textbf{f}rame
\textbf{co}nsistency (\textbf{DeCoF}), which focuses on temporal artifacts by
eliminating the impact of spatial artifacts during feature learning. Extensive
experiments demonstrate the efficacy of DeCoF in detecting videos generated by
unseen video generation models and confirm its powerful generalizability across
several commercially proprietary models. Our code and dataset will be released
at https://github.com/wuwuwuyue/DeCoF.
| new_dataset | 0.952397 |
2402.05675 | Tong Chen | Tong Chen, Raghavendra Selvan | Is Adversarial Training with Compressed Datasets Effective? | 22 pages, 10 figures, 3 tables, accepted at Scandinavian Conference
on Image Analysis 2025 (SCIA 2025) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Dataset Condensation (DC) refers to the recent class of dataset compression
methods that generate a smaller, synthetic, dataset from a larger dataset. This
synthetic dataset aims to retain the essential information of the original
dataset, enabling models trained on it to achieve performance levels comparable
to those trained on the full dataset. Most current DC methods have mainly been
concerned with achieving high test performance with a limited data budget, and
have not directly addressed the question of adversarial robustness. In this
work, we investigate the impact of adversarial robustness on models trained
with compressed datasets. We show that the compressed datasets obtained from DC
methods are not effective in transferring adversarial robustness to models. As
a solution to improve dataset compression efficiency and adversarial robustness
simultaneously, we present a robustness-aware dataset compression method based
on finding the Minimal Finite Covering (MFC) of the dataset. The proposed
method is (1) provably robust by minimizing the generalized adversarial loss,
(2) more effective than DC methods when applying adversarial training over MFC,
(3) obtained by a one-time computation and is applicable for any model.
| [
{
"version": "v1",
"created": "Thu, 8 Feb 2024 13:53:11 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:31:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Tong",
""
],
[
"Selvan",
"Raghavendra",
""
]
] | TITLE: Is Adversarial Training with Compressed Datasets Effective?
ABSTRACT: Dataset Condensation (DC) refers to the recent class of dataset compression
methods that generate a smaller, synthetic, dataset from a larger dataset. This
synthetic dataset aims to retain the essential information of the original
dataset, enabling models trained on it to achieve performance levels comparable
to those trained on the full dataset. Most current DC methods have mainly been
concerned with achieving high test performance with a limited data budget, and
have not directly addressed the question of adversarial robustness. In this
work, we investigate the impact of adversarial robustness on models trained
with compressed datasets. We show that the compressed datasets obtained from DC
methods are not effective in transferring adversarial robustness to models. As
a solution to improve dataset compression efficiency and adversarial robustness
simultaneously, we present a robustness-aware dataset compression method based
on finding the Minimal Finite Covering (MFC) of the dataset. The proposed
method is (1) provably robust by minimizing the generalized adversarial loss,
(2) more effective than DC methods when applying adversarial training over MFC,
(3) obtained by a one-time computation and is applicable for any model.
| no_new_dataset | 0.9434 |
2402.09081 | Dan Garber | Dan Garber, Atara Kaplan | Low-Rank Extragradient Methods for Scalable Semidefinite Optimization | This version corrects an error in the previous version, as well as in
the short version published in \textit{Operations Research Letters}
\cite{garber2025low}: while in those versions we reported $\mathcal{O}(1/T)$
rates for the \textbf{best iterate}, in this corrected version these rates
hold only w.r.t. the \textbf{average iterate} | null | null | null | math.OC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider several classes of highly important semidefinite optimization
problems that involve both a convex objective function (smooth or nonsmooth)
and additional linear or nonlinear smooth and convex constraints, which are
ubiquitous in statistics, machine learning, combinatorial optimization, and
other domains. We focus on high-dimensional and plausible settings in which the
problem admits a low-rank solution which also satisfies a low-rank
complementarity condition. We provide several theoretical results proving that,
under these circumstances, the well-known Extragradient method, when
initialized in the proximity of an optimal primal-dual solution, converges to a
solution of the constrained optimization problem with its standard convergence
rates guarantees, using only low-rank singular value decompositions (SVD) to
project onto the positive semidefinite cone, as opposed to
computationally-prohibitive full-rank SVDs required in worst-case. Our approach
is supported by numerical experiments conducted with a dataset of Max-Cut
instances.
| [
{
"version": "v1",
"created": "Wed, 14 Feb 2024 10:48:00 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 09:36:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Garber",
"Dan",
""
],
[
"Kaplan",
"Atara",
""
]
] | TITLE: Low-Rank Extragradient Methods for Scalable Semidefinite Optimization
ABSTRACT: We consider several classes of highly important semidefinite optimization
problems that involve both a convex objective function (smooth or nonsmooth)
and additional linear or nonlinear smooth and convex constraints, which are
ubiquitous in statistics, machine learning, combinatorial optimization, and
other domains. We focus on high-dimensional and plausible settings in which the
problem admits a low-rank solution which also satisfies a low-rank
complementarity condition. We provide several theoretical results proving that,
under these circumstances, the well-known Extragradient method, when
initialized in the proximity of an optimal primal-dual solution, converges to a
solution of the constrained optimization problem with its standard convergence
rates guarantees, using only low-rank singular value decompositions (SVD) to
project onto the positive semidefinite cone, as opposed to
computationally-prohibitive full-rank SVDs required in worst-case. Our approach
is supported by numerical experiments conducted with a dataset of Max-Cut
instances.
| no_new_dataset | 0.942771 |
2402.14802 | Andrea Giuseppe Di Francesco | Andrea Giuseppe Di Francesco, Francesco Caso, Maria Sofia Bucarelli
and Fabrizio Silvestri | Link Prediction with Physics-Inspired Graph Neural Networks | Accepted at IJCNN 2025 | null | null | null | cs.LG cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The message-passing mechanism underlying Graph Neural Networks (GNNs) is not
naturally suited for heterophilic datasets, where adjacent nodes often have
different labels. Most solutions to this problem remain confined to the task of
node classification. In this article, we focus on the valuable task of link
prediction under heterophily, an interesting problem for recommendation
systems, social network analysis, and other applications. GNNs like GRAFF have
improved node classification under heterophily by incorporating physics biases
in the architecture. Similarly, we propose GRAFF-LP, an extension of GRAFF for
link prediction. We show that GRAFF-LP effectively discriminates existing from
non-existing edges by learning implicitly to separate the edge gradients. Based
on this information, we propose a new readout function inspired by physics.
Remarkably, this new function not only enhances the performance of GRAFF-LP but
also improves that of other baseline models, leading us to reconsider how every
link prediction experiment has been conducted so far. Finally, we provide
evidence that even simple GNNs did not experience greater difficulty in
predicting heterophilic links compared to homophilic ones. This leads us to
believe in the necessity for heterophily measures specifically tailored for
link prediction, distinct from those used in node classification. The code for
reproducing our experiments is available at this URL
https://anonymous.4open.science/r/Link_Prediction_with_PIGNN_IJCNN-F03F/.
| [
{
"version": "v1",
"created": "Thu, 22 Feb 2024 18:56:31 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 18:19:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Di Francesco",
"Andrea Giuseppe",
""
],
[
"Caso",
"Francesco",
""
],
[
"Bucarelli",
"Maria Sofia",
""
],
[
"Silvestri",
"Fabrizio",
""
]
] | TITLE: Link Prediction with Physics-Inspired Graph Neural Networks
ABSTRACT: The message-passing mechanism underlying Graph Neural Networks (GNNs) is not
naturally suited for heterophilic datasets, where adjacent nodes often have
different labels. Most solutions to this problem remain confined to the task of
node classification. In this article, we focus on the valuable task of link
prediction under heterophily, an interesting problem for recommendation
systems, social network analysis, and other applications. GNNs like GRAFF have
improved node classification under heterophily by incorporating physics biases
in the architecture. Similarly, we propose GRAFF-LP, an extension of GRAFF for
link prediction. We show that GRAFF-LP effectively discriminates existing from
non-existing edges by learning implicitly to separate the edge gradients. Based
on this information, we propose a new readout function inspired by physics.
Remarkably, this new function not only enhances the performance of GRAFF-LP but
also improves that of other baseline models, leading us to reconsider how every
link prediction experiment has been conducted so far. Finally, we provide
evidence that even simple GNNs did not experience greater difficulty in
predicting heterophilic links compared to homophilic ones. This leads us to
believe in the necessity for heterophily measures specifically tailored for
link prediction, distinct from those used in node classification. The code for
reproducing our experiments is available at this URL
https://anonymous.4open.science/r/Link_Prediction_with_PIGNN_IJCNN-F03F/.
| no_new_dataset | 0.948537 |
2403.08462 | Andrea Nini | Andrea Nini, Oren Halvani, Lukas Graner, Valerio Gherardi, Shunichi
Ishihara | Grammar as a Behavioral Biometric: Using Cognitively Motivated Grammar
Models for Authorship Verification | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Authorship Verification (AV) is a key area of research in digital text
forensics, which addresses the fundamental question of whether two texts were
written by the same person. Numerous computational approaches have been
proposed over the last two decades in an attempt to address this challenge.
However, existing AV methods often suffer from high complexity, low
explainability and especially from a lack of clear scientific justification. We
propose a simpler method based on modeling the grammar of an author following
Cognitive Linguistics principles. These models are used to calculate
$\lambda_G$ (LambdaG): the ratio of the likelihoods of a document given the
candidate's grammar versus given a reference population's grammar. Our
empirical evaluation, conducted on twelve datasets and compared against seven
baseline methods, demonstrates that LambdaG achieves superior performance,
including against several neural network-based AV methods. LambdaG is also
robust to small variations in the composition of the reference population and
provides interpretable visualizations, enhancing its explainability. We argue
that its effectiveness is due to the method's compatibility with Cognitive
Linguistics theories predicting that a person's grammar is a behavioral
biometric.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 12:25:47 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 11:12:57 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nini",
"Andrea",
""
],
[
"Halvani",
"Oren",
""
],
[
"Graner",
"Lukas",
""
],
[
"Gherardi",
"Valerio",
""
],
[
"Ishihara",
"Shunichi",
""
]
] | TITLE: Grammar as a Behavioral Biometric: Using Cognitively Motivated Grammar
Models for Authorship Verification
ABSTRACT: Authorship Verification (AV) is a key area of research in digital text
forensics, which addresses the fundamental question of whether two texts were
written by the same person. Numerous computational approaches have been
proposed over the last two decades in an attempt to address this challenge.
However, existing AV methods often suffer from high complexity, low
explainability and especially from a lack of clear scientific justification. We
propose a simpler method based on modeling the grammar of an author following
Cognitive Linguistics principles. These models are used to calculate
$\lambda_G$ (LambdaG): the ratio of the likelihoods of a document given the
candidate's grammar versus given a reference population's grammar. Our
empirical evaluation, conducted on twelve datasets and compared against seven
baseline methods, demonstrates that LambdaG achieves superior performance,
including against several neural network-based AV methods. LambdaG is also
robust to small variations in the composition of the reference population and
provides interpretable visualizations, enhancing its explainability. We argue
that its effectiveness is due to the method's compatibility with Cognitive
Linguistics theories predicting that a person's grammar is a behavioral
biometric.
| no_new_dataset | 0.9463 |
2403.10045 | Eric Xue | Eric Xue, Yijiang Li, Haoyang Liu, Peiran Wang, Yifan Shen, Haohan
Wang | Towards Adversarially Robust Dataset Distillation by Curvature
Regularization | AAAI 2025 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset distillation (DD) allows datasets to be distilled to fractions of
their original size while preserving the rich distributional information, so
that models trained on the distilled datasets can achieve a comparable accuracy
while saving significant computational loads. Recent research in this area has
been focusing on improving the accuracy of models trained on distilled
datasets. In this paper, we aim to explore a new perspective of DD. We study
how to embed adversarial robustness in distilled datasets, so that models
trained on these datasets maintain the high accuracy and meanwhile acquire
better adversarial robustness. We propose a new method that achieves this goal
by incorporating curvature regularization into the distillation process with
much less computational overhead than standard adversarial training. Extensive
empirical experiments suggest that our method not only outperforms standard
adversarial training on both accuracy and robustness with less computation
overhead but is also capable of generating robust distilled datasets that can
withstand various adversarial attacks. Our implementation is available at:
https://github.com/yumozi/GUARD.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 06:31:03 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 21:39:24 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Mar 2025 21:23:30 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Apr 2025 20:27:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xue",
"Eric",
""
],
[
"Li",
"Yijiang",
""
],
[
"Liu",
"Haoyang",
""
],
[
"Wang",
"Peiran",
""
],
[
"Shen",
"Yifan",
""
],
[
"Wang",
"Haohan",
""
]
] | TITLE: Towards Adversarially Robust Dataset Distillation by Curvature
Regularization
ABSTRACT: Dataset distillation (DD) allows datasets to be distilled to fractions of
their original size while preserving the rich distributional information, so
that models trained on the distilled datasets can achieve a comparable accuracy
while saving significant computational loads. Recent research in this area has
been focusing on improving the accuracy of models trained on distilled
datasets. In this paper, we aim to explore a new perspective of DD. We study
how to embed adversarial robustness in distilled datasets, so that models
trained on these datasets maintain the high accuracy and meanwhile acquire
better adversarial robustness. We propose a new method that achieves this goal
by incorporating curvature regularization into the distillation process with
much less computational overhead than standard adversarial training. Extensive
empirical experiments suggest that our method not only outperforms standard
adversarial training on both accuracy and robustness with less computation
overhead but is also capable of generating robust distilled datasets that can
withstand various adversarial attacks. Our implementation is available at:
https://github.com/yumozi/GUARD.
| no_new_dataset | 0.949576 |
2403.12529 | Brian Godwin Lim | Brian Godwin Lim, Galvin Brice Sy Lim, Renzo Roel Tan, Kazushi Ikeda | Contextualized Messages Boost Graph Representations | Published in Transactions on Machine Learning Research | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph neural networks (GNNs) have gained significant attention in recent
years for their ability to process data that may be represented as graphs. This
has prompted several studies to explore their representational capability based
on the graph isomorphism task. Notably, these works inherently assume a
countable node feature representation, potentially limiting their
applicability. Interestingly, only a few studies consider GNNs with uncountable node
feature representations. In this paper, a new perspective on the representational
capability of GNNs is investigated across all
levels$\unicode{x2014}$node-level, neighborhood-level, and
graph-level$\unicode{x2014}$when the space of node feature representation is
uncountable. Specifically, the injective and metric requirements of previous
works are softly relaxed by employing a pseudometric distance on the space of
input to create a soft-injective function such that distinct inputs may produce
similar outputs if and only if the pseudometric deems the inputs to be
sufficiently similar on some representation. As a consequence, a simple and
computationally efficient soft-isomorphic relational graph convolution network
(SIR-GCN) that emphasizes the contextualized transformation of neighborhood
feature representations via anisotropic and dynamic message functions is
proposed. Furthermore, a mathematical discussion on the relationship between
SIR-GCN and key GNNs in literature is laid out to put the contribution into
context, establishing SIR-GCN as a generalization of classical GNN
methodologies. To close, experiments on synthetic and benchmark datasets
demonstrate the relative superiority of SIR-GCN, outperforming comparable
models in node and graph property prediction tasks.
| [
{
"version": "v1",
"created": "Tue, 19 Mar 2024 08:05:49 GMT"
},
{
"version": "v2",
"created": "Wed, 22 May 2024 09:02:33 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Sep 2024 12:56:50 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 11:27:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lim",
"Brian Godwin",
""
],
[
"Lim",
"Galvin Brice Sy",
""
],
[
"Tan",
"Renzo Roel",
""
],
[
"Ikeda",
"Kazushi",
""
]
] | TITLE: Contextualized Messages Boost Graph Representations
ABSTRACT: Graph neural networks (GNNs) have gained significant attention in recent
years for their ability to process data that may be represented as graphs. This
has prompted several studies to explore their representational capability based
on the graph isomorphism task. Notably, these works inherently assume a
countable node feature representation, potentially limiting their
applicability. Interestingly, only a few studies consider GNNs with uncountable node
feature representations. In this paper, a new perspective on the representational
capability of GNNs is investigated across all
levels$\unicode{x2014}$node-level, neighborhood-level, and
graph-level$\unicode{x2014}$when the space of node feature representation is
uncountable. Specifically, the injective and metric requirements of previous
works are softly relaxed by employing a pseudometric distance on the space of
input to create a soft-injective function such that distinct inputs may produce
similar outputs if and only if the pseudometric deems the inputs to be
sufficiently similar on some representation. As a consequence, a simple and
computationally efficient soft-isomorphic relational graph convolution network
(SIR-GCN) that emphasizes the contextualized transformation of neighborhood
feature representations via anisotropic and dynamic message functions is
proposed. Furthermore, a mathematical discussion on the relationship between
SIR-GCN and key GNNs in literature is laid out to put the contribution into
context, establishing SIR-GCN as a generalization of classical GNN
methodologies. To close, experiments on synthetic and benchmark datasets
demonstrate the relative superiority of SIR-GCN, outperforming comparable
models in node and graph property prediction tasks.
| no_new_dataset | 0.949949 |
2403.15304 | Yahya Badran | Yahya Badran, Christine Preisach | Addressing Label Leakage in Knowledge Tracing Models | null | Proceedings of the 17th International Conference on Computer
Supported Education (CSEDU) - Volume 2, 2025, pp. 85-95 | 10.5220/0013275200003932 | null | cs.CY cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Tracing (KT) is concerned with predicting students' future
performance on learning items in intelligent tutoring systems. Learning items
are tagged with skill labels called knowledge concepts (KCs). Many KT models
expand the sequence of item-student interactions into KC-student interactions
by replacing learning items with their constituting KCs. This approach
addresses the issue of sparse item-student interactions and minimises the
number of model parameters. However, we identified a label leakage problem with
this approach. The model's ability to learn correlations between KCs belonging
to the same item can result in the leakage of ground truth labels, which leads
to decreased performance, particularly on datasets with a high number of KCs
per item.
In this paper, we present methods to prevent label leakage in knowledge
tracing (KT) models. Our model variants that utilize these methods consistently
outperform their original counterparts. This further underscores the impact of
label leakage on model performance. Additionally, these methods enhance the
overall performance of KT models, with one model variant surpassing all tested
baselines on different benchmarks. Notably, our methods are versatile and can
be applied to a wide range of KT models.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 15:54:30 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Apr 2024 16:39:54 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 15:00:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Badran",
"Yahya",
""
],
[
"Preisach",
"Christine",
""
]
] | TITLE: Addressing Label Leakage in Knowledge Tracing Models
ABSTRACT: Knowledge Tracing (KT) is concerned with predicting students' future
performance on learning items in intelligent tutoring systems. Learning items
are tagged with skill labels called knowledge concepts (KCs). Many KT models
expand the sequence of item-student interactions into KC-student interactions
by replacing learning items with their constituting KCs. This approach
addresses the issue of sparse item-student interactions and minimises the
number of model parameters. However, we identified a label leakage problem with
this approach. The model's ability to learn correlations between KCs belonging
to the same item can result in the leakage of ground truth labels, which leads
to decreased performance, particularly on datasets with a high number of KCs
per item.
In this paper, we present methods to prevent label leakage in knowledge
tracing (KT) models. Our model variants that utilize these methods consistently
outperform their original counterparts. This further underscores the impact of
label leakage on model performance. Additionally, these methods enhance the
overall performance of KT models, with one model variant surpassing all tested
baselines on different benchmarks. Notably, our methods are versatile and can
be applied to a wide range of KT models.
| no_new_dataset | 0.947039 |
2404.05014 | Jinfa Huang | Shenghai Yuan, Jinfa Huang, Yujun Shi, Yongqi Xu, Ruijie Zhu, Bin Lin,
Xinhua Cheng, Li Yuan, Jiebo Luo | MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators | TPAMI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Text-to-Video generation (T2V) have achieved remarkable
success in synthesizing high-quality general videos from textual descriptions.
A largely overlooked problem in T2V is that existing models have not adequately
encoded physical knowledge of the real world, and thus generated videos tend to
have limited motion and poor variations. In this paper, we propose
\textbf{MagicTime}, a metamorphic time-lapse video generation model, which
learns real-world physics knowledge from time-lapse videos and implements
metamorphic generation. First, we design a MagicAdapter scheme to decouple
spatial and temporal training, encode more physical knowledge from metamorphic
videos, and transform pre-trained T2V models to generate metamorphic videos.
Second, we introduce a Dynamic Frames Extraction strategy to adapt to
metamorphic time-lapse videos, which have a wider variation range and cover
dramatic object metamorphic processes, thus embodying more physical knowledge
than general videos. Finally, we introduce a Magic Text-Encoder to improve the
understanding of metamorphic video prompts. Furthermore, we create a time-lapse
video-text dataset called \textbf{ChronoMagic}, specifically curated to unlock
the metamorphic video generation ability. Extensive experiments demonstrate the
superiority and effectiveness of MagicTime for generating high-quality and
dynamic metamorphic videos, suggesting time-lapse video generation is a
promising path toward building metamorphic simulators of the physical world.
Code: https://github.com/PKU-YuanGroup/MagicTime
| [
{
"version": "v1",
"created": "Sun, 7 Apr 2024 16:49:07 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 03:43:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yuan",
"Shenghai",
""
],
[
"Huang",
"Jinfa",
""
],
[
"Shi",
"Yujun",
""
],
[
"Xu",
"Yongqi",
""
],
[
"Zhu",
"Ruijie",
""
],
[
"Lin",
"Bin",
""
],
[
"Cheng",
"Xinhua",
""
],
[
"Yuan",
"Li",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
ABSTRACT: Recent advances in Text-to-Video generation (T2V) have achieved remarkable
success in synthesizing high-quality general videos from textual descriptions.
A largely overlooked problem in T2V is that existing models have not adequately
encoded physical knowledge of the real world, thus generated videos tend to
have limited motion and poor variations. In this paper, we propose
\textbf{MagicTime}, a metamorphic time-lapse video generation model, which
learns real-world physics knowledge from time-lapse videos and implements
metamorphic generation. First, we design a MagicAdapter scheme to decouple
spatial and temporal training, encode more physical knowledge from metamorphic
videos, and transform pre-trained T2V models to generate metamorphic videos.
Second, we introduce a Dynamic Frames Extraction strategy to adapt to
metamorphic time-lapse videos, which have a wider variation range and cover
dramatic object metamorphic processes, thus embodying more physical knowledge
than general videos. Finally, we introduce a Magic Text-Encoder to improve the
understanding of metamorphic video prompts. Furthermore, we create a time-lapse
video-text dataset called \textbf{ChronoMagic}, specifically curated to unlock
the metamorphic video generation ability. Extensive experiments demonstrate the
superiority and effectiveness of MagicTime for generating high-quality and
dynamic metamorphic videos, suggesting time-lapse video generation is a
promising path toward building metamorphic simulators of the physical world.
Code: https://github.com/PKU-YuanGroup/MagicTime
| new_dataset | 0.954393 |
2404.09654 | Junran Wu | Jiaqi Zhu, Shaofeng Cai, Fang Deng, Beng Chin Ooi, Junran Wu | Do LLMs Understand Visual Anomalies? Uncovering LLM's Capabilities in
Zero-shot Anomaly Detection | Accepted by MM'24 (Oral) | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large vision-language models (LVLMs) are markedly proficient in deriving
visual representations guided by natural language. Recent explorations have
utilized LVLMs to tackle zero-shot visual anomaly detection (VAD) challenges by
pairing images with textual descriptions indicative of normal and abnormal
conditions, referred to as anomaly prompts. However, existing approaches depend
on static anomaly prompts that are prone to cross-semantic ambiguity, and
prioritize global image-level representations over crucial local pixel-level
image-to-text alignment that is necessary for accurate anomaly localization. In
this paper, we present ALFA, a training-free approach designed to address these
challenges via a unified model. We propose a run-time prompt adaptation
strategy, which first generates informative anomaly prompts to leverage the
capabilities of a large language model (LLM). This strategy is enhanced by a
contextual scoring mechanism for per-image anomaly prompt adaptation and
cross-semantic ambiguity mitigation. We further introduce a novel fine-grained
aligner to fuse local pixel-level semantics for precise anomaly localization,
by projecting the image-text alignment from global to local semantic spaces.
Extensive evaluations on MVTec and VisA datasets confirm ALFA's effectiveness
in harnessing the language potential for zero-shot VAD, achieving significant
PRO improvements of 12.1% on MVTec and 8.9% on VisA compared to
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 10:42:22 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Sep 2024 11:58:23 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 05:18:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhu",
"Jiaqi",
""
],
[
"Cai",
"Shaofeng",
""
],
[
"Deng",
"Fang",
""
],
[
"Ooi",
"Beng Chin",
""
],
[
"Wu",
"Junran",
""
]
] | TITLE: Do LLMs Understand Visual Anomalies? Uncovering LLM's Capabilities in
Zero-shot Anomaly Detection
ABSTRACT: Large vision-language models (LVLMs) are markedly proficient in deriving
visual representations guided by natural language. Recent explorations have
utilized LVLMs to tackle zero-shot visual anomaly detection (VAD) challenges by
pairing images with textual descriptions indicative of normal and abnormal
conditions, referred to as anomaly prompts. However, existing approaches depend
on static anomaly prompts that are prone to cross-semantic ambiguity, and
prioritize global image-level representations over crucial local pixel-level
image-to-text alignment that is necessary for accurate anomaly localization. In
this paper, we present ALFA, a training-free approach designed to address these
challenges via a unified model. We propose a run-time prompt adaptation
strategy, which first generates informative anomaly prompts to leverage the
capabilities of a large language model (LLM). This strategy is enhanced by a
contextual scoring mechanism for per-image anomaly prompt adaptation and
cross-semantic ambiguity mitigation. We further introduce a novel fine-grained
aligner to fuse local pixel-level semantics for precise anomaly localization,
by projecting the image-text alignment from global to local semantic spaces.
Extensive evaluations on MVTec and VisA datasets confirm ALFA's effectiveness
in harnessing the language potential for zero-shot VAD, achieving significant
PRO improvements of 12.1% on MVTec and 8.9% on VisA compared to
state-of-the-art approaches.
| no_new_dataset | 0.949716 |
2404.13659 | Tong Wang | Tong Wang, Guanzhou Chen, Xiaodong Zhang, Chenxi Liu, Xiaoliang Tan,
Jiaqi Wang, Chanjuan He, Wenlin Zhou | LMFNet: An Efficient Multimodal Fusion Approach for Semantic
Segmentation in High-Resolution Remote Sensing | null | null | 10.1016/j.patcog.2025.111579 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the rapid evolution of semantic segmentation for land cover
classification in high-resolution remote sensing imagery, integrating multiple
data modalities such as Digital Surface Model (DSM), RGB, and Near-infrared
(NIR) remains a challenge. Current methods often process only two types of
data, missing out on the rich information that additional modalities can
provide. Addressing this gap, we propose a novel \textbf{L}ightweight
\textbf{M}ultimodal data \textbf{F}usion \textbf{Net}work (LMFNet) to
accomplish the tasks of fusion and semantic segmentation of multimodal remote
sensing images. LMFNet uniquely accommodates various data types simultaneously,
including RGB, NirRG, and DSM, through a weight-sharing, multi-branch vision
transformer that minimizes parameter count while ensuring robust feature
extraction. Our proposed multimodal fusion module integrates a
\textit{Multimodal Feature Fusion Reconstruction Layer} and \textit{Multimodal
Feature Self-Attention Fusion Layer}, which can reconstruct and fuse multimodal
features. Extensive testing on public datasets such as US3D, ISPRS Potsdam, and
ISPRS Vaihingen demonstrates the effectiveness of LMFNet. Specifically, it
achieves a mean Intersection over Union ($mIoU$) of 85.09\% on the US3D
dataset, marking a significant improvement over existing methods. Compared to
unimodal approaches, LMFNet shows a 10\% enhancement in $mIoU$ with only a 0.5M
increase in parameter count. Furthermore, against bimodal methods, our approach
with trilateral inputs enhances $mIoU$ by 0.46 percentage points.
| [
{
"version": "v1",
"created": "Sun, 21 Apr 2024 13:29:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Tong",
""
],
[
"Chen",
"Guanzhou",
""
],
[
"Zhang",
"Xiaodong",
""
],
[
"Liu",
"Chenxi",
""
],
[
"Tan",
"Xiaoliang",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"He",
"Chanjuan",
""
],
[
"Zhou",
"Wenlin",
""
]
] | TITLE: LMFNet: An Efficient Multimodal Fusion Approach for Semantic
Segmentation in High-Resolution Remote Sensing
ABSTRACT: Despite the rapid evolution of semantic segmentation for land cover
classification in high-resolution remote sensing imagery, integrating multiple
data modalities such as Digital Surface Model (DSM), RGB, and Near-infrared
(NIR) remains a challenge. Current methods often process only two types of
data, missing out on the rich information that additional modalities can
provide. Addressing this gap, we propose a novel \textbf{L}ightweight
\textbf{M}ultimodal data \textbf{F}usion \textbf{Net}work (LMFNet) to
accomplish the tasks of fusion and semantic segmentation of multimodal remote
sensing images. LMFNet uniquely accommodates various data types simultaneously,
including RGB, NirRG, and DSM, through a weight-sharing, multi-branch vision
transformer that minimizes parameter count while ensuring robust feature
extraction. Our proposed multimodal fusion module integrates a
\textit{Multimodal Feature Fusion Reconstruction Layer} and \textit{Multimodal
Feature Self-Attention Fusion Layer}, which can reconstruct and fuse multimodal
features. Extensive testing on public datasets such as US3D, ISPRS Potsdam, and
ISPRS Vaihingen demonstrates the effectiveness of LMFNet. Specifically, it
achieves a mean Intersection over Union ($mIoU$) of 85.09\% on the US3D
dataset, marking a significant improvement over existing methods. Compared to
unimodal approaches, LMFNet shows a 10\% enhancement in $mIoU$ with only a 0.5M
increase in parameter count. Furthermore, against bimodal methods, our approach
with trilateral inputs enhances $mIoU$ by 0.46 percentage points.
| no_new_dataset | 0.951549 |
2404.15451 | Hongyi Cai | Hongyi Cai, Mohammad Mahdinur Rahman, Wenzhen Dong and Jingyu Wu | CFPFormer: Feature-pyramid like Transformer Decoder for Segmentation and
Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Feature pyramids have been widely adopted in convolutional neural networks
and transformers for tasks in medical image segmentation. However, existing
models generally focus on the Encoder-side Transformer for feature extraction.
We further explore the potential in improving the feature decoder with a
well-designed architecture. We propose Cross Feature Pyramid Transformer
decoder (CFPFormer), a novel decoder block that integrates feature pyramids and
transformers. Even though transformer-like architectures impress with
outstanding performance in segmentation, concerns about reducing redundancy
and training costs still exist. Specifically, by leveraging patch embedding and
cross-layer feature concatenation mechanisms, CFPFormer enhances feature
extraction capabilities while the complexity issue is mitigated by our Gaussian
Attention. Benefiting from the Transformer structure and U-shaped connections, our
work is capable of capturing long-range dependencies and effectively up-sampling
feature maps. Experimental results are provided to evaluate CFPFormer on
medical image segmentation datasets, demonstrating the efficacy and
effectiveness. With a ResNet50 backbone, our method achieves 92.02\% Dice
Score, highlighting the efficacy of our methods. Notably, our VGG-based model
outperformed baselines with more complex ViT and Swin Transformer backbones.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 18:46:07 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 23:18:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cai",
"Hongyi",
""
],
[
"Rahman",
"Mohammad Mahdinur",
""
],
[
"Dong",
"Wenzhen",
""
],
[
"Wu",
"Jingyu",
""
]
] | TITLE: CFPFormer: Feature-pyramid like Transformer Decoder for Segmentation and
Detection
ABSTRACT: Feature pyramids have been widely adopted in convolutional neural networks
and transformers for tasks in medical image segmentation. However, existing
models generally focus on the Encoder-side Transformer for feature extraction.
We further explore the potential in improving the feature decoder with a
well-designed architecture. We propose Cross Feature Pyramid Transformer
decoder (CFPFormer), a novel decoder block that integrates feature pyramids and
transformers. Even though transformer-like architectures impress with
outstanding performance in segmentation, concerns about reducing redundancy
and training costs remain. Specifically, by leveraging patch embedding and
cross-layer feature concatenation mechanisms, CFPFormer enhances feature
extraction capabilities, while the complexity issue is mitigated by our Gaussian
Attention. Benefiting from the Transformer structure and U-shaped connections, our
work is capable of capturing long-range dependencies and effectively up-sampling
feature maps. Experimental results are provided to evaluate CFPFormer on
medical image segmentation datasets, demonstrating its efficacy and
effectiveness. With a ResNet50 backbone, our method achieves 92.02\% Dice
Score, highlighting the efficacy of our methods. Notably, our VGG-based model
outperformed baselines with more complex ViT and Swin Transformer backbones.
| no_new_dataset | 0.942876 |
2405.00543 | Kiet Nguyen | Quy Hoang Nguyen, Minh-Van Truong Nguyen, Kiet Van Nguyen | New Benchmark Dataset and Fine-Grained Cross-Modal Fusion Framework for
Vietnamese Multimodal Aspect-Category Sentiment Analysis | null | Multimedia Systems 31, 4 (2025) | 10.1007/s00530-024-01558-8 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The emergence of multimodal data on social media platforms presents new
opportunities to better understand user sentiments toward a given aspect.
However, existing multimodal datasets for Aspect-Category Sentiment Analysis
(ACSA) often focus on textual annotations, neglecting fine-grained information
in images. Consequently, these datasets fail to fully exploit the richness
inherent in multimodal. To address this, we introduce a new Vietnamese
multimodal dataset, named ViMACSA, which consists of 4,876 text-image pairs
with 14,618 fine-grained annotations for both text and image in the hotel
domain. Additionally, we propose a Fine-Grained Cross-Modal Fusion Framework
(FCMF) that effectively learns both intra- and inter-modality interactions and
then fuses this information to produce a unified multimodal representation.
Experimental results show that our framework outperforms SOTA models on the
ViMACSA dataset, achieving the highest F1 score of 79.73%. We also explore
characteristics and challenges in Vietnamese multimodal sentiment analysis,
including misspellings, abbreviations, and the complexities of the Vietnamese
language. This work contributes both a benchmark dataset and a new framework
that leverages fine-grained multimodal information to improve multimodal
aspect-category sentiment analysis. Our dataset is available for research
purposes:
https://github.com/hoangquy18/Multimodal-Aspect-Category-Sentiment-Analysis.
| [
{
"version": "v1",
"created": "Wed, 1 May 2024 14:29:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nguyen",
"Quy Hoang",
""
],
[
"Nguyen",
"Minh-Van Truong",
""
],
[
"Van Nguyen",
"Kiet",
""
]
] | TITLE: New Benchmark Dataset and Fine-Grained Cross-Modal Fusion Framework for
Vietnamese Multimodal Aspect-Category Sentiment Analysis
ABSTRACT: The emergence of multimodal data on social media platforms presents new
opportunities to better understand user sentiments toward a given aspect.
However, existing multimodal datasets for Aspect-Category Sentiment Analysis
(ACSA) often focus on textual annotations, neglecting fine-grained information
in images. Consequently, these datasets fail to fully exploit the richness
inherent in multimodal data. To address this, we introduce a new Vietnamese
multimodal dataset, named ViMACSA, which consists of 4,876 text-image pairs
with 14,618 fine-grained annotations for both text and image in the hotel
domain. Additionally, we propose a Fine-Grained Cross-Modal Fusion Framework
(FCMF) that effectively learns both intra- and inter-modality interactions and
then fuses this information to produce a unified multimodal representation.
Experimental results show that our framework outperforms SOTA models on the
ViMACSA dataset, achieving the highest F1 score of 79.73%. We also explore
characteristics and challenges in Vietnamese multimodal sentiment analysis,
including misspellings, abbreviations, and the complexities of the Vietnamese
language. This work contributes both a benchmark dataset and a new framework
that leverages fine-grained multimodal information to improve multimodal
aspect-category sentiment analysis. Our dataset is available for research
purposes:
https://github.com/hoangquy18/Multimodal-Aspect-Category-Sentiment-Analysis.
| new_dataset | 0.958148 |
2405.04804 | Yin Li | Yin Li, Rajalakshmi Nandakumar | WixUp: A General Data Augmentation Framework for Wireless Perception in
Tracking of Humans | SenSys pre-published version | null | 10.1145/3715014.3722084 | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in wireless perception technologies, including mmWave,
WiFi, and acoustics, have expanded their application in human motion tracking
and health monitoring. They are promising alternatives to traditional
camera-based perception systems, thanks to their efficacy under diverse
conditions or occlusions, and enhanced privacy. However, the integration of
deep learning within this field introduces new challenges such as the need for
extensive training data and poor model generalization, especially with sparse
and noisy wireless point clouds. As a remedy, data augmentation is one solution
well-explored in other deep learning fields, but such techniques are not directly
applicable to the unique characteristics of wireless signals. This motivates us
to propose a custom data augmentation framework, WixUp, tailored for wireless
perception. Moreover, we aim to make it a general framework supporting various
datasets, model architectures, sensing modalities, and tasks; while previous
wireless data augmentation or generative simulations do not exhibit this
generalizability and are limited to certain use cases. More specifically, WixUp
can reverse-transform lossy coordinates into dense range profiles using
Gaussian mixture and probability tricks, making it capable of in-depth data
diversity enhancement; and its mixing-based method enables unsupervised domain
adaptation via self-training, allowing training of the model with no labels
from new users or environments in practice. In summary, our extensive
evaluation experiments show that WixUp provides consistent performance
improvement across various scenarios and outperforms the baselines.
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 04:26:32 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 20:25:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Yin",
""
],
[
"Nandakumar",
"Rajalakshmi",
""
]
] | TITLE: WixUp: A General Data Augmentation Framework for Wireless Perception in
Tracking of Humans
ABSTRACT: Recent advancements in wireless perception technologies, including mmWave,
WiFi, and acoustics, have expanded their application in human motion tracking
and health monitoring. They are promising alternatives to traditional
camera-based perception systems, thanks to their efficacy under diverse
conditions or occlusions, and enhanced privacy. However, the integration of
deep learning within this field introduces new challenges such as the need for
extensive training data and poor model generalization, especially with sparse
and noisy wireless point clouds. As a remedy, data augmentation is one solution
well-explored in other deep learning fields, but such techniques are not directly
applicable to the unique characteristics of wireless signals. This motivates us
to propose a custom data augmentation framework, WixUp, tailored for wireless
perception. Moreover, we aim to make it a general framework supporting various
datasets, model architectures, sensing modalities, and tasks; while previous
wireless data augmentation or generative simulations do not exhibit this
generalizability and are limited to certain use cases. More specifically, WixUp
can reverse-transform lossy coordinates into dense range profiles using
Gaussian mixture and probability tricks, making it capable of in-depth data
diversity enhancement; and its mixing-based method enables unsupervised domain
adaptation via self-training, allowing training of the model with no labels
from new users or environments in practice. In summary, our extensive
evaluation experiments show that WixUp provides consistent performance
improvement across various scenarios and outperforms the baselines.
| no_new_dataset | 0.939304 |
2405.07765 | Mubashara Akhtar | Mubashara Akhtar and Chenxi Pang and Andreea Marzoca and Yasemin Altun
and Julian Martin Eisenschlos | TANQ: An open domain dataset of table answered questions | 12 pages, accepted at TACL | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models, potentially augmented with tool usage such as retrieval are
becoming the go-to means of answering questions. Understanding and answering
questions in real-world settings often requires retrieving information from
different sources, processing and aggregating data to extract insights, and
presenting complex findings in the form of structured artifacts such as novel
tables, charts, or infographics. In this paper, we introduce TANQ, the first
open domain question answering dataset where the answers require building
tables from information across multiple sources. We release the full source
attribution for every cell in the resulting table and benchmark
state-of-the-art language models in open, oracle, and closed book setups. Our
best-performing baseline, Gemini Flash reaches an overall F1 score of 60.7,
lagging behind human performance by 12.3 points. We analyse baselines'
performance across different dataset attributes such as different skills
required for this task, including multi-hop reasoning, math operations, and
unit conversions. We further discuss common failures in model-generated
answers, suggesting that TANQ is a complex task with many challenges ahead.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 14:07:20 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jan 2025 07:29:20 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 10:44:55 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Akhtar",
"Mubashara",
""
],
[
"Pang",
"Chenxi",
""
],
[
"Marzoca",
"Andreea",
""
],
[
"Altun",
"Yasemin",
""
],
[
"Eisenschlos",
"Julian Martin",
""
]
] | TITLE: TANQ: An open domain dataset of table answered questions
ABSTRACT: Language models, potentially augmented with tool usage such as retrieval are
becoming the go-to means of answering questions. Understanding and answering
questions in real-world settings often requires retrieving information from
different sources, processing and aggregating data to extract insights, and
presenting complex findings in the form of structured artifacts such as novel
tables, charts, or infographics. In this paper, we introduce TANQ, the first
open domain question answering dataset where the answers require building
tables from information across multiple sources. We release the full source
attribution for every cell in the resulting table and benchmark
state-of-the-art language models in open, oracle, and closed book setups. Our
best-performing baseline, Gemini Flash reaches an overall F1 score of 60.7,
lagging behind human performance by 12.3 points. We analyse baselines'
performance across different dataset attributes such as different skills
required for this task, including multi-hop reasoning, math operations, and
unit conversions. We further discuss common failures in model-generated
answers, suggesting that TANQ is a complex task with many challenges ahead.
| new_dataset | 0.964321 |
2405.07920 | Ferdinand Schlatt | Ferdinand Schlatt, Maik Fr\"obe, Harrisen Scells, Shengyao Zhuang,
Bevan Koopman, Guido Zuccon, Benno Stein, Martin Potthast, Matthias Hagen | Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and
LLMs for Passage Re-Ranking | Accepted at ECIR'25 | null | 10.1007/978-3-031-88714-7_31 | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Cross-encoders distilled from large language models (LLMs) are often more
effective re-rankers than cross-encoders fine-tuned on manually labeled data.
However, distilled models do not match the effectiveness of their teacher LLMs.
We hypothesize that this effectiveness gap is due to the fact that previous
work has not applied the best-suited methods for fine-tuning cross-encoders on
manually labeled data (e.g., hard-negative sampling, deep sampling, and
listwise loss functions). To close this gap, we create a new dataset,
Rank-DistiLLM. Cross-encoders trained on Rank-DistiLLM achieve the
effectiveness of LLMs while being up to 173 times faster and 24 times more
memory efficient. Our code and data are available at
https://github.com/webis-de/ECIR-25.
| [
{
"version": "v1",
"created": "Mon, 13 May 2024 16:51:53 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Jun 2024 12:43:02 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 09:53:21 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 10:01:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Schlatt",
"Ferdinand",
""
],
[
"Fröbe",
"Maik",
""
],
[
"Scells",
"Harrisen",
""
],
[
"Zhuang",
"Shengyao",
""
],
[
"Koopman",
"Bevan",
""
],
[
"Zuccon",
"Guido",
""
],
[
"Stein",
"Benno",
""
],
[
"Potthast",
"Martin",
""
],
[
"Hagen",
"Matthias",
""
]
] | TITLE: Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and
LLMs for Passage Re-Ranking
ABSTRACT: Cross-encoders distilled from large language models (LLMs) are often more
effective re-rankers than cross-encoders fine-tuned on manually labeled data.
However, distilled models do not match the effectiveness of their teacher LLMs.
We hypothesize that this effectiveness gap is due to the fact that previous
work has not applied the best-suited methods for fine-tuning cross-encoders on
manually labeled data (e.g., hard-negative sampling, deep sampling, and
listwise loss functions). To close this gap, we create a new dataset,
Rank-DistiLLM. Cross-encoders trained on Rank-DistiLLM achieve the
effectiveness of LLMs while being up to 173 times faster and 24 times more
memory efficient. Our code and data are available at
https://github.com/webis-de/ECIR-25.
| new_dataset | 0.948775 |
2405.08487 | Mian Zou | Mian Zou, Baosheng Yu, Yibing Zhan, Siwei Lyu, and Kede Ma | Semantic Contextualization of Face Forgery: A New Definition, Dataset,
and Detection Method | null | null | null | null | cs.CV cs.CR | http://creativecommons.org/licenses/by/4.0/ | In recent years, deep learning has greatly streamlined the process of
manipulating photographic face images. Aware of the potential dangers,
researchers have developed various tools to spot these counterfeits. Yet, none
asks the fundamental question: What digital manipulations make a real
photographic face image fake, while others do not? In this paper, we put face
forgery in a semantic context and define that computational methods that alter
semantic face attributes to exceed human discrimination thresholds are sources
of face forgery. Following our definition, we construct a large face forgery
image dataset, where each image is associated with a set of labels organized in
a hierarchical graph. Our dataset enables two new testing protocols to probe
the generalizability of face forgery detectors. Moreover, we propose a
semantics-oriented face forgery detection method that captures label relations
and prioritizes the primary task (i.e., real or fake face detection). We show
that the proposed dataset successfully exposes the weaknesses of current
detectors as the test set and consistently improves their generalizability as
the training set. Additionally, we demonstrate the superiority of our
semantics-oriented method over traditional binary and multi-class
classification-based detectors.
| [
{
"version": "v1",
"created": "Tue, 14 May 2024 10:24:19 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 07:00:42 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 09:19:17 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zou",
"Mian",
""
],
[
"Yu",
"Baosheng",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Lyu",
"Siwei",
""
],
[
"Ma",
"Kede",
""
]
] | TITLE: Semantic Contextualization of Face Forgery: A New Definition, Dataset,
and Detection Method
ABSTRACT: In recent years, deep learning has greatly streamlined the process of
manipulating photographic face images. Aware of the potential dangers,
researchers have developed various tools to spot these counterfeits. Yet, none
asks the fundamental question: What digital manipulations make a real
photographic face image fake, while others do not? In this paper, we put face
forgery in a semantic context and define that computational methods that alter
semantic face attributes to exceed human discrimination thresholds are sources
of face forgery. Following our definition, we construct a large face forgery
image dataset, where each image is associated with a set of labels organized in
a hierarchical graph. Our dataset enables two new testing protocols to probe
the generalizability of face forgery detectors. Moreover, we propose a
semantics-oriented face forgery detection method that captures label relations
and prioritizes the primary task (i.e., real or fake face detection). We show
that the proposed dataset successfully exposes the weaknesses of current
detectors as the test set and consistently improves their generalizability as
the training set. Additionally, we demonstrate the superiority of our
semantics-oriented method over traditional binary and multi-class
classification-based detectors.
| new_dataset | 0.959649 |
2405.17238 | Ziyang Li | Ziyang Li, Saikat Dutta, Mayur Naik | IRIS: LLM-Assisted Static Analysis for Detecting Security
Vulnerabilities | null | null | null | null | cs.CR cs.PL cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Software is prone to security vulnerabilities. Program analysis tools to
detect them have limited effectiveness in practice due to their reliance on
human labeled specifications. Large language models (or LLMs) have shown
impressive code generation capabilities but they cannot do complex reasoning
over code to detect such vulnerabilities especially since this task requires
whole-repository analysis. We propose IRIS, a neuro-symbolic approach that
systematically combines LLMs with static analysis to perform whole-repository
reasoning for security vulnerability detection. Specifically, IRIS leverages
LLMs to infer taint specifications and perform contextual analysis, alleviating
the need for human specifications and inspection. For evaluation, we curate a new
dataset, CWE-Bench-Java, comprising 120 manually validated security
vulnerabilities in real-world Java projects. A state-of-the-art static analysis
tool CodeQL detects only 27 of these vulnerabilities whereas IRIS with GPT-4
detects 55 (+28) and improves upon CodeQL's average false discovery rate by 5
percentage points. Furthermore, IRIS identifies 4 previously unknown vulnerabilities which
cannot be found by existing tools. IRIS is available publicly at
https://github.com/iris-sast/iris.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 14:53:35 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Nov 2024 21:05:43 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 23:46:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Ziyang",
""
],
[
"Dutta",
"Saikat",
""
],
[
"Naik",
"Mayur",
""
]
] | TITLE: IRIS: LLM-Assisted Static Analysis for Detecting Security
Vulnerabilities
ABSTRACT: Software is prone to security vulnerabilities. Program analysis tools to
detect them have limited effectiveness in practice due to their reliance on
human labeled specifications. Large language models (or LLMs) have shown
impressive code generation capabilities but they cannot do complex reasoning
over code to detect such vulnerabilities especially since this task requires
whole-repository analysis. We propose IRIS, a neuro-symbolic approach that
systematically combines LLMs with static analysis to perform whole-repository
reasoning for security vulnerability detection. Specifically, IRIS leverages
LLMs to infer taint specifications and perform contextual analysis, alleviating
the need for human specifications and inspection. For evaluation, we curate a new
dataset, CWE-Bench-Java, comprising 120 manually validated security
vulnerabilities in real-world Java projects. A state-of-the-art static analysis
tool CodeQL detects only 27 of these vulnerabilities whereas IRIS with GPT-4
detects 55 (+28) and improves upon CodeQL's average false discovery rate by 5
percentage points. Furthermore, IRIS identifies 4 previously unknown vulnerabilities which
cannot be found by existing tools. IRIS is available publicly at
https://github.com/iris-sast/iris.
| new_dataset | 0.951729 |
2405.18902 | Andrea Pugnana | Filippo Palomba and Andrea Pugnana and Jos\'e Manuel Alvarez and
Salvatore Ruggieri | A Causal Framework for Evaluating Deferring Systems | Accepted at AISTATS 2025 | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Deferring systems extend supervised Machine Learning (ML) models with the
possibility to defer predictions to human experts. However, evaluating the
impact of a deferring strategy on system accuracy is still an overlooked area.
This paper fills this gap by evaluating deferring systems through a causal
lens. We link the potential outcomes framework for causal inference with
deferring systems, which allows to identify the causal impact of the deferring
strategy on predictive accuracy. We distinguish two scenarios. In the first
one, we have access to both the human and ML model predictions for the deferred
instances. Here, we can identify the individual causal effects for deferred
instances and their aggregates. In the second one, only human predictions
are available for the deferred instances. Here, we can resort to regression
discontinuity designs to estimate a local causal effect. We evaluate our
approach on synthetic and real datasets for seven deferring systems from the
literature.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 09:03:44 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:54:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Palomba",
"Filippo",
""
],
[
"Pugnana",
"Andrea",
""
],
[
"Alvarez",
"José Manuel",
""
],
[
"Ruggieri",
"Salvatore",
""
]
] | TITLE: A Causal Framework for Evaluating Deferring Systems
ABSTRACT: Deferring systems extend supervised Machine Learning (ML) models with the
possibility to defer predictions to human experts. However, evaluating the
impact of a deferring strategy on system accuracy is still an overlooked area.
This paper fills this gap by evaluating deferring systems through a causal
lens. We link the potential outcomes framework for causal inference with
deferring systems, which allows us to identify the causal impact of the deferring
strategy on predictive accuracy. We distinguish two scenarios. In the first
one, we have access to both the human and ML model predictions for the deferred
instances. Here, we can identify the individual causal effects for deferred
instances and their aggregates. In the second one, only human predictions
are available for the deferred instances. Here, we can resort to regression
discontinuity designs to estimate a local causal effect. We evaluate our
approach on synthetic and real datasets for seven deferring systems from the
literature.
| no_new_dataset | 0.948632 |
2406.02541 | Inkyu Shin | Inkyu Shin, Qihang Yu, Xiaohui Shen, In So Kweon, Kuk-Jin Yoon,
Liang-Chieh Chen | Enhancing Temporal Consistency in Video Editing by Reconstructing Videos
with 3D Gaussian Splatting | Accepted to TMLR 2025. Project page at
https://video-3dgs-project.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in zero-shot video diffusion models have shown promise
for text-driven video editing, but challenges remain in achieving high temporal
consistency. To address this, we introduce Video-3DGS, a 3D Gaussian Splatting
(3DGS)-based video refiner designed to enhance temporal consistency in
zero-shot video editors. Our approach utilizes a two-stage 3D Gaussian
optimizing process tailored for editing dynamic monocular videos. In the first
stage, Video-3DGS employs an improved version of COLMAP, referred to as
MC-COLMAP, which processes original videos using a Masked and Clipped approach.
For each video clip, MC-COLMAP generates the point clouds for dynamic
foreground objects and complex backgrounds. These point clouds are utilized to
initialize two sets of 3D Gaussians (Frg-3DGS and Bkg-3DGS) aiming to represent
foreground and background views. Both foreground and background views are then
merged with a 2D learnable parameter map to reconstruct full views. In the
second stage, we leverage the reconstruction ability developed in the first
stage to impose the temporal constraints on the video diffusion model. To
demonstrate the efficacy of Video-3DGS on both stages, we conduct extensive
experiments across two related tasks: Video Reconstruction and Video Editing.
Video-3DGS trained with 3k iterations significantly improves video
reconstruction quality (+3 PSNR, +7 PSNR increase) and training efficiency
(x1.9, x4.5 times faster) over NeRF-based and 3DGS-based state-of-the-art methods
on DAVIS dataset, respectively. Moreover, it enhances video editing by ensuring
temporal consistency across 58 dynamic monocular videos.
| [
{
"version": "v1",
"created": "Tue, 4 Jun 2024 17:57:37 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jun 2024 05:00:39 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Jun 2024 01:40:56 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Apr 2025 18:48:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shin",
"Inkyu",
""
],
[
"Yu",
"Qihang",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Kweon",
"In So",
""
],
[
"Yoon",
"Kuk-Jin",
""
],
[
"Chen",
"Liang-Chieh",
""
]
] | TITLE: Enhancing Temporal Consistency in Video Editing by Reconstructing Videos
with 3D Gaussian Splatting
ABSTRACT: Recent advancements in zero-shot video diffusion models have shown promise
for text-driven video editing, but challenges remain in achieving high temporal
consistency. To address this, we introduce Video-3DGS, a 3D Gaussian Splatting
(3DGS)-based video refiner designed to enhance temporal consistency in
zero-shot video editors. Our approach utilizes a two-stage 3D Gaussian
optimizing process tailored for editing dynamic monocular videos. In the first
stage, Video-3DGS employs an improved version of COLMAP, referred to as
MC-COLMAP, which processes original videos using a Masked and Clipped approach.
For each video clip, MC-COLMAP generates the point clouds for dynamic
foreground objects and complex backgrounds. These point clouds are utilized to
initialize two sets of 3D Gaussians (Frg-3DGS and Bkg-3DGS) aiming to represent
foreground and background views. Both foreground and background views are then
merged with a 2D learnable parameter map to reconstruct full views. In the
second stage, we leverage the reconstruction ability developed in the first
stage to impose the temporal constraints on the video diffusion model. To
demonstrate the efficacy of Video-3DGS on both stages, we conduct extensive
experiments across two related tasks: Video Reconstruction and Video Editing.
Video-3DGS trained with 3k iterations significantly improves video
reconstruction quality (+3 PSNR, +7 PSNR increase) and training efficiency
(x1.9, x4.5 times faster) over NeRF-based and 3DGS-based state-of-the-art methods
on DAVIS dataset, respectively. Moreover, it enhances video editing by ensuring
temporal consistency across 58 dynamic monocular videos.
| no_new_dataset | 0.954774 |
2406.04928 | Torben Peters | Ghjulia Sialelli, Torben Peters, Jan D. Wegner, Konrad Schindler | AGBD: A Global-scale Biomass Dataset | null | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Accurate estimates of Above Ground Biomass (AGB) are essential in addressing
two of humanity's biggest challenges: climate change and biodiversity loss.
Existing datasets for AGB estimation from satellite imagery are limited. Either
they focus on specific, local regions at high resolution, or they offer global
coverage at low resolution. There is a need for a machine learning-ready,
globally representative, high-resolution benchmark dataset. Our findings
indicate significant variability in biomass estimates across different
vegetation types, emphasizing the necessity for a dataset that accurately
captures global diversity. To address these gaps, we introduce a comprehensive
new dataset that is globally distributed, covers a range of vegetation types,
and spans several years. This dataset combines AGB reference data from the GEDI
mission with data from Sentinel-2 and PALSAR-2 imagery. Additionally, it
includes pre-processed high-level features such as a dense canopy height map,
an elevation map, and a land-cover classification map. We also produce a dense,
high-resolution (10m) map of AGB predictions for the entire area covered by the
dataset. Rigorously tested, our dataset is accompanied by several benchmark
models and is publicly available. It can be easily accessed using a single line
of code, offering a solid basis for efforts towards global AGB estimation. The
GitHub repository github.com/ghjuliasialelli/AGBD serves as a one-stop shop for
all code and data.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 13:34:17 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Dec 2024 11:08:35 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 11:19:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sialelli",
"Ghjulia",
""
],
[
"Peters",
"Torben",
""
],
[
"Wegner",
"Jan D.",
""
],
[
"Schindler",
"Konrad",
""
]
] | TITLE: AGBD: A Global-scale Biomass Dataset
ABSTRACT: Accurate estimates of Above Ground Biomass (AGB) are essential in addressing
two of humanity's biggest challenges: climate change and biodiversity loss.
Existing datasets for AGB estimation from satellite imagery are limited. Either
they focus on specific, local regions at high resolution, or they offer global
coverage at low resolution. There is a need for a machine learning-ready,
globally representative, high-resolution benchmark dataset. Our findings
indicate significant variability in biomass estimates across different
vegetation types, emphasizing the necessity for a dataset that accurately
captures global diversity. To address these gaps, we introduce a comprehensive
new dataset that is globally distributed, covers a range of vegetation types,
and spans several years. This dataset combines AGB reference data from the GEDI
mission with data from Sentinel-2 and PALSAR-2 imagery. Additionally, it
includes pre-processed high-level features such as a dense canopy height map,
an elevation map, and a land-cover classification map. We also produce a dense,
high-resolution (10m) map of AGB predictions for the entire area covered by the
dataset. Rigorously tested, our dataset is accompanied by several benchmark
models and is publicly available. It can be easily accessed using a single line
of code, offering a solid basis for efforts towards global AGB estimation. The
GitHub repository github.com/ghjuliasialelli/AGBD serves as a one-stop shop for
all code and data.
| new_dataset | 0.966315 |
2406.09067 | Tarun Khajuria | Tarun Khajuria, Braian Olmiro Dias, Marharyta Domnich, Jaan Aru | Interpreting the structure of multi-object representations in vision
encoders | null | null | null | null | cs.CV cs.CL q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | In this work, we interpret the representations of multi-object scenes in
vision encoders through the lens of structured representations. Structured
representations allow modeling of individual objects distinctly and their
flexible use based on the task context for both scene-level and object-specific
tasks. These capabilities play a central role in human reasoning and
generalization, allowing us to abstract away irrelevant details and focus on
relevant information in a compact and usable form. We define structured
representations as those that adhere to two specific properties: binding
specific object information into discrete representation units and segregating
object representations into separate sets of tokens to minimize cross-object
entanglement. Based on these properties, we evaluated and compared image
encoders pre-trained on classification (ViT), large vision-language models
(CLIP, BLIP, FLAVA), and self-supervised methods (DINO, DINOv2). We examine the
token representations by creating object-decoding tasks that measure the
ability of specific tokens to capture individual objects in multi-object scenes
from the COCO dataset. This analysis provides insights into how object-wise
representations are distributed across tokens and layers within these vision
encoders. Our findings highlight significant differences in the representation
of objects depending on their relevance to the pre-training objective, with
this effect particularly pronounced in the CLS token (often used for downstream
tasks). Meanwhile, networks and layers that exhibit more structured
representations retain better information about individual objects. To guide
practical applications, we propose formal measures to quantify the two
properties of structured representations, aiding in selecting and adapting
vision encoders for downstream tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 12:54:20 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jun 2024 12:27:36 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 13:44:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Khajuria",
"Tarun",
""
],
[
"Dias",
"Braian Olmiro",
""
],
[
"Domnich",
"Marharyta",
""
],
[
"Aru",
"Jaan",
""
]
] | TITLE: Interpreting the structure of multi-object representations in vision
encoders
ABSTRACT: In this work, we interpret the representations of multi-object scenes in
vision encoders through the lens of structured representations. Structured
representations allow modeling of individual objects distinctly and their
flexible use based on the task context for both scene-level and object-specific
tasks. These capabilities play a central role in human reasoning and
generalization, allowing us to abstract away irrelevant details and focus on
relevant information in a compact and usable form. We define structured
representations as those that adhere to two specific properties: binding
specific object information into discrete representation units and segregating
object representations into separate sets of tokens to minimize cross-object
entanglement. Based on these properties, we evaluated and compared image
encoders pre-trained on classification (ViT), large vision-language models
(CLIP, BLIP, FLAVA), and self-supervised methods (DINO, DINOv2). We examine the
token representations by creating object-decoding tasks that measure the
ability of specific tokens to capture individual objects in multi-object scenes
from the COCO dataset. This analysis provides insights into how object-wise
representations are distributed across tokens and layers within these vision
encoders. Our findings highlight significant differences in the representation
of objects depending on their relevance to the pre-training objective, with
this effect particularly pronounced in the CLS token (often used for downstream
tasks). Meanwhile, networks and layers that exhibit more structured
representations retain better information about individual objects. To guide
practical applications, we propose formal measures to quantify the two
properties of structured representations, aiding in selecting and adapting
vision encoders for downstream tasks.
| no_new_dataset | 0.952397 |
2406.09564 | Ziyan Wang | Ziyan Wang, Xiaoming Huo, Hao Wang | Towards Domain Adaptive Neural Contextual Bandits | Accepted at ICLR 2025 | null | null | null | cs.LG cs.AI cs.CE cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Contextual bandit algorithms are essential for solving real-world decision
making problems. In practice, collecting a contextual bandit's feedback from
different domains may involve different costs. For example, measuring drug
reaction from mice (as a source domain) and humans (as a target domain).
Unfortunately, adapting a contextual bandit algorithm from a source domain to a
target domain with distribution shift still remains a major challenge and
largely unexplored. In this paper, we introduce the first general domain
adaptation method for contextual bandits. Our approach learns a bandit model
for the target domain by collecting feedback from the source domain. Our
theoretical analysis shows that our algorithm maintains a sub-linear regret
bound even when adapting across domains. Empirical results show that our approach
outperforms the state-of-the-art contextual bandit algorithms on real-world
datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 20:12:46 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2024 02:14:24 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 18:23:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Ziyan",
""
],
[
"Huo",
"Xiaoming",
""
],
[
"Wang",
"Hao",
""
]
] | TITLE: Towards Domain Adaptive Neural Contextual Bandits
ABSTRACT: Contextual bandit algorithms are essential for solving real-world decision
making problems. In practice, collecting a contextual bandit's feedback from
different domains may involve different costs. For example, measuring drug
reaction from mice (as a source domain) and humans (as a target domain).
Unfortunately, adapting a contextual bandit algorithm from a source domain to a
target domain with distribution shift remains a major challenge and is
largely unexplored. In this paper, we introduce the first general domain
adaptation method for contextual bandits. Our approach learns a bandit model
for the target domain by collecting feedback from the source domain. Our
theoretical analysis shows that our algorithm maintains a sub-linear regret
bound even when adapting across domains. Empirical results show that our approach
outperforms the state-of-the-art contextual bandit algorithms on real-world
datasets.
| no_new_dataset | 0.94366 |
2406.19774 | Yixing Li | Yixing Li, Yuxian Gu, Li Dong, Dequan Wang, Yu Cheng, Furu Wei | Direct Preference Knowledge Distillation for Large Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of large language models (LLMs), Knowledge Distillation (KD) is
a critical technique for transferring capabilities from teacher models to
student models. However, existing KD methods face limitations and challenges in
distillation of LLMs, including efficiency and insufficient measurement
capabilities of traditional KL divergence. It is shown that LLMs can serve as
an implicit reward function, which we define as a supplement to KL divergence.
In this work, we propose Direct Preference Knowledge Distillation (DPKD) for
LLMs. DPKD utilizes distribution divergence to represent the preference loss
and implicit reward function. We re-formulate KD of LLMs into two stages: first
optimizing an objective consisting of implicit reward and reverse KL
divergence and then improving the preference probability of teacher outputs
over student outputs. We conducted experiments and analysis on various datasets
with LLM parameters ranging from 120M to 13B and demonstrate the broad
applicability and effectiveness of our DPKD approach. Meanwhile, we prove the
value and effectiveness of the introduced implicit reward and output preference
in KD through experiments and theoretical analysis. The DPKD method outperforms
the baseline method in both output response precision and exact match
percentage. Code and data are available at https://aka.ms/dpkd.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2024 09:23:40 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 06:11:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Yixing",
""
],
[
"Gu",
"Yuxian",
""
],
[
"Dong",
"Li",
""
],
[
"Wang",
"Dequan",
""
],
[
"Cheng",
"Yu",
""
],
[
"Wei",
"Furu",
""
]
] | TITLE: Direct Preference Knowledge Distillation for Large Language Models
ABSTRACT: In the field of large language models (LLMs), Knowledge Distillation (KD) is
a critical technique for transferring capabilities from teacher models to
student models. However, existing KD methods face limitations and challenges in
distillation of LLMs, including efficiency and insufficient measurement
capabilities of traditional KL divergence. It is shown that LLMs can serve as
an implicit reward function, which we define as a supplement to KL divergence.
In this work, we propose Direct Preference Knowledge Distillation (DPKD) for
LLMs. DPKD utilizes distribution divergence to represent the preference loss
and implicit reward function. We re-formulate KD of LLMs into two stages: first
optimizing an objective consisting of implicit reward and reverse KL
divergence and then improving the preference probability of teacher outputs
over student outputs. We conducted experiments and analysis on various datasets
with LLM parameters ranging from 120M to 13B and demonstrate the broad
applicability and effectiveness of our DPKD approach. Meanwhile, we prove the
value and effectiveness of the introduced implicit reward and output preference
in KD through experiments and theoretical analysis. The DPKD method outperforms
the baseline method in both output response precision and exact match
percentage. Code and data are available at https://aka.ms/dpkd.
| no_new_dataset | 0.946745 |
2407.00342 | Kibeom Nam | Kibeom Nam | KPC-cF: Aspect-Based Sentiment Analysis via Implicit-Feature Alignment
with Corpus Filtering | Work in Progress, DMLR@ICML 2024 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Investigations into Aspect-Based Sentiment Analysis (ABSA) for Korean
industrial reviews are notably lacking in the existing literature. Our research
proposes an intuitive and effective framework for ABSA in low-resource
languages such as Korean. It optimizes prediction labels by integrating
translated benchmark and unlabeled Korean data. Using a model fine-tuned on
translated data, we pseudo-labeled the actual Korean NLI set. Subsequently, we
applied LaBSE and \MSP{}-based filtering to this pseudo-NLI set as an implicit
feature, enhancing Aspect Category Detection and Polarity determination through
additional training. Incorporating dual filtering, this model bridges dataset
gaps and facilitates feature alignment with minimal resources. By implementing
alignment pipelines, our approach aims to leverage high-resource datasets to
develop reliable predictive and refined models within corporate or individual
communities in low-resource language countries. Compared to English ABSA, our
framework showed an approximately 3\% difference in F1 scores and accuracy. We
will release our dataset and code for Korean ABSA, at this link.
| [
{
"version": "v1",
"created": "Sat, 29 Jun 2024 07:01:51 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Jul 2024 17:08:36 GMT"
},
{
"version": "v3",
"created": "Sat, 20 Jul 2024 09:32:01 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Nov 2024 17:59:10 GMT"
},
{
"version": "v5",
"created": "Sat, 8 Mar 2025 07:54:39 GMT"
},
{
"version": "v6",
"created": "Sun, 6 Apr 2025 17:37:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nam",
"Kibeom",
""
]
] | TITLE: KPC-cF: Aspect-Based Sentiment Analysis via Implicit-Feature Alignment
with Corpus Filtering
ABSTRACT: Investigations into Aspect-Based Sentiment Analysis (ABSA) for Korean
industrial reviews are notably lacking in the existing literature. Our research
proposes an intuitive and effective framework for ABSA in low-resource
languages such as Korean. It optimizes prediction labels by integrating
translated benchmark and unlabeled Korean data. Using a model fine-tuned on
translated data, we pseudo-labeled the actual Korean NLI set. Subsequently, we
applied LaBSE and \MSP{}-based filtering to this pseudo-NLI set as an implicit
feature, enhancing Aspect Category Detection and Polarity determination through
additional training. Incorporating dual filtering, this model bridges dataset
gaps and facilitates feature alignment with minimal resources. By implementing
alignment pipelines, our approach aims to leverage high-resource datasets to
develop reliable predictive and refined models within corporate or individual
communities in low-resource language countries. Compared to English ABSA, our
framework showed an approximately 3\% difference in F1 scores and accuracy. We
will release our dataset and code for Korean ABSA, at this link.
| new_dataset | 0.528594 |
2407.00923 | Oleg Vasilyev | Oleg Vasilyev, Randy Sawaya, John Bohannon | Preserving Multilingual Quality While Tuning Query Encoder on English
Only | Accepted to NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A query encoder of a dual passage retrieval system can be tuned for specific
types of queries or domains, while the precomputed and stored documents
representations are kept intact. Switching from one query encoder to another
when needed is easily feasible, unlike overhauling the embeddings of a whole
knowledge base. In this work we raise a question: Can the generic, original
qualities of the encoder be preserved or at least left not too degraded when it
is tuned on a narrow domain? We conducted experiments on a high quality
multilingual embedding model: Tuning it on a single English-only dataset, we
observe that the tuning not only preserves the multilingual qualities, but even
improves them. The embedding qualities on distinctly different data are also
improved or at least preserved. Drawing on our observations, we suggest a more
general hypothesis: Tuning with intentionally low learning rate can preserve or
improve a system's properties acquired in training, but not specifically
targeted by tuning. We call this adiabatic tuning and provide tentative
explanations.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 03:03:18 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Aug 2024 06:02:12 GMT"
},
{
"version": "v3",
"created": "Sat, 14 Dec 2024 01:23:33 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 23:03:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Vasilyev",
"Oleg",
""
],
[
"Sawaya",
"Randy",
""
],
[
"Bohannon",
"John",
""
]
] | TITLE: Preserving Multilingual Quality While Tuning Query Encoder on English
Only
ABSTRACT: A query encoder of a dual passage retrieval system can be tuned for specific
types of queries or domains, while the precomputed and stored documents
representations are kept intact. Switching from one query encoder to another
when needed is easily feasible, unlike overhauling the embeddings of a whole
knowledge base. In this work we raise a question: Can the generic, original
qualities of the encoder be preserved or at least left not too degraded when it
is tuned on a narrow domain? We conducted experiments on a high quality
multilingual embedding model: Tuning it on a single English-only dataset, we
observe that the tuning not only preserves the multilingual qualities, but even
improves them. The embedding qualities on distinctly different data are also
improved or at least preserved. Drawing on our observations, we suggest a more
general hypothesis: Tuning with intentionally low learning rate can preserve or
improve a system's properties acquired in training, but not specifically
targeted by tuning. We call this adiabatic tuning and provide tentative
explanations.
| no_new_dataset | 0.947527 |
2407.05952 | Nikhil Abhyankar | Nikhil Abhyankar, Vivek Gupta, Dan Roth, Chandan K. Reddy | H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables | NAACL 2025 Main Conference | null | null | null | cs.DB cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Tabular reasoning involves interpreting natural language queries about
tabular data, which presents a unique challenge of combining language
understanding with structured data analysis. Existing methods employ either
textual reasoning, which excels in semantic interpretation but struggles with
mathematical operations, or symbolic reasoning, which handles computations well
but lacks semantic understanding. This paper introduces a novel algorithm
H-STAR that integrates both symbolic and semantic (textual) approaches in a
two-stage process to address these limitations. H-STAR employs: (1) step-wise
table extraction using `multi-view' column retrieval followed by row
extraction, and (2) adaptive reasoning that adapts reasoning strategies based
on question types, utilizing semantic reasoning for direct lookup and complex
lexical queries while augmenting textual reasoning with symbolic reasoning
support for quantitative and logical tasks. Our extensive experiments
demonstrate that H-STAR significantly outperforms state-of-the-art methods
across three tabular question-answering (QA) and fact-verification datasets,
underscoring its effectiveness and efficiency.
| [
{
"version": "v1",
"created": "Sat, 29 Jun 2024 21:24:19 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Oct 2024 23:44:31 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 00:44:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Abhyankar",
"Nikhil",
""
],
[
"Gupta",
"Vivek",
""
],
[
"Roth",
"Dan",
""
],
[
"Reddy",
"Chandan K.",
""
]
] | TITLE: H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables
ABSTRACT: Tabular reasoning involves interpreting natural language queries about
tabular data, which presents a unique challenge of combining language
understanding with structured data analysis. Existing methods employ either
textual reasoning, which excels in semantic interpretation but struggles with
mathematical operations, or symbolic reasoning, which handles computations well
but lacks semantic understanding. This paper introduces a novel algorithm
H-STAR that integrates both symbolic and semantic (textual) approaches in a
two-stage process to address these limitations. H-STAR employs: (1) step-wise
table extraction using `multi-view' column retrieval followed by row
extraction, and (2) adaptive reasoning that adapts reasoning strategies based
on question types, utilizing semantic reasoning for direct lookup and complex
lexical queries while augmenting textual reasoning with symbolic reasoning
support for quantitative and logical tasks. Our extensive experiments
demonstrate that H-STAR significantly outperforms state-of-the-art methods
across three tabular question-answering (QA) and fact-verification datasets,
underscoring its effectiveness and efficiency.
| no_new_dataset | 0.940681 |
2407.13349 | HongHao Li | Honghao Li, Yiwen Zhang, Yi Zhang, Hanwei Li, Lei Sang, and Jieming
Zhu | FCN: Fusing Exponential and Linear Cross Network for Click-Through Rate
Prediction | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an important modeling paradigm in click-through rate (CTR) prediction, the
Deep & Cross Network (DCN) and its derivative models have gained widespread
recognition primarily due to their success in a trade-off between computational
cost and performance. This paradigm employs a cross network to explicitly model
feature interactions with linear growth, while leveraging deep neural networks
(DNN) to implicitly capture higher-order feature interactions. However, these
models still face several key limitations: (1) The performance of existing
explicit feature interaction methods lags behind that of implicit DNN,
resulting in overall model performance being dominated by the DNN; (2) While
these models claim to capture high-order feature interactions, they often
overlook potential noise within these interactions; (3) The learning process
for different interaction network branches lacks appropriate supervision
signals; and (4) The high-order feature interactions captured by these models
are often implicit and non-interpretable due to their reliance on DNN.
To address the identified limitations, this paper proposes a novel model,
called Fusing Cross Network (FCN), along with two sub-networks: Linear Cross
Network (LCN) and Exponential Cross Network (ECN). FCN explicitly captures
feature interactions with both linear and exponential growth, eliminating the
need to rely on implicit DNN. Moreover, we introduce the Self-Mask operation to
filter noise layer by layer and reduce the number of parameters in the cross
network by half. To effectively train these two cross networks, we propose a
simple yet effective loss function called Tri-BCE, which provides tailored
supervision signals for each network. We evaluate the effectiveness,
efficiency, and interpretability of FCN on six benchmark datasets. Furthermore,
by integrating LCN and ECN, FCN achieves a new state-of-the-art performance.
| [
{
"version": "v1",
"created": "Thu, 18 Jul 2024 09:49:13 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jul 2024 03:23:01 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Jul 2024 16:30:42 GMT"
},
{
"version": "v4",
"created": "Wed, 31 Jul 2024 15:59:46 GMT"
},
{
"version": "v5",
"created": "Tue, 6 Aug 2024 14:10:16 GMT"
},
{
"version": "v6",
"created": "Fri, 9 Aug 2024 06:31:56 GMT"
},
{
"version": "v7",
"created": "Sat, 5 Apr 2025 07:06:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Honghao",
""
],
[
"Zhang",
"Yiwen",
""
],
[
"Zhang",
"Yi",
""
],
[
"Li",
"Hanwei",
""
],
[
"Sang",
"Lei",
""
],
[
"Zhu",
"Jieming",
""
]
] | TITLE: FCN: Fusing Exponential and Linear Cross Network for Click-Through Rate
Prediction
ABSTRACT: As an important modeling paradigm in click-through rate (CTR) prediction, the
Deep & Cross Network (DCN) and its derivative models have gained widespread
recognition primarily due to their success in a trade-off between computational
cost and performance. This paradigm employs a cross network to explicitly model
feature interactions with linear growth, while leveraging deep neural networks
(DNN) to implicitly capture higher-order feature interactions. However, these
models still face several key limitations: (1) The performance of existing
explicit feature interaction methods lags behind that of implicit DNN,
resulting in overall model performance being dominated by the DNN; (2) While
these models claim to capture high-order feature interactions, they often
overlook potential noise within these interactions; (3) The learning process
for different interaction network branches lacks appropriate supervision
signals; and (4) The high-order feature interactions captured by these models
are often implicit and non-interpretable due to their reliance on DNN.
To address the identified limitations, this paper proposes a novel model,
called Fusing Cross Network (FCN), along with two sub-networks: Linear Cross
Network (LCN) and Exponential Cross Network (ECN). FCN explicitly captures
feature interactions with both linear and exponential growth, eliminating the
need to rely on implicit DNN. Moreover, we introduce the Self-Mask operation to
filter noise layer by layer and reduce the number of parameters in the cross
network by half. To effectively train these two cross networks, we propose a
simple yet effective loss function called Tri-BCE, which provides tailored
supervision signals for each network. We evaluate the effectiveness,
efficiency, and interpretability of FCN on six benchmark datasets. Furthermore,
by integrating LCN and ECN, FCN achieves a new state-of-the-art performance.
| no_new_dataset | 0.950595 |
2407.19992 | Hao Shu | Hao Shu | Enhancing Edge Detection by Texture Handling Architecture and Noiseless
Training Data | 28 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image edge detection (ED) is a fundamental task in computer vision. While
convolution-based models have significantly advanced ED performance, achieving
high precision under strict error tolerance constraints remains challenging.
Furthermore, the reliance on noisy, human-annotated training data limits model
performance, even when the inputs are edge maps themselves. In this paper, we
address these challenges in two key aspects. First, we propose a novel ED model
incorporating Cascaded Skipping Density Blocks (CSDB) to enhance precision and
robustness. Our model achieves state-of-the-art (SOTA) performance across
multiple datasets, with substantial improvements in average precision (AP), as
demonstrated by extensive experiments. Second, we introduce a novel data
augmentation strategy that enables the integration of noiseless annotations
during training, improving model performance, particularly when processing edge
maps directly. Our findings contribute to a more precise ED architecture and
the first method for integrating noiseless training data into ED tasks,
offering potential directions for improving ED models. Codes can be found on
https://github.com/Hao-B-Shu/SDPED.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2024 13:24:55 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Oct 2024 12:22:31 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Oct 2024 10:24:45 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 02:17:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shu",
"Hao",
""
]
] | TITLE: Enhancing Edge Detection by Texture Handling Architecture and Noiseless
Training Data
ABSTRACT: Image edge detection (ED) is a fundamental task in computer vision. While
convolution-based models have significantly advanced ED performance, achieving
high precision under strict error tolerance constraints remains challenging.
Furthermore, the reliance on noisy, human-annotated training data limits model
performance, even when the inputs are edge maps themselves. In this paper, we
address these challenges in two key aspects. First, we propose a novel ED model
incorporating Cascaded Skipping Density Blocks (CSDB) to enhance precision and
robustness. Our model achieves state-of-the-art (SOTA) performance across
multiple datasets, with substantial improvements in average precision (AP), as
demonstrated by extensive experiments. Second, we introduce a novel data
augmentation strategy that enables the integration of noiseless annotations
during training, improving model performance, particularly when processing edge
maps directly. Our findings contribute to a more precise ED architecture and
the first method for integrating noiseless training data into ED tasks,
offering potential directions for improving ED models. Codes can be found on
https://github.com/Hao-B-Shu/SDPED.
| no_new_dataset | 0.952397 |
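Editor's note: the edge-detection record above names Cascaded Skipping Density Blocks (CSDB) but gives no architectural detail. The sketch below shows a generic densely connected block with a skip path around it, purely to illustrate the kind of structure implied; the layer count, growth rate, and layout are assumptions, not the paper's design.

```python
# Hypothetical sketch of a dense block with a surrounding skip connection.
import torch
import torch.nn as nn

class DenseSkipBlock(nn.Module):
    """Each conv sees the concatenation of all earlier feature maps (dense
    connectivity); a 1x1 conv projects back so blocks can be cascaded."""
    def __init__(self, channels: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(
                nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True))
            )
            in_ch += growth
        self.project = nn.Conv2d(in_ch, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1)) + x  # skip around the whole block

if __name__ == "__main__":
    block = DenseSkipBlock(channels=32)
    print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```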
2408.07107 | Wenxuan Yang | Wenxuan Yang, Hanyu Zhang, Weimin Tan, Yuqi Sun, Bo Yan | A Self-Supervised Paradigm for Data-Efficient Medical Foundation Model
Pre-training: V-information Optimization Framework | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised pre-training of medical foundation models on large-scale
datasets demonstrates exceptional performance. Recent research challenges this
common paradigm by introducing data-effective learning approaches,
demonstrating that merely increasing pre-training data volume does not
necessarily improve model performance. However, current methods still have
unclear standards and the underlying theoretical foundation remains unknown. In
this paper, as the first attempt to address this limitation, we introduce
V-information into self-supervised pre-training of foundation models to provide
a theoretical foundation for sample selection. Our derivation confirms that by
optimizing V-information, sample selection can be framed as an optimization
problem where choosing diverse and challenging samples enhances model
performance even under limited training data. Under this guidance, we develop
an optimized data-effective learning method (OptiDEL) to optimize V-information
in real-world medical domains by generating more diverse and harder samples. We
compare the OptiDEL method with state-of-the-art approaches finding that
OptiDEL consistently outperforms existing approaches across eight different
datasets, with foundation models trained on only 5% of the pre-training data
achieving up to 6.2% higher mIoU than those trained on the full dataset.
Remarkably, OptiDEL demonstrates an average improvement of 4.7% mIoU over
competing methods while using 20x less training data.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 10:28:54 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Aug 2024 12:19:44 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Nov 2024 08:24:19 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Apr 2025 02:50:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yang",
"Wenxuan",
""
],
[
"Zhang",
"Hanyu",
""
],
[
"Tan",
"Weimin",
""
],
[
"Sun",
"Yuqi",
""
],
[
"Yan",
"Bo",
""
]
] | TITLE: A Self-Supervised Paradigm for Data-Efficient Medical Foundation Model
Pre-training: V-information Optimization Framework
ABSTRACT: Self-supervised pre-training of medical foundation models on large-scale
datasets demonstrates exceptional performance. Recent research challenges this
common paradigm by introducing data-effective learning approaches,
demonstrating that merely increasing pre-training data volume does not
necessarily improve model performance. However, current methods still have
unclear standards and the underlying theoretical foundation remains unknown. In
this paper, as the first attempt to address this limitation, we introduce
V-information into self-supervised pre-training of foundation models to provide
a theoretical foundation for sample selection. Our derivation confirms that by
optimizing V-information, sample selection can be framed as an optimization
problem where choosing diverse and challenging samples enhances model
performance even under limited training data. Under this guidance, we develop
an optimized data-effective learning method (OptiDEL) to optimize V-information
in real-world medical domains by generating more diverse and harder samples. We
compare the OptiDEL method with state-of-the-art approaches, finding that
OptiDEL consistently outperforms existing approaches across eight different
datasets, with foundation models trained on only 5% of the pre-training data
achieving up to 6.2% higher mIoU than those trained on the full dataset.
Remarkably, OptiDEL demonstrates an average improvement of 4.7% mIoU over
competing methods while using 20x less training data.
| no_new_dataset | 0.946745 |
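Editor's note: the OptiDEL record above argues that selecting diverse and challenging samples can beat training on the full dataset. The snippet below is an illustrative selection heuristic only: it scores candidates by loss (difficulty) and greedy farthest-point distance (diversity), which is an assumption about how such a V-information-motivated selection could be approximated, not OptiDEL's actual procedure.

```python
# Hypothetical hard-and-diverse subset selection sketch.
import numpy as np

def select_subset(embeddings: np.ndarray, losses: np.ndarray, budget: int) -> list[int]:
    """Greedily pick `budget` indices that are both hard (high loss) and far
    from already-selected samples in embedding space."""
    selected = [int(np.argmax(losses))]  # start from the hardest sample
    min_dist = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(budget - 1):
        score = losses * min_dist        # trade off difficulty and diversity
        score[selected] = -np.inf        # never re-pick a selected sample
        nxt = int(np.argmax(score))
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected

# Example: keep roughly 5% of 1,000 candidate samples.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb, loss = rng.normal(size=(1000, 32)), rng.random(1000)
    print(len(select_subset(emb, loss, budget=50)))  # 50
```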
2408.07108 | Marina Zajnulina | Marina Zajnulina | Shannon Entropy Helps Optimize the Performance of a
Frequency-Multiplexed Extreme Learning Machine | null | null | null | null | physics.optics | http://creativecommons.org/licenses/by/4.0/ | Knowing the dynamics of neuromorphic photonic schemes would allow their
optimization for controlled data-processing capability at possibly minimized
energy consumption levels. In nonlinear substrates such as optical fibers or
semiconductors, these dynamics can widely vary depending on each optical input
encoded with information to process and are in general difficult to estimate.
Thus, other approaches are required to optimize the schemes. Using a
frequency-multiplexed fiber-based Extreme Learning Machine as an example for a
classification task, I use Shannon entropy for optical power and introduce
Shannon entropy for optical phase and spectrum, all averaged only over a small
subset of the dataset to process. I show that the maximum and upper moderate
optical power and phase entropies relate to the best data-processing capability
of the Machine and, thus, can be used to find optimal system parameters such as
fiber length, input power, and group-velocity dispersion in a more
time-efficient manner than running numerical simulations emulating the scheme.
Shannon entropy of the spectrum indicates in which parameter space the
broadest possible frequency combs can be expected and is interesting from
the perspective of frequency comb and supercontinuum generation. The introduced
entropies are general and can be easily applied to describe and optimize other
neuromorphic-photonic or nonlinear-optics schemes.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 11:03:54 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 22:21:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zajnulina",
"Marina",
""
]
] | TITLE: Shannon Entropy Helps Optimize the Performance of a
Frequency-Multiplexed Extreme Learning Machine
ABSTRACT: Knowing the dynamics of neuromorphic photonic schemes would allow their
optimization for controlled data-processing capability at possibly minimized
energy consumption levels. In nonlinear substrates such as optical fibers or
semiconductors, these dynamics can widely vary depending on each optical input
encoded with information to process and are in general difficult to estimate.
Thus, other approaches are required to optimize the schemes. Using a
frequency-multiplexed fiber-based Extreme Learning Machine as an example for a
classification task, I use Shannon entropy for optical power and introduce
Shannon entropy for optical phase and spectrum, all averaged only over a small
subset of the dataset to process. I show that the maximum and upper moderate
optical power and phase entropies relate to the best data-processing capability
of the Machine and, thus, can be used to find optimal system parameters such as
fiber length, input power, and group-velocity dispersion in a more
time-efficient manner than running numerical simulations emulating the scheme.
Shannon entropy of the spectrum indicates in which parameter space the
broadest possible frequency combs can be expected and is interesting from
the perspective of frequency comb and supercontinuum generation. The introduced
entropies are general and can be easily applied to describe and optimize other
neuromorphic-photonic or nonlinear-optics schemes.
| no_new_dataset | 0.951142 |
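Editor's note: the Shannon-entropy record above evaluates power, phase, and spectral entropies over a small subset of inputs. A minimal sketch of two such estimators is given below; treating the phase entropy as the histogram entropy of the instantaneous phase over [-pi, pi) is an assumption about how it could be computed, not the author's stated definition.

```python
# Hypothetical entropy estimators for a complex optical field sampled in time.
import numpy as np

def spectral_entropy(field_t: np.ndarray) -> float:
    """Shannon entropy (bits) of the normalised power spectrum of a time-domain field."""
    psd = np.abs(np.fft.fft(field_t)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                      # avoid log(0)
    return float(-(p * np.log2(p)).sum())

def phase_entropy(field_t: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy (bits) of a histogram of the instantaneous phase."""
    hist, _ = np.histogram(np.angle(field_t), bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A broader spectrum (more comb lines) yields a larger spectral entropy.
if __name__ == "__main__":
    t = np.linspace(0, 1, 2048, endpoint=False)
    narrow = np.exp(2j * np.pi * 100 * t)
    broad = narrow + 0.5 * np.exp(2j * np.pi * 300 * t) + 0.25 * np.exp(2j * np.pi * 500 * t)
    print(spectral_entropy(narrow) < spectral_entropy(broad))  # True
```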