id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2403.16649 | Felton Fang | Feiteng Fang, Liang Zhu, Min Yang, Xi Feng, Jinchang Hou, Qixuan Zhao,
Chengming Li, Xiping Hu and Ruifeng Xu | CLHA: A Simple yet Effective Contrastive Learning Framework for Human
Alignment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning from human feedback (RLHF) is a crucial technique in
aligning large language models (LLMs) with human preferences, ensuring these
LLMs behave in beneficial and comprehensible ways to users. However, a
longstanding challenge in human alignment techniques based on reinforcement
learning lies in their inherent complexity and difficulty in training. To
address this challenge, we present a simple yet effective Contrastive Learning
Framework for Human Alignment (CLHA) to align LLMs with human preferences
directly. CLHA employs a novel rescoring strategy to evaluate the noise within
the data by considering its inherent quality and dynamically adjusting the
training process. Simultaneously, CLHA utilizes pairwise contrastive loss and
adaptive supervised fine-tuning loss to adaptively modify the likelihood of
generating responses, ensuring enhanced alignment with human preferences. Using
advanced methods, CLHA surpasses other algorithms, showcasing superior
performance in terms of reward model scores, automatic evaluations, and human
assessments on the widely used "Helpful and Harmless" dataset.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 11:37:15 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 06:08:20 GMT"
}
] | 1,711,497,600,000 | [
[
"Fang",
"Feiteng",
""
],
[
"Zhu",
"Liang",
""
],
[
"Yang",
"Min",
""
],
[
"Feng",
"Xi",
""
],
[
"Hou",
"Jinchang",
""
],
[
"Zhao",
"Qixuan",
""
],
[
"Li",
"Chengming",
""
],
[
"Hu",
"Xiping",
""
],
[
"Xu",
"Ruifeng",
""
]
] |
2403.16667 | Fernando Acero | Fernando Acero, Parisa Zehtabi, Nicolas Marchesotti, Michael Cashmore,
Daniele Magazzeni, Manuela Veloso | Deep Reinforcement Learning and Mean-Variance Strategies for Responsible
Portfolio Optimization | Presented at the AAAI 2024 Workshop on AI in Finance for Social
Impact | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Portfolio optimization involves determining the optimal allocation of
portfolio assets in order to maximize a given investment objective.
Traditionally, some form of mean-variance optimization is used with the aim of
maximizing returns while minimizing risk; more recently, however, deep
reinforcement learning formulations have been explored. Increasingly, investors
have demonstrated an interest in incorporating ESG objectives when making
investment decisions, and modifications to the classical mean-variance
optimization framework have been developed. In this work, we study the use of
deep reinforcement learning for responsible portfolio optimization, by
incorporating ESG states and objectives, and provide comparisons against
modified mean-variance approaches. Our results show that deep reinforcement
learning policies can provide competitive performance against mean-variance
approaches for responsible portfolio allocation across additive and
multiplicative utility functions of financial and ESG responsibility
objectives.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 12:04:03 GMT"
}
] | 1,711,411,200,000 | [
[
"Acero",
"Fernando",
""
],
[
"Zehtabi",
"Parisa",
""
],
[
"Marchesotti",
"Nicolas",
""
],
[
"Cashmore",
"Michael",
""
],
[
"Magazzeni",
"Daniele",
""
],
[
"Veloso",
"Manuela",
""
]
] |
2403.16728 | Artem Khrapov | Artem Khrapov, Vadim Popov, Tasnima Sadekova, Assel Yermekova, Mikhail
Kudinov | Improving Diffusion Models's Data-Corruption Resistance using Scheduled
Pseudo-Huber Loss | 13 pages, 16 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diffusion models are known to be vulnerable to outliers in training data. In
this paper we study an alternative diffusion loss function, which can preserve
the high quality of generated data like the original squared $L_{2}$ loss while
at the same time being robust to outliers. We propose to use pseudo-Huber loss
function with a time-dependent parameter to allow for the trade-off between
robustness on the most vulnerable early reverse-diffusion steps and fine
details restoration on the final steps. We show that pseudo-Huber loss with the
time-dependent parameter exhibits better performance on corrupted datasets in
both image and audio domains. In addition, the loss function we propose can
potentially help diffusion models to resist dataset corruption while not
requiring data filtering or purification compared to conventional training
algorithms.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 13:02:43 GMT"
}
] | 1,711,411,200,000 | [
[
"Khrapov",
"Artem",
""
],
[
"Popov",
"Vadim",
""
],
[
"Sadekova",
"Tasnima",
""
],
[
"Yermekova",
"Assel",
""
],
[
"Kudinov",
"Mikhail",
""
]
] |
2403.16732 | Nikita Durasov | Nikita Durasov, Doruk Oner, Jonathan Donier, Hieu Le, Pascal Fua | Enabling Uncertainty Estimation in Iterative Neural Networks | Accepted at ICML 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Turning pass-through network architectures into iterative ones, which use
their own output as input, is a well-known approach for boosting performance.
In this paper, we argue that such architectures offer an additional benefit:
The convergence rate of their successive outputs is highly correlated with the
accuracy of the value to which they converge. Thus, we can use the convergence
rate as a useful proxy for uncertainty. This results in an approach to
uncertainty estimation that provides state-of-the-art estimates at a much lower
computational cost than techniques like Ensembles, and without requiring any
modifications to the original iterative model. We demonstrate its practical
value by embedding it in two application domains: road detection in aerial
images and the estimation of aerodynamic properties of 2D and 3D shapes.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 13:06:31 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 10:10:19 GMT"
}
] | 1,717,113,600,000 | [
[
"Durasov",
"Nikita",
""
],
[
"Oner",
"Doruk",
""
],
[
"Donier",
"Jonathan",
""
],
[
"Le",
"Hieu",
""
],
[
"Fua",
"Pascal",
""
]
] |
2403.16750 | Aman Kumar | Deepak Narayan Gadde, Aman Kumar, Thomas Nalapat, Evgenii Rezunov and
Fabio Cappellini | All Artificial, Less Intelligence: GenAI through the Lens of Formal
Verification | Published in DVCon U.S. 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern hardware designs have grown increasingly efficient and complex.
However, they are often susceptible to Common Weakness Enumerations (CWEs).
This paper is focused on the formal verification of CWEs in a dataset of
hardware designs written in SystemVerilog from Generative Artificial
Intelligence (AI) powered by Large Language Models (LLMs). We applied formal
verification to categorize each hardware design as vulnerable or CWE-free. This
dataset was generated by 4 different LLMs and features a unique set of designs
for each of the 10 CWEs we target in our paper. We have associated the
identified vulnerabilities with CWE numbers for a dataset of 60,000 generated
SystemVerilog Register Transfer Level (RTL) code samples. It was also found that most
LLMs are not aware of any hardware CWEs; hence they are usually not considered
when generating the hardware code. Our study reveals that approximately 60% of
the hardware designs generated by LLMs are prone to CWEs, posing potential
safety and security risks. The dataset could be ideal for training LLMs and
Machine Learning (ML) algorithms to abstain from generating CWE-prone hardware
designs.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 13:23:24 GMT"
}
] | 1,711,411,200,000 | [
[
"Gadde",
"Deepak Narayan",
""
],
[
"Kumar",
"Aman",
""
],
[
"Nalapat",
"Thomas",
""
],
[
"Rezunov",
"Evgenii",
""
],
[
"Cappellini",
"Fabio",
""
]
] |
2403.16808 | Jessica Kelly | J. Kelly, S. Zafar, L. Heidemann, J. Zacchi, D. Espinoza, N. Mata | Navigating the EU AI Act: A Methodological Approach to Compliance for
Safety-critical Products | To be published in: 2024 IEEE Conference on Artificial Intelligence
(CAI 2024) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In December 2023, the European Parliament provisionally agreed on the EU AI
Act. This unprecedented regulatory framework for AI systems lays out guidelines
to ensure the safety, legality, and trustworthiness of AI products. This paper
presents a methodology for interpreting the EU AI Act requirements for
high-risk AI systems by leveraging product quality models. We first propose an
extended product quality model for AI systems, incorporating attributes
relevant to the Act not covered by current quality models. We map the Act
requirements to relevant quality attributes with the goal of refining them into
measurable characteristics. We then propose a contract-based approach to derive
technical requirements at the stakeholder level. This facilitates the
development and assessment of AI systems that not only adhere to established
quality standards, but also comply with the regulatory requirements outlined in
the Act for high-risk (including safety-critical) AI systems. We demonstrate
the applicability of this methodology on an exemplary automotive supply chain
use case, where several stakeholders interact to achieve EU AI Act compliance.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 14:32:18 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Mar 2024 08:59:17 GMT"
}
] | 1,711,497,600,000 | [
[
"Kelly",
"J.",
""
],
[
"Zafar",
"S.",
""
],
[
"Heidemann",
"L.",
""
],
[
"Zacchi",
"J.",
""
],
[
"Espinoza",
"D.",
""
],
[
"Mata",
"N.",
""
]
] |
2403.16824 | Blai Bonet | Blai Bonet, Dominik Drexler, Hector Geffner | On Policy Reuse: An Expressive Language for Representing and Executing
General Policies that Call Other Policies | ICAPS 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, a simple but powerful language for expressing and learning general
policies and problem decompositions (sketches) has been introduced in terms of
rules defined over a set of Boolean and numerical features. In this work, we
consider three extensions of this language aimed at making policies and
sketches more flexible and reusable: internal memory states, as in finite state
controllers; indexical features, whose values are a function of the state and a
number of internal registers that can be loaded with objects; and modules that
wrap up policies and sketches and allow them to call each other by passing
parameters. In addition, unlike general policies that select state transitions
rather than ground actions, the new language allows for the selection of such
actions. The expressive power of the resulting language for policies and
sketches is illustrated through a number of examples.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 14:48:54 GMT"
}
] | 1,711,411,200,000 | [
[
"Bonet",
"Blai",
""
],
[
"Drexler",
"Dominik",
""
],
[
"Geffner",
"Hector",
""
]
] |
2403.16858 | Zerui Wang | Zerui Wang, Yan Liu, Abishek Arumugam Thiruselvi, Abdelwahab
Hamou-Lhadj | XAIport: A Service Framework for the Early Adoption of XAI in AI Model
Development | Accepted at the ICSE'24 conference, NIER track | null | 10.1145/3639476.3639759 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we propose the early adoption of Explainable AI (XAI) with a
focus on three properties: Quality of explanation, the explanation summaries
should be consistent across multiple XAI methods; Architectural Compatibility,
for effective integration in XAI, the architecture styles of both the XAI
methods and the models to be explained must be compatible with the framework;
Configurable operations, XAI explanations are operable, akin to machine
learning operations. Thus, an explanation for AI models should be reproducible
and tractable to be trustworthy. We present XAIport, a framework of XAI
microservices encapsulated into Open APIs to deliver early explanations as
observation for learning model quality assurance. XAIport enables configurable
XAI operations along with machine learning development. We quantify the
operational costs of incorporating XAI with three cloud computer vision
services on Microsoft Azure Cognitive Services, Google Cloud Vertex AI, and
Amazon Rekognition. Our findings show comparable operational costs between XAI
and traditional machine learning, with XAIport significantly improving both
cloud AI model performance and explanation stability.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 15:22:06 GMT"
}
] | 1,711,411,200,000 | [
[
"Wang",
"Zerui",
""
],
[
"Liu",
"Yan",
""
],
[
"Thiruselvi",
"Abishek Arumugam",
""
],
[
"Hamou-Lhadj",
"Abdelwahab",
""
]
] |
2403.16908 | Helge Spieker | Nassim Belmecheri, Arnaud Gotlieb, Nadjib Lazaar, Helge Spieker | Towards Trustworthy Automated Driving through Qualitative Scene
Understanding and Explanations | SAE International Journal of Connected and Automated Vehicles | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding driving scenes and communicating automated vehicle decisions
are key requirements for trustworthy automated driving. In this article, we
introduce the Qualitative Explainable Graph (QXG), which is a unified symbolic
and qualitative representation for scene understanding in urban mobility. The
QXG enables interpreting an automated vehicle's environment using sensor data
and machine learning models. It utilizes spatio-temporal graphs and qualitative
constraints to extract scene semantics from raw sensor inputs, such as LiDAR
and camera data, offering an interpretable scene model. A QXG can be
incrementally constructed in real-time, making it a versatile tool for
in-vehicle explanations across various sensor types. Our research showcases the
potential of QXG, particularly in the context of automated driving, where it
can rationalize decisions by linking the graph with observed actions. These
explanations can serve diverse purposes, from informing passengers and alerting
vulnerable road users to enabling post-hoc analysis of prior behaviors.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 16:19:33 GMT"
}
] | 1,711,411,200,000 | [
[
"Belmecheri",
"Nassim",
""
],
[
"Gotlieb",
"Arnaud",
""
],
[
"Lazaar",
"Nadjib",
""
],
[
"Spieker",
"Helge",
""
]
] |
2403.17101 | Lenore Blum | Lenore Blum and Manuel Blum | AI Consciousness is Inevitable: A Theoretical Computer Science
Perspective | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We look at consciousness through the lens of Theoretical Computer Science, a
branch of mathematics that studies computation under resource limitations. From
this perspective, we develop a formal machine model for consciousness. The
model is inspired by Alan Turing's simple yet powerful model of computation and
Bernard Baars' theater model of consciousness. Though extremely simple, the
model aligns at a high level with many of the major scientific theories of
human and animal consciousness, supporting our claim that machine consciousness
is inevitable.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 18:38:54 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Apr 2024 17:28:44 GMT"
},
{
"version": "v3",
"created": "Thu, 16 May 2024 23:07:04 GMT"
}
] | 1,716,163,200,000 | [
[
"Blum",
"Lenore",
""
],
[
"Blum",
"Manuel",
""
]
] |
2403.17108 | Marko Djukanovic Dr. | Marko Djukanovic, Stefan Kapunac, Aleksandar Kartelj, Dragan Matic | Graph Protection under Multiple Simultaneous Attacks: A Heuristic
Approach | 32 pages, 10 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work focuses on developing an effective meta-heuristic approach to
protect against simultaneous attacks on nodes of a network modeled using a
graph. Specifically, we focus on the $k$-strong Roman domination problem, a
generalization of the well-known Roman domination problem on graphs. This
general problem is about assigning integer weights to nodes that represent the
number of field armies stationed at each node in order to satisfy the
protection constraints while minimizing the total weights. These constraints
concern the protection of a graph against any simultaneous attack consisting of
$k \in \mathbb{N}$ nodes. An attack is considered repelled if each node labeled
0 can be defended by borrowing an army from one of its neighboring nodes,
ensuring that the neighbor retains at least one army for self-defense. The
$k$-SRD problem has practical applications in various areas, such as developing
counter-terrorism strategies or managing supply chain disruptions. The solution
to this problem is notoriously difficult to find, as even checking the
feasibility of the proposed solution requires an exponential number of steps.
We propose a variable neighborhood search algorithm in which the feasibility of
the solution is checked by introducing the concept of quasi-feasibility, which
is realized by careful sampling within the set of all possible attacks.
Extensive experimental evaluations show the scalability and robustness of the
proposed approach compared to the two exact approaches from the literature.
Experiments are conducted with random networks from the literature and newly
introduced random wireless networks as well as with real-world networks. A
practical application scenario, using real-world networks, involves applying
our approach to graphs extracted from GeoJSON files containing geographic
features of hundreds of cities or larger regions.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 18:46:13 GMT"
}
] | 1,711,497,600,000 | [
[
"Djukanovic",
"Marko",
""
],
[
"Kapunac",
"Stefan",
""
],
[
"Kartelj",
"Aleksandar",
""
],
[
"Matic",
"Dragan",
""
]
] |
2403.17306 | Anku Rani | Anku Rani, Vipula Rawte, Harshad Sharma, Neeraj Anand, Krishnav
Rajbangshi, Amit Sheth, Amitava Das | Visual Hallucination: Definition, Quantification, and Prescriptive
Remediations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The troubling rise of hallucination presents perhaps the most significant
impediment to the advancement of responsible AI. In recent times, considerable
research has focused on detecting and mitigating hallucination in Large
Language Models (LLMs). However, it's worth noting that hallucination is also
quite prevalent in Vision-Language models (VLMs). In this paper, we offer a
fine-grained discourse on profiling VLM hallucination based on two tasks: i)
image captioning, and ii) Visual Question Answering (VQA). We delineate eight
fine-grained orientations of visual hallucination: i) Contextual Guessing, ii)
Identity Incongruity, iii) Geographical Erratum, iv) Visual Illusion, v) Gender
Anomaly, vi) VLM as Classifier, vii) Wrong Reading, and viii) Numeric
Discrepancy. We curate Visual HallucInation eLiciTation (VHILT), a publicly
available dataset comprising 2,000 samples generated using eight VLMs across
two tasks of captioning and VQA along with human annotations for the categories
as mentioned earlier.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 01:28:42 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Mar 2024 03:52:14 GMT"
}
] | 1,712,016,000,000 | [
[
"Rani",
"Anku",
""
],
[
"Rawte",
"Vipula",
""
],
[
"Sharma",
"Harshad",
""
],
[
"Anand",
"Neeraj",
""
],
[
"Rajbangshi",
"Krishnav",
""
],
[
"Sheth",
"Amit",
""
],
[
"Das",
"Amitava",
""
]
] |
2403.17358 | Paula Stocco | Paula Stocco, Suhas Chundi, Arec Jamgochian, Mykel J. Kochenderfer | Addressing Myopic Constrained POMDP Planning with Recursive Dual Ascent | Accepted to the 2024 International Conference on Automated Planning
and Scheduling (ICAPS) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lagrangian-guided Monte Carlo tree search with global dual ascent has been
applied to solve large constrained partially observable Markov decision
processes (CPOMDPs) online. In this work, we demonstrate that these global dual
parameters can lead to myopic action selection during exploration, ultimately
leading to suboptimal decision making. To address this, we introduce
history-dependent dual variables that guide local action selection and are
optimized with recursive dual ascent. We empirically compare the performance of
our approach on a motivating toy example and two large CPOMDPs, demonstrating
improved exploration, and ultimately, safer outcomes.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 03:46:33 GMT"
}
] | 1,711,497,600,000 | [
[
"Stocco",
"Paula",
""
],
[
"Chundi",
"Suhas",
""
],
[
"Jamgochian",
"Arec",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2403.17395 | Zhen Li | Zhen Li, Kaixiang Zhu, Xuegong Zhou, Lingli Wang | An Open-source End-to-End Logic Optimization Framework for Large-scale
Boolean Network with Reinforcement Learning | 5 pages, 4 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an open-source end-to-end logic optimization framework for
large-scale Boolean networks with reinforcement learning.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 05:25:01 GMT"
}
] | 1,711,497,600,000 | [
[
"Li",
"Zhen",
""
],
[
"Zhu",
"Kaixiang",
""
],
[
"Zhou",
"Xuegong",
""
],
[
"Wang",
"Lingli",
""
]
] |
2403.17426 | Saurav Joshi | Saurav Joshi, Filip Ilievski, Jay Pujara | Knowledge-Powered Recommendation for an Improved Diet Water Footprint | 3 pages, 1 figure, AAAI'24 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to WWF, 1.1 billion people lack access to water, and 2.7 billion
experience water scarcity at least one month a year. By 2025, two-thirds of the
world's population may be facing water shortages. This highlights the urgency
of managing water usage efficiently, especially in water-intensive sectors like
food. This paper proposes a recommendation engine, powered by knowledge graphs,
aiming to facilitate sustainable and healthy food consumption. The engine
recommends ingredient substitutes in user recipes that improve nutritional
value and reduce environmental impact, particularly water footprint. The system
architecture includes source identification, information extraction, schema
alignment, knowledge graph construction, and user interface development. The
research offers a promising tool for promoting healthier eating habits and
contributing to water conservation efforts.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 06:47:17 GMT"
}
] | 1,711,497,600,000 | [
[
"Joshi",
"Saurav",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Pujara",
"Jay",
""
]
] |
2403.17532 | Yilin Wang | Yilin Wang, Minghao Hu, Zhen Huang, Dongsheng Li, Dong Yang, Xicheng
Lu | KC-GenRe: A Knowledge-constrained Generative Re-ranking Method Based on
Large Language Models for Knowledge Graph Completion | This paper has been accepted for publication in the proceedings of
LREC-COLING 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of knowledge graph completion (KGC) is to predict missing facts
among entities. Previous methods for KGC re-ranking are mostly built on
non-generative language models to obtain the probability of each candidate.
Recently, generative large language models (LLMs) have shown outstanding
performance on several tasks such as information extraction and dialog systems.
Leveraging them for KGC re-ranking allows exploiting their extensive
pre-trained knowledge and powerful generative capabilities. However, it may
encounter new problems when accomplishing the task, namely mismatch,
misordering and omission. To this end, we introduce KC-GenRe, a
knowledge-constrained generative re-ranking method based on LLMs for KGC. To
overcome the mismatch issue, we formulate the KGC re-ranking task as a
candidate identifier sorting generation problem implemented by generative LLMs.
To tackle the misordering issue, we develop a knowledge-guided interactive
training method that enhances the identification and ranking of candidates. To
address the omission issue, we design a knowledge-augmented constrained
inference method that enables contextual prompting and controlled generation,
so as to obtain valid rankings. Experimental results show that KC-GenRe
achieves state-of-the-art performance on four datasets, with gains of up to
6.7% and 7.7% in the MRR and Hits@1 metrics compared to previous methods, and
9.0% and 11.1% compared to that without re-ranking. Extensive analysis
demonstrates the effectiveness of components in KC-GenRe.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 09:36:59 GMT"
}
] | 1,711,497,600,000 | [
[
"Wang",
"Yilin",
""
],
[
"Hu",
"Minghao",
""
],
[
"Huang",
"Zhen",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Yang",
"Dong",
""
],
[
"Lu",
"Xicheng",
""
]
] |
2403.17607 | Kai Yuan Dr. | Kai Yuan, Christoph Bauinger, Xiangyi Zhang, Pascal Baehr, Matthias
Kirchhart, Darius Dabert, Adrien Tousnakhoff, Pierre Boudier, Michael
Paulitsch | Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a SYCL implementation of Multi-Layer Perceptrons (MLPs),
which targets and is optimized for the Intel Data Center GPU Max 1550. To
increase the performance, our implementation minimizes the slow global memory
accesses by maximizing the data reuse within the general register file and the
shared local memory by fusing the operations in each layer of the MLP. We show
with a simple roofline model that this results in a significant increase in the
arithmetic intensity, leading to improved performance, especially for
inference. We compare our approach to a similar CUDA implementation for MLPs
and show that our implementation on the Intel Data Center GPU outperforms the
CUDA implementation on Nvidia's H100 GPU by a factor of up to 2.84 in inference
and 1.75 in training. The paper also showcases the efficiency of our SYCL
implementation in three significant areas: Image Compression, Neural Radiance
Fields, and Physics-Informed Machine Learning. In all cases, our implementation
outperforms the off-the-shelf Intel Extension for PyTorch (IPEX) implementation
on the same Intel GPU by up to a factor of 30 and the CUDA PyTorch version on
Nvidia's H100 GPU by up to a factor of 19. The code can be found at
https://github.com/intel/tiny-dpcpp-nn.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 11:38:39 GMT"
}
] | 1,711,497,600,000 | [
[
"Yuan",
"Kai",
""
],
[
"Bauinger",
"Christoph",
""
],
[
"Zhang",
"Xiangyi",
""
],
[
"Baehr",
"Pascal",
""
],
[
"Kirchhart",
"Matthias",
""
],
[
"Dabert",
"Darius",
""
],
[
"Tousnakhoff",
"Adrien",
""
],
[
"Boudier",
"Pierre",
""
],
[
"Paulitsch",
"Michael",
""
]
] |
2403.17653 | Quratul-Ain Mahesar | Quratul-ain Mahesar, Nir Oren, Wamberto W. Vasconcelos | An Extension-based Approach for Computing and Verifying Preferences in
Abstract Argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an extension-based approach for computing and verifying
preferences in an abstract argumentation system. Although numerous
argumentation semantics have been developed previously for identifying
acceptable sets of arguments from an argumentation framework, there is a lack
of justification behind their acceptability based on implicit argument
preferences. Preference-based argumentation frameworks allow one to determine
what arguments are justified given a set of preferences. Our research considers
the inverse of the standard reasoning problem, i.e., given an abstract
argumentation framework and a set of justified arguments, we compute what the
possible preferences over arguments are. Furthermore, there is a need to verify
(i.e., assess) that the computed preferences would lead to the acceptable sets
of arguments. This paper presents a novel approach and algorithm for
exhaustively computing and enumerating all possible sets of preferences
(restricted to three identified cases) for a conflict-free set of arguments in
an abstract argumentation framework. We prove the soundness, completeness and
termination of the algorithm. The research establishes that preferences are
determined using an extension-based approach after the evaluation phase
(acceptability of arguments) rather than stated beforehand. In this work, we
focus our research study on grounded, preferred and stable semantics. We show
that the complexity of computing sets of preferences is exponential in the
number of arguments, and thus, describe an approximate approach and algorithm
to compute the preferences. Furthermore, we present novel algorithms for
verifying (i.e., assessing) the computed preferences. We provide details of the
implementation of the algorithms (source code has been made available), various
experiments performed to evaluate the algorithms and the analysis of the
results.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 12:36:11 GMT"
}
] | 1,711,497,600,000 | [
[
"Mahesar",
"Quratul-ain",
""
],
[
"Oren",
"Nir",
""
],
[
"Vasconcelos",
"Wamberto W.",
""
]
] |
2403.17683 | Yang Yang | Shengdong Xu, Zhouyang Chi, Yang Yang | Solution for Emotion Prediction Competition of Workshop on Emotionally
and Culturally Intelligent AI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report provides a detailed description of the method that we explored and
proposed in the WECIA Emotion Prediction Competition (EPC), which predicts a
person's emotion through an artistic work with a comment. The dataset of this
competition is ArtELingo, designed to encourage work on diversity across
languages and cultures. The dataset poses two main challenges, namely the modal
imbalance problem and the language-cultural differences problem. To address
these issues, we propose a simple yet effective approach called single-multi
modal with Emotion-Cultural specific prompt (ECSP), which focuses on using the
single-modal message to enhance the performance of multimodal models and a
well-designed prompt to reduce the cultural differences problem. To
clarify, our approach contains two main blocks:
(1) XLM-R\cite{conneau2019unsupervised}-based unimodal model and
X$^2$-VLM\cite{zeng2022x}-based multimodal model; (2) Emotion-Cultural specific
prompt. Our approach ranked first in the final test with a score of 0.627.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 13:14:18 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Mar 2024 14:44:06 GMT"
}
] | 1,712,016,000,000 | [
[
"Xu",
"Shengdong",
""
],
[
"Chi",
"Zhouyang",
""
],
[
"Yang",
"Yang",
""
]
] |
2403.17726 | Qingyuan Wang | Qingyuan Wang, Barry Cardiff, Antoine Frapp\'e, Benoit Larras, Deepu
John | Tiny Models are the Computational Saver for Large Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces TinySaver, an early-exit-like dynamic model compression
approach which employs tiny models to substitute large models adaptively.
Distinct from traditional compression techniques, dynamic methods like
TinySaver can leverage the difficulty differences to allow certain inputs to
complete their inference processes early, thereby conserving computational
resources. Most existing early exit designs are implemented by attaching
additional network branches to the model's backbone. Our study, however,
reveals that completely independent tiny models can replace a substantial
portion of the larger models' job with minimal impact on performance. Employing
them as the first exit can remarkably enhance computational efficiency. By
searching and employing the most appropriate tiny model as the computational
saver for a given large model, the proposed approaches work as a novel and
generic method for model compression. This finding will help the research
community in exploring new compression methods to address the escalating
computational demands posed by rapidly evolving AI models. Our evaluation of
this approach in ImageNet-1k classification demonstrates its potential to
reduce the number of compute operations by up to 90%, with only negligible
losses in performance, across various modern vision models. The code of this
work will be available.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 14:14:30 GMT"
}
] | 1,711,497,600,000 | [
[
"Wang",
"Qingyuan",
""
],
[
"Cardiff",
"Barry",
""
],
[
"Frappé",
"Antoine",
""
],
[
"Larras",
"Benoit",
""
],
[
"John",
"Deepu",
""
]
] |
2403.17735 | Xiang Tao | Xiang Tao, Mingqing Zhang, Qiang Liu, Shu Wu, Liang Wang | Out-of-distribution Rumor Detection via Test-Time Adaptation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Due to the rapid spread of rumors on social media, rumor detection has become
an extremely important challenge. Existing methods for rumor detection have
achieved good performance, as they have collected enough corpus from the same
data distribution for model training. However, significant distribution shifts
between the training data and real-world test data occur due to differences in
news topics, social media platforms, languages and the variance in propagation
scale caused by news popularity. This leads to a substantial decline in the
performance of these existing methods in Out-Of-Distribution (OOD) situations.
To address this problem, we propose a simple and efficient method named
Test-time Adaptation for Rumor Detection under distribution shifts (TARD). This
method models the propagation of news in the form of a propagation graph, and
builds propagation graph test-time adaptation framework, enhancing the model's
adaptability and robustness when facing OOD problems. Extensive experiments
conducted on two groups of datasets collected from real-world social platforms
demonstrate that our framework outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 14:24:01 GMT"
}
] | 1,711,584,000,000 | [
[
"Tao",
"Xiang",
""
],
[
"Zhang",
"Mingqing",
""
],
[
"Liu",
"Qiang",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Liang",
""
]
] |
2403.17742 | Elvio Amparore | Muhammad Rashid, Elvio G. Amparore, Enrico Ferrari, Damiano Verda | Using Stratified Sampling to Improve LIME Image Explanations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We investigate the use of a stratified sampling approach for LIME Image, a
popular model-agnostic explainable AI method for computer vision tasks, in
order to reduce the artifacts generated by typical Monte Carlo sampling. Such
artifacts are due to the undersampling of the dependent variable in the
synthetic neighborhood around the image being explained, which may result in
inadequate explanations due to the impossibility of fitting a linear regressor
on the sampled data. We then highlight a connection with the Shapley theory,
where similar arguments about undersampling and sample relevance were suggested
in the past. We derive all the formulas and adjustment factors required for an
unbiased stratified sampling estimator. Experiments show the efficacy of the
proposed approach.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 14:30:23 GMT"
}
] | 1,711,497,600,000 | [
[
"Rashid",
"Muhammad",
""
],
[
"Amparore",
"Elvio G.",
""
],
[
"Ferrari",
"Enrico",
""
],
[
"Verda",
"Damiano",
""
]
] |
2403.17814 | Ling Chen | Xiaobing Yuan and Ling Chen | D-PAD: Deep-Shallow Multi-Frequency Patterns Disentangling for Time
Series Forecasting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In time series forecasting, effectively disentangling intricate temporal
patterns is crucial. While recent works endeavor to combine decomposition
techniques with deep learning, multiple frequencies may still be mixed in the
decomposed components, e.g., trend and seasonal. Furthermore, frequency domain
analysis methods, e.g., Fourier and wavelet transforms, have limitations in
resolution in the time domain and adaptability. In this paper, we propose
D-PAD, a deep-shallow multi-frequency patterns disentangling neural network for
time series forecasting. Specifically, a multi-component decomposing (MCD)
block is introduced to decompose the series into components with different
frequency ranges, corresponding to the "shallow" aspect. A
decomposition-reconstruction-decomposition (D-R-D) module is proposed to
progressively extract the information of frequencies mixed in the components,
corresponding to the "deep" aspect. After that, an interaction and fusion (IF)
module is used to further analyze the components. Extensive experiments on
seven real-world datasets demonstrate that D-PAD achieves state-of-the-art
performance, outperforming the best baseline by an average of 9.48% and 7.15%
in MSE and MAE, respectively.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 15:52:36 GMT"
}
] | 1,711,497,600,000 | [
[
"Yuan",
"Xiaobing",
""
],
[
"Chen",
"Ling",
""
]
] |
2403.17826 | Marcel Steinmetz | Gregor Behnke, Marcel Steinmetz | On the Computational Complexity of Stackelberg Planning and
Meta-Operator Verification: Technical Report | Presented at ICAPS24 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Stackelberg planning is a recently introduced single-turn two-player
adversarial planning model, where two players are acting in a joint classical
planning task, with the first player's objective being to hamper the second
player from achieving its goal. This places the Stackelberg planning problem
somewhere between classical planning and general combinatorial two-player
games. But, where exactly? All investigations of Stackelberg planning so far
focused on practical aspects. We close this gap by conducting the first
theoretical complexity analysis of Stackelberg planning. We show that in
general Stackelberg planning is actually no harder than classical planning.
Under a polynomial plan-length restriction, however, Stackelberg planning is a
level higher up in the polynomial complexity hierarchy, suggesting that
compilations into classical planning come with a worst-case exponential
plan-length increase. In attempts to identify tractable fragments, we further
study its complexity under various planning task restrictions, showing that
Stackelberg planning remains intractable where classical planning is not. We
finally inspect the complexity of meta-operator verification, a problem that
has been recently connected to Stackelberg planning.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 16:06:33 GMT"
}
] | 1,711,497,600,000 | [
[
"Behnke",
"Gregor",
""
],
[
"Steinmetz",
"Marcel",
""
]
] |
2403.17873 | Andrea Ferrario | Andrea Ferrario, Alberto Termine, Alessandro Facchini | Addressing Social Misattributions of Large Language Models: An
HCXAI-based Approach | Extended version of the manuscript accepted for the ACM CHI Workshop
on Human-Centered Explainable AI 2024 (HCXAI24) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human-centered explainable AI (HCXAI) advocates for the integration of social
aspects into AI explanations. Central to the HCXAI discourse is the Social
Transparency (ST) framework, which aims to make the socio-organizational
context of AI systems accessible to their users. In this work, we suggest
extending the ST framework to address the risks of social misattributions in
Large Language Models (LLMs), particularly in sensitive areas like mental
health. In fact, LLMs, which are remarkably capable of simulating roles and
personas, may lead to mismatches between designers' intentions and users'
perceptions of social attributes, risking the promotion of emotional
manipulation and dangerous behaviors, cases of epistemic injustice, and
unwarranted trust. To
address these issues, we propose enhancing the ST framework with a fifth
'W-question' to clarify the specific social attributions assigned to LLMs by
their designers and users. This addition aims to bridge the gap between LLM
capabilities and user perceptions, promoting the ethically responsible
development and use of LLM-based technology.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 17:02:42 GMT"
}
] | 1,711,497,600,000 | [
[
"Ferrario",
"Andrea",
""
],
[
"Termine",
"Alberto",
""
],
[
"Facchini",
"Alessandro",
""
]
] |
2403.17914 | Hao Yan | Xinyu Zhao, Hao Yan, Yongming Liu | Hierarchical Multi-label Classification for Fine-level Event Extraction
from Aviation Accident Reports | Accepted in INFORMS Journal of Data Science | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A large volume of accident reports is recorded in the aviation domain, which
is of great value for improving aviation safety. To better use those reports, we need
to understand the most important events or impact factors according to the
accident reports. However, the increasing number of accident reports requires
large efforts from domain experts to label those reports. In order to make the
labeling process more efficient, many researchers have started developing
algorithms to identify the underlying events from accident reports
automatically. This article argues that we can identify the events more
accurately by leveraging the event taxonomy. More specifically, we consider the
problem as a hierarchical classification task, where we first identify the
coarse-level information and then predict the fine-level information. We
achieve this hierarchical classification process by incorporating a novel
hierarchical attention module into BERT. To further utilize the information
from event taxonomy, we regularize the proposed model according to the
relationship and distribution among labels. The effectiveness of our framework
is evaluated with data collected by the National Transportation Safety Board
(NTSB). It has been shown that fine-level prediction accuracy is highly
improved, and the regularization term can be beneficial to the rare event
identification problem.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 17:51:06 GMT"
}
] | 1,711,497,600,000 | [
[
"Zhao",
"Xinyu",
""
],
[
"Yan",
"Hao",
""
],
[
"Liu",
"Yongming",
""
]
] |
2403.17918 | Longtao Zheng | Longtao Zheng, Zhiyuan Huang, Zhenghai Xue, Xinrun Wang, Bo An,
Shuicheng Yan | AgentStudio: A Toolkit for Building General Virtual Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Creating autonomous virtual agents capable of using arbitrary software on any
digital device remains a major challenge for artificial intelligence. Two key
obstacles hinder progress: insufficient infrastructure for building virtual
agents in real-world environments, and the need for in-the-wild evaluation of
fundamental agent abilities. To address this, we introduce AgentStudio, an
online, realistic, and multimodal toolkit that covers the entire lifecycle of
agent development. This includes environment setups, data collection, agent
evaluation, and visualization. The observation and action spaces are highly
generic, supporting both function calling and human-computer interfaces. This
versatility is further enhanced by AgentStudio's graphical user interfaces,
which allow efficient development of datasets and benchmarks in real-world
settings. To illustrate, we introduce a visual grounding dataset and a
real-world benchmark suite, both created with our graphical interfaces.
Furthermore, we present several actionable insights derived from AgentStudio,
e.g., general visual grounding, open-ended tool creation, learning from videos,
etc. We have open-sourced the environments, datasets, benchmarks, and
interfaces to promote research towards developing general virtual agents for
the future.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 17:54:15 GMT"
}
] | 1,711,497,600,000 | [
[
"Zheng",
"Longtao",
""
],
[
"Huang",
"Zhiyuan",
""
],
[
"Xue",
"Zhenghai",
""
],
[
"Wang",
"Xinrun",
""
],
[
"An",
"Bo",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
2403.18056 | Qingxu Fu | Qingxu Fu, Tenghai Qiu, Jianqiang Yi, Zhiqiang Pu, Xiaolin Ai | Self-Clustering Hierarchical Multi-Agent Reinforcement Learning with
Extensible Cooperation Graph | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Agent Reinforcement Learning (MARL) has been successful in solving many
cooperative challenges. However, classic non-hierarchical MARL algorithms still
cannot address various complex multi-agent problems that require hierarchical
cooperative behaviors. The cooperative knowledge and policies learned in
non-hierarchical algorithms are implicit and not interpretable, thereby
restricting the integration of existing knowledge. This paper proposes a novel
hierarchical MARL model called Hierarchical Cooperation Graph Learning (HCGL)
for solving general multi-agent problems. HCGL has three components: a dynamic
Extensible Cooperation Graph (ECG) for achieving self-clustering cooperation; a
group of graph operators for adjusting the topology of ECG; and an MARL
optimizer for training these graph operators. HCGL's key distinction from other
MARL models is that the behaviors of agents are guided by the topology of ECG
instead of policy neural networks. ECG is a three-layer graph consisting of an
agent node layer, a cluster node layer, and a target node layer. To manipulate
the ECG topology in response to changing environmental conditions, four graph
operators are trained to adjust the edge connections of ECG dynamically. The
hierarchical feature of ECG provides a unique approach to merge primitive
actions (actions executed by the agents) and cooperative actions (actions
executed by the clusters) into a unified action space, allowing us to integrate
fundamental cooperative knowledge into an extensible interface. In our
experiments, the HCGL model has shown outstanding performance in multi-agent
benchmarks with sparse rewards. We also verify that HCGL can easily be
transferred to large-scale scenarios with high zero-shot transfer success
rates.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 19:19:16 GMT"
}
] | 1,711,584,000,000 | [
[
"Fu",
"Qingxu",
""
],
[
"Qiu",
"Tenghai",
""
],
[
"Yi",
"Jianqiang",
""
],
[
"Pu",
"Zhiqiang",
""
],
[
"Ai",
"Xiaolin",
""
]
] |
2403.18057 | Qingxu Fu | Qingxu Fu, Zhiqiang Pu, Min Chen, Tenghai Qiu, Jianqiang Yi | Prioritized League Reinforcement Learning for Large-Scale Heterogeneous
Multiagent Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale heterogeneous multiagent systems feature various realistic
factors in the real world, such as agents with diverse abilities and overall
system cost. In comparison to homogeneous systems, heterogeneous systems offer
significant practical advantages. Nonetheless, they also present challenges for
multiagent reinforcement learning, including addressing the non-stationary
problem and managing an imbalanced number of agents with different types. We
propose a Prioritized Heterogeneous League Reinforcement Learning (PHLRL)
method to address large-scale heterogeneous cooperation problems. PHLRL
maintains a record of various policies that agents have explored during their
training and establishes a heterogeneous league consisting of diverse policies
to aid in future policy optimization. Furthermore, we design a prioritized
policy gradient approach to compensate for the gap caused by differences in the
number of different types of agents. Next, we use Unreal Engine to design a
large-scale heterogeneous cooperation benchmark named Large-Scale Multiagent
Operation (LSMO), which is a complex two-team competition scenario that
requires collaboration from both ground and airborne agents. We use experiments
to show that PHLRL outperforms state-of-the-art methods, including QTRAN and
QPLEX, in LSMO.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 19:21:50 GMT"
}
] | 1,711,584,000,000 | [
[
"Fu",
"Qingxu",
""
],
[
"Pu",
"Zhiqiang",
""
],
[
"Chen",
"Min",
""
],
[
"Qiu",
"Tenghai",
""
],
[
"Yi",
"Jianqiang",
""
]
] |
2403.18203 | Nisha Pillai | Nisha Pillai, Athish Ram Das, Moses Ayoola, Ganga Gireesan, Bindu
Nanduri, Mahalingam Ramkumar | EndToEndML: An Open-Source End-to-End Pipeline for Machine Learning
Applications | 2024 7th International Conference on Information and Computer
Technologies (ICICT) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) techniques are widely applied in the life
sciences. However, applying innovative AI techniques to understand and
deconvolute biological complexity is hindered by the learning curve for life
scientists to understand and use computing languages. An open-source,
user-friendly interface for AI models that does not require programming skills
to analyze complex biological data will be extremely valuable to the
bioinformatics community. With easy access to different sequencing technologies
and increased interest in different 'omics' studies, the number of biological
datasets being generated has increased and analyzing these high-throughput
datasets is computationally demanding. The majority of AI libraries today
require advanced programming skills as well as machine learning, data
preprocessing, and visualization skills. In this research, we propose a
web-based end-to-end pipeline that is capable of preprocessing, training,
evaluating, and visualizing machine learning (ML) models without manual
intervention or coding expertise. By integrating traditional machine learning
and deep neural network models with visualizations, our library assists in
recognizing, classifying, clustering, and predicting a wide range of
multi-modal, multi-sensor datasets, including images, languages, and
one-dimensional numerical data, for drug discovery, pathogen classification,
and medical diagnostics.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 02:24:38 GMT"
}
] | 1,711,584,000,000 | [
[
"Pillai",
"Nisha",
""
],
[
"Das",
"Athish Ram",
""
],
[
"Ayoola",
"Moses",
""
],
[
"Gireesan",
"Ganga",
""
],
[
"Nanduri",
"Bindu",
""
],
[
"Ramkumar",
"Mahalingam",
""
]
] |
2403.18205 | Yuqi Yang | Yuqi Yang, Xiaowen Huang, Jitao Sang | Exploring the Privacy Protection Capabilities of Chinese Large Language
Models | 11 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs), renowned for their impressive capabilities in
various tasks, have significantly advanced artificial intelligence. Yet, these
advancements have raised growing concerns about privacy and security
implications. To address these issues and explain the risks inherent in these
models, we have devised a three-tiered progressive framework tailored for
evaluating privacy in language systems. This framework consists of
progressively complex and in-depth privacy test tasks at each tier. Our primary
objective is to comprehensively evaluate the sensitivity of large language
models to private information, examining how effectively they discern, manage,
and safeguard sensitive data in diverse scenarios. This systematic evaluation
helps us understand the degree to which these models comply with privacy
protection guidelines and the effectiveness of their inherent safeguards
against privacy breaches. Our observations indicate that existing Chinese large
language models universally show privacy protection shortcomings. At present,
this widespread issue seems unavoidable and may pose corresponding privacy
risks in applications based on these models.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 02:31:54 GMT"
}
] | 1,711,584,000,000 | [
[
"Yang",
"Yuqi",
""
],
[
"Huang",
"Xiaowen",
""
],
[
"Sang",
"Jitao",
""
]
] |
2403.18218 | Yu Wang | Yu Wang | Leveraging Large Language Models for Fuzzy String Matching in Political
Science | 7 pages, 2 figures, 1 table | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Fuzzy string matching remains a key issue when political scientists combine
data from different sources. Existing matching methods invariably rely on
string distances, such as Levenshtein distance and cosine similarity. As such,
they are inherently incapable of matching strings that refer to the same entity
with different names such as ''JP Morgan'' and ''Chase Bank'', ''DPRK'' and
''North Korea'', ''Chuck Fleischmann (R)'' and ''Charles Fleischmann (R)''. In
this letter, we propose to use large language models to entirely sidestep this
problem in an easy and intuitive manner. Extensive experiments show that our
proposed methods can improve the state of the art by as much as 39% in terms of
average precision while being substantially easier and more intuitive to use by
political scientists. Moreover, our results are robust against various
temperatures. We further note that enhanced prompting can lead to additional
performance improvements.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 03:04:21 GMT"
}
] | 1,711,584,000,000 | [
[
"Wang",
"Yu",
""
]
] |
2403.18230 | Cheng Wang | Chuwen Wang, Shirong Zeng, Cheng Wang | Large Language Models Need Consultants for Reasoning: Becoming an Expert
in a Complex Human System Through Behavior Simulation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs), in conjunction with various reasoning
reinforcement methodologies, have demonstrated remarkable capabilities
comparable to humans in fields such as mathematics, law, coding, common sense,
and world knowledge. In this paper, we delve into the reasoning abilities of
LLMs within complex human systems. We propose a novel reasoning framework,
termed ``Mosaic Expert Observation Wall'' (MEOW), exploiting a
generative-agents-based simulation technique. In the MEOW framework, simulated
data are utilized to train an expert model that concentrates ``experience''
about a specific task in each independent run of the simulation. It is the accumulated
``experience'' through the simulation that makes for an expert on a task in a
complex human system. We conduct the experiments within a communication game
that mirrors real-world security scenarios. The results indicate that our
proposed methodology can cooperate with existing methodologies to enhance the
reasoning abilities of LLMs in complex human systems.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 03:33:32 GMT"
}
] | 1,711,584,000,000 | [
[
"Wang",
"Chuwen",
""
],
[
"Zeng",
"Shirong",
""
],
[
"Wang",
"Cheng",
""
]
] |
2403.18243 | Linhao Ye | Linhao Ye, Zhikai Lei, Jianghao Yin, Qin Chen, Jie Zhou, Liang He | Boosting Conversational Question Answering with Fine-Grained
Retrieval-Augmentation and Self-Check | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Generation (RAG) aims to generate more reliable and
accurate responses by augmenting large language models (LLMs) with vast and
dynamic external knowledge. Most previous work focuses on using RAG
for single-round question answering, while how to adapt RAG to the complex
conversational setting wherein the question is interdependent on the preceding
context is not well studied. In this paper, we propose a conversation-level RAG
approach, which incorporates fine-grained retrieval augmentation and self-check
for conversational question answering (CQA). In particular, our approach
consists of three components, namely conversational question refiner,
fine-grained retriever and self-check based response generator, which work
collaboratively for question understanding and relevant information acquisition
in conversational settings. Extensive experiments demonstrate the great
advantages of our approach over the state-of-the-art baselines. Moreover, we
also release a Chinese CQA dataset with new features including reformulated
question, extracted keyword, retrieved paragraphs and their helpfulness, which
facilitates further research in RAG-enhanced CQA.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 04:20:18 GMT"
}
] | 1,711,584,000,000 | [
[
"Ye",
"Linhao",
""
],
[
"Lei",
"Zhikai",
""
],
[
"Yin",
"Jianghao",
""
],
[
"Chen",
"Qin",
""
],
[
"Zhou",
"Jie",
""
],
[
"He",
"Liang",
""
]
] |
2403.18278 | Michael Livanos | Michael Livanos and Ian Davidson | Identification and Uses of Deep Learning Backbones via Pattern Mining | 9 pages, 6 figures, published SIAM SDM24 | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Deep learning is extensively used in many areas of data mining as a black-box
method with impressive results. However, understanding the core mechanism of
how deep learning makes predictions is a relatively understudied problem. Here
we explore the notion of identifying a backbone of deep learning for a given
group of instances. A group here can be instances of the same class or even
misclassified instances of the same class. We view each instance for a given
group as activating a subset of neurons and attempt to find a subgraph of
neurons associated with a given concept/group. We formulate this problem as a
set-cover-style problem, show it is intractable, and present a highly
constrained integer linear programming (ILP) formulation. As an alternative, we
explore a coverage-based heuristic approach related to pattern mining, and show
it converges to a Pareto equilibrium point of the ILP formulation.
Experimentally we explore these backbones to identify mistakes and improve
performance, explanation, and visualization. We demonstrate application-based
results using several challenging data sets, including Bird Audio Detection
(BAD) Challenge and Labeled Faces in the Wild (LFW), as well as the classic
MNIST data.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 06:13:39 GMT"
}
] | 1,711,584,000,000 | [
[
"Livanos",
"Michael",
""
],
[
"Davidson",
"Ian",
""
]
] |
2403.18338 | Christophe Servan | Christophe Servan (ILES, STL), Sahar Ghannay (LISN), Sophie Rosset
(LISN) | mALBERT: Is a Compact Multilingual BERT Model Still Worth It? | The 2024 Joint International Conference on Computational Linguistics,
Language Resources and Evaluation, May 2024, Torino, Italy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the current trend of Pretrained Language Models (PLM), more and
more criticisms emerge about the ethical and ecological impact of such models.
In this article, considering these critical remarks, we propose to focus on
smaller models, such as compact models like ALBERT, which are more ecologically
virtuous than these PLMs. However, PLMs enable huge breakthroughs in Natural
Language Processing tasks, such as Spoken and Natural Language Understanding,
classification, and Question-Answering tasks. PLMs also have the advantage of
being multilingual, and, as far as we know, a multilingual version of compact
ALBERT models does not exist. Considering these facts, we propose the free
release of the first version of a multilingual compact ALBERT model,
pre-trained using Wikipedia data, which complies with the ethical aspect of
such a language model. We also evaluate the model against classical
multilingual PLMs in classical NLP tasks. Finally, this paper proposes a rare
study of the impact of subword tokenization on language performance.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 08:25:28 GMT"
}
] | 1,711,584,000,000 | [
[
"Servan",
"Christophe",
"",
"ILES, STL"
],
[
"Ghannay",
"Sahar",
"",
"LISN"
],
[
"Rosset",
"Sophie",
"",
"LISN"
]
] |
2403.18344 | Mingxing Peng | Mingxing Peng, Xusen Guo, Xianda Chen, Meixin Zhu, Kehua Chen, Hao
(Frank) Yang, Xuesong Wang, and Yinhai Wang | LC-LLM: Explainable Lane-Change Intention and Trajectory Predictions
with Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To ensure safe driving in dynamic environments, autonomous vehicles should
possess the capability to accurately predict the lane change intentions of
surrounding vehicles in advance and forecast their future trajectories.
Existing motion prediction approaches have ample room for improvement,
particularly in terms of long-term prediction accuracy and interpretability. In
this paper, we address these challenges by proposing LC-LLM, an explainable
lane change prediction model that leverages the strong reasoning capabilities
and self-explanation abilities of Large Language Models (LLMs). Essentially, we
reformulate the lane change prediction task as a language modeling problem,
processing heterogeneous driving scenario information in natural language as
prompts for input into the LLM and employing a supervised fine-tuning technique
to tailor the LLM specifically for our lane change prediction task. This allows
us to utilize the LLM's powerful common sense reasoning abilities to understand
complex interactive information, thereby improving the accuracy of long-term
predictions. Furthermore, we incorporate explanatory requirements into the
prompts in the inference stage. Therefore, our LC-LLM model not only can
predict lane change intentions and trajectories but also provides explanations
for its predictions, enhancing the interpretability. Extensive experiments on
the large-scale highD dataset demonstrate the superior performance and
interpretability of our LC-LLM in the lane change prediction task. To the best of
our knowledge, this is the first attempt to utilize LLMs for predicting lane
change behavior. Our study shows that LLMs can encode comprehensive interaction
information for driving behavior understanding.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 08:34:55 GMT"
}
] | 1,711,584,000,000 | [
[
"Peng",
"Mingxing",
""
],
[
"Guo",
"Xusen",
""
],
[
"Chen",
"Xianda",
""
],
[
"Zhu",
"Meixin",
""
],
[
"Chen",
"Kehua",
""
],
[
"Yang",
"Hao",
""
],
[
"Wang",
"Xuesong",
""
],
[
"Wang",
"Yinhai",
""
]
] |
2403.18405 | Shengjie Ma | Shengjie Ma, Chong Chen, Qi Chu and Jiaxin Mao | Leveraging Large Language Models for Relevance Judgments in Legal Case
Retrieval | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Collecting relevance judgments for legal case retrieval is a challenging and
time-consuming task. Accurately judging the relevance between two legal cases
requires a considerable effort to read the lengthy text and a high level of
domain expertise to extract Legal Facts and make juridical judgments. With the
advent of advanced large language models, some recent studies have suggested
that it is promising to use LLMs for relevance judgment. Nonetheless, the
method of employing a general large language model for reliable relevance
judgments in legal case retrieval is yet to be thoroughly explored. To fill
this research gap, we devise a novel few-shot workflow tailored to the relevance
judgment of legal cases. The proposed workflow breaks down the annotation
process into a series of stages, imitating the process employed by human
annotators and enabling a flexible integration of expert reasoning to enhance
the accuracy of relevance judgments. By comparing the relevance judgments of
LLMs and human experts, we empirically show that we can obtain reliable
relevance judgments with the proposed workflow. Furthermore, we demonstrate the
capacity to augment existing legal case retrieval models through the synthesis
of data generated by the large language model.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 09:46:56 GMT"
}
] | 1,711,584,000,000 | [
[
"Ma",
"Shengjie",
""
],
[
"Chen",
"Chong",
""
],
[
"Chu",
"Qi",
""
],
[
"Mao",
"Jiaxin",
""
]
] |
2403.18547 | Philip Kenneweg | Philip Kenneweg, Sarah Schr\"oder, Barbara Hammer | Neural Architecture Search for Sentence Classification with BERT | null | null | 10.14428/esann/2022.ES2022-45 | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Pre-training of language models on large text corpora is common practice in
Natural Language Processing. Subsequently, fine-tuning of these models is
performed to achieve the best results on a variety of tasks. In this paper we
question the common practice of only adding a single output layer as a
classification head on top of the network. We perform an AutoML search to find
architectures that outperform the current single layer at only a small compute
cost. We validate our classification architecture on a variety of NLP
benchmarks from the GLUE dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 13:25:43 GMT"
}
] | 1,711,584,000,000 | [
[
"Kenneweg",
"Philip",
""
],
[
"Schröder",
"Sarah",
""
],
[
"Hammer",
"Barbara",
""
]
] |
2403.18659 | Stefanie Rinderle-Ma | Janik-Vasily Benzin and Gyunam Park and Juergen Mangler and Stefanie
Rinderle-Ma | INEXA: Interactive and Explainable Process Model Abstraction Through
Object-Centric Process Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process events are recorded by multiple information systems at different
granularity levels. Based on the resulting event logs, process models are
discovered at different granularity levels, as well. Events stored at a
fine-grained granularity level, for example, may hinder the discovered process
model from being displayed due to the high number of resulting model elements. The
discovered process model of a real-world manufacturing process, for example,
consists of 1,489 model elements and over 2,000 arcs. Existing process model
abstraction techniques could help reduce the size of the model, but would
disconnect it from the underlying event log. Existing event abstraction
techniques do neither support the analysis of mixed granularity levels, nor
interactive exploration of a suitable granularity level. To enable the
exploration of discovered process models at different granularity levels, we
propose INEXA, an interactive, explainable process model abstraction method
that keeps the link to the event log. As a starting point, INEXA aggregates
large process models to a "displayable" size, e.g., for the manufacturing use
case to a process model with 58 model elements. Then, the process analyst can
explore granularity levels interactively, while applied abstractions are
automatically traced in the event log for explainability.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 15:03:33 GMT"
}
] | 1,711,584,000,000 | [
[
"Benzin",
"Janik-Vasily",
""
],
[
"Park",
"Gyunam",
""
],
[
"Mangler",
"Juergen",
""
],
[
"Rinderle-Ma",
"Stefanie",
""
]
] |
2403.18725 | Dennis Gross | Dennis Gross, Helge Spieker | Probabilistic Model Checking of Stochastic Reinforcement Learning
Policies | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a method to verify stochastic reinforcement learning (RL)
policies. This approach is compatible with any RL algorithm as long as the
algorithm and its corresponding environment collectively adhere to the Markov
property. In this setting, the future state of the environment should depend
solely on its current state and the action executed, independent of any
previous states or actions. Our method integrates a verification technique,
referred to as model checking, with RL, leveraging a Markov decision process, a
trained RL policy, and a probabilistic computation tree logic (PCTL) formula to
build a formal model that can be subsequently verified via the model checker
Storm. We demonstrate our method's applicability across multiple benchmarks,
comparing it to baseline methods called deterministic safety estimates and
naive monolithic model checking. Our results show that our method is suited to
verify stochastic RL policies.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 16:15:21 GMT"
}
] | 1,711,584,000,000 | [
[
"Gross",
"Dennis",
""
],
[
"Spieker",
"Helge",
""
]
] |
2403.19790 | Niall Taylor | Niall Taylor, Andrey Kormilitzin, Isabelle Lorge, Alejo
Nevado-Holgado, Dan W Joyce | Bespoke Large Language Models for Digital Triage Assistance in Mental
Health Care | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Contemporary large language models (LLMs) may have utility for processing
unstructured, narrative free-text clinical data contained in electronic health
records (EHRs) -- a particularly important use-case for mental health where a
majority of routinely-collected patient data lacks structured, machine-readable
content.
A significant problem for the United Kingdom's National Health Service
(NHS) is the long waiting lists for specialist mental healthcare. According to
NHS data, in each month of 2023, there were between 370,000 and 470,000
individual new referrals into secondary mental healthcare services. Referrals
must be triaged by clinicians, using clinical information contained in the
patient's EHR to arrive at a decision about the most appropriate mental
healthcare team to assess and potentially treat these patients.
The ability to efficiently recommend a relevant team by ingesting potentially
voluminous clinical notes could help services both reduce referral waiting
times and with the right technology, improve the evidence available to justify
triage decisions.
We present and evaluate three different approaches for LLM-based, end-to-end
ingestion of variable-length clinical EHR data to assist clinicians when
triaging referrals. Our model is able to deliver triage recommendations
consistent with existing clinical practices, and its architecture was
implemented on a single GPU, making it practical for implementation in
resource-limited NHS environments where private implementations of LLM
technology will be necessary to ensure confidential clinical data is
appropriately controlled and governed.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 19:17:07 GMT"
}
] | 1,711,929,600,000 | [
[
"Taylor",
"Niall",
""
],
[
"Kormilitzin",
"Andrey",
""
],
[
"Lorge",
"Isabelle",
""
],
[
"Nevado-Holgado",
"Alejo",
""
],
[
"Joyce",
"Dan W",
""
]
] |
2403.19826 | Qitian Ma | Qitian Ma and Shyam Nanda Rai and Carlo Masone and Tatiana Tommasi | Segmentation Re-thinking Uncertainty Estimation Metrics for Semantic
Segmentation | Premature Submission: accidentally submitted before it was ready | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the domain of computer vision, semantic segmentation emerges as a
fundamental application within machine learning, wherein individual pixels of
an image are classified into distinct semantic categories. This task transcends
traditional accuracy metrics by incorporating uncertainty quantification, a
critical measure for assessing the reliability of each segmentation prediction.
Such quantification is instrumental in facilitating informed decision-making,
particularly in applications where precision is paramount. Within this nuanced
framework, the metric known as PAvPU (Patch Accuracy versus Patch Uncertainty)
has been developed as a specialized tool for evaluating entropy-based
uncertainty in image segmentation tasks. However, our investigation identifies
three core deficiencies within the PAvPU framework and proposes robust
solutions aimed at refining the metric. By addressing these issues, we aim to
enhance the reliability and applicability of uncertainty quantification,
especially in scenarios that demand high levels of safety and accuracy, thus
contributing to the advancement of semantic segmentation methodologies in
critical applications.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 20:34:02 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2024 14:55:53 GMT"
}
] | 1,712,620,800,000 | [
[
"Ma",
"Qitian",
""
],
[
"Rai",
"Shyam Nanda",
""
],
[
"Masone",
"Carlo",
""
],
[
"Tommasi",
"Tatiana",
""
]
] |
2403.19857 | Xiaomin Ouyang Dr. | Xiaomin Ouyang and Mani Srivastava | LLMSense: Harnessing LLMs for High-level Reasoning Over Spatiotemporal
Sensor Traces | 6 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most studies on machine learning in sensing systems focus on low-level
perception tasks that process raw sensory data within a short time window.
However, many practical applications, such as human routine modeling and
occupancy tracking, require high-level reasoning abilities to comprehend
concepts and make inferences based on long-term sensor traces. Existing machine
learning-based approaches for handling such complex tasks struggle to
generalize due to the limited training samples and the high dimensionality of
sensor traces, necessitating the integration of human knowledge for designing
first-principle models or logic reasoning methods. We pose a fundamental
question: Can we harness the reasoning capabilities and world knowledge of
Large Language Models (LLMs) to recognize complex events from long-term
spatiotemporal sensor traces? To answer this question, we design an effective
prompting framework for LLMs on high-level reasoning tasks, which can handle
traces from the raw sensor data as well as the low-level perception results. We
also design two strategies to enhance performance with long sensor traces,
including summarization before reasoning and selective inclusion of historical
traces. Our framework can be implemented in an edge-cloud setup, running small
LLMs on the edge for data summarization and performing high-level reasoning on
the cloud for privacy preservation. The results show that LLMSense can achieve
over 80\% accuracy on two high-level reasoning tasks such as dementia diagnosis
with behavior traces and occupancy tracking with environmental sensor traces.
This paper provides a few insights and guidelines for leveraging LLM for
high-level reasoning on sensor traces and highlights several directions for
future work.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 22:06:04 GMT"
}
] | 1,711,929,600,000 | [
[
"Ouyang",
"Xiaomin",
""
],
[
"Srivastava",
"Mani",
""
]
] |
2403.19881 | Jiapu Wang | Jiapu Wang, Zheng Cui, Boyue Wang, Shirui Pan, Junbin Gao, Baocai Yin,
Wen Gao | IME: Integrating Multi-curvature Shared and Specific Embedding for
Temporal Knowledge Graph Completion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal Knowledge Graphs (TKGs) incorporate a temporal dimension, allowing
for a precise capture of the evolution of knowledge and reflecting the dynamic
nature of the real world. Typically, TKGs contain complex geometric structures,
with various geometric structures interwoven. However, existing Temporal
Knowledge Graph Completion (TKGC) methods either model TKGs in a single space
or neglect the heterogeneity of different curvature spaces, thus constraining
their capacity to capture these intricate geometric structures. In this paper,
we propose a novel Integrating Multi-curvature shared and specific Embedding
(IME) model for TKGC tasks. Concretely, IME models TKGs into multi-curvature
spaces, including hyperspherical, hyperbolic, and Euclidean spaces.
Subsequently, IME incorporates two key properties, namely space-shared property
and space-specific property. The space-shared property facilitates the learning
of commonalities across different curvature spaces and alleviates the spatial
gap caused by the heterogeneous nature of multi-curvature spaces, while the
space-specific property captures characteristic features. Meanwhile, IME
proposes an Adjustable Multi-curvature Pooling (AMP) approach to effectively
retain important information. Furthermore, IME innovatively designs similarity,
difference, and structure loss functions to attain the stated objective.
Experimental results clearly demonstrate the superior performance of IME over
existing state-of-the-art TKGC models.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 23:31:25 GMT"
}
] | 1,711,929,600,000 | [
[
"Wang",
"Jiapu",
""
],
[
"Cui",
"Zheng",
""
],
[
"Wang",
"Boyue",
""
],
[
"Pan",
"Shirui",
""
],
[
"Gao",
"Junbin",
""
],
[
"Yin",
"Baocai",
""
],
[
"Gao",
"Wen",
""
]
] |
2403.19883 | Frederico Messa | Frederico Messa, Andr\'e Grahl Pereira | Policy-Space Search: Equivalences, Improvements, and Compression | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully-observable non-deterministic (FOND) planning is at the core of
artificial intelligence planning with uncertainty. It models uncertainty
through actions with non-deterministic effects. A* with Non-Determinism (AND*)
(Messa and Pereira, 2023) is a FOND planner that generalizes A* (Hart et al.,
1968) for FOND planning. It searches for a solution policy by performing an
explicit heuristic search on the policy space of the FOND task. In this paper,
we study and improve the performance of the policy-space search performed by
AND*. We present a polynomial-time procedure that constructs a solution policy
given just the set of states that should be mapped. This procedure, together
with a better understanding of the structure of FOND policies, allows us to
present three concepts of equivalences between policies. We use policy
equivalences to prune part of the policy search space, making AND*
substantially more effective in solving FOND tasks. We also study the impact of
taking into account structural state-space symmetries to strengthen the
detection of equivalence policies and the impact of performing the search with
satisficing techniques. We apply a recent technique from the group theory
literature to better compute structural state-space symmetries. Finally, we
present a solution compressor that, given a policy defined over complete
states, finds a policy that unambiguously represents it using the minimum
number of partial states. AND* with the introduced techniques generates, on
average, two orders of magnitude fewer policies to solve FOND tasks. These
techniques allow explicit policy-space search to be competitive in terms of
both coverage and solution compactness with other state-of-the-art FOND
planners.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2024 23:40:20 GMT"
}
] | 1,711,929,600,000 | [
[
"Messa",
"Frederico",
""
],
[
"Pereira",
"André Grahl",
""
]
] |
2403.19941 | Sejik Park | Sejik Park | Diverse Feature Learning by Self-distillation and Reset | 15 pages, 6 Figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Our paper addresses the problem of models struggling to learn diverse
features, due to either forgetting previously learned features or failing to
learn new ones. To overcome this problem, we introduce Diverse Feature Learning
(DFL), a method that combines an important feature preservation algorithm with
a new feature learning algorithm. Specifically, for preserving important
features, we utilize self-distillation in ensemble models by selecting the
meaningful model weights observed during training. For learning new features,
we employ reset, which involves periodically re-initializing part of the model.
As a result, through experiments with various models on image
classification, we have identified the potential for synergistic effects
between self-distillation and reset.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 02:49:15 GMT"
}
] | 1,711,929,600,000 | [
[
"Park",
"Sejik",
""
]
] |
2403.20089 | Niklas K\"uhl Prof Dr | Luca Deck, Jan-Laurin M\"uller, Conradin Braun, Domenique Zipperling,
Niklas K\"uhl | Implications of the AI Act for Non-Discrimination Law and Algorithmic
Fairness | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The topic of fairness in AI, as debated in the FATE (Fairness,
Accountability, Transparency, and Ethics in AI) communities, has sparked
meaningful discussions in the past years. However, from a legal perspective,
particularly from European Union law, many open questions remain. Whereas
algorithmic fairness aims to mitigate structural inequalities at the design
level, European non-discrimination law is tailored to individual cases of
discrimination after an AI model has been deployed. The AI Act might present a
tremendous step towards bridging these two concepts by shifting
non-discrimination responsibilities into the design stage of AI models. Based
on an integrative reading of the AI Act, we comment on legal as well as
technical enforcement problems and propose practical implications on bias
detection and bias correction in order to specify and comply with specific
technical requirements.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 09:54:09 GMT"
}
] | 1,711,929,600,000 | [
[
"Deck",
"Luca",
""
],
[
"Müller",
"Jan-Laurin",
""
],
[
"Braun",
"Conradin",
""
],
[
"Zipperling",
"Domenique",
""
],
[
"Kühl",
"Niklas",
""
]
] |
2403.20127 | Kaito Taguchi | Kaito Taguchi, Yujie Gu, and Kouichi Sakurai | The Impact of Prompts on Zero-Shot Detection of AI-Generated Text | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there have been significant advancements in the development
of Large Language Models (LLMs). While their practical applications are now
widespread, their potential for misuse, such as generating fake news and
committing plagiarism, has posed significant concerns. To address this issue,
detectors have been developed to evaluate whether a given text is
human-generated or AI-generated. Among others, zero-shot detectors stand out as
effective approaches that do not require additional training data and are often
likelihood-based. In chat-based applications, users commonly input prompts and
utilize the AI-generated texts. However, zero-shot detectors typically analyze
these texts in isolation, neglecting the impact of the original prompts. It is
conceivable that this approach may lead to a discrepancy in likelihood
assessments between the text generation phase and the detection phase. So far,
there remains an unverified gap concerning how the presence or absence of
prompts impacts detection accuracy for zero-shot detectors. In this paper, we
introduce an evaluative framework to empirically analyze the impact of prompts
on the detection accuracy of AI-generated text. We assess various zero-shot
detectors using both white-box detection, which leverages the prompt, and
black-box detection, which operates without prompt information. Our experiments
reveal the significant influence of prompts on detection accuracy. Remarkably,
compared with black-box detection without prompts, the white-box methods using
prompts demonstrate an increase in AUC of at least $0.1$ across all zero-shot
detectors tested. Code is available:
\url{https://github.com/kaito25atugich/Detector}.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 11:33:34 GMT"
}
] | 1,711,929,600,000 | [
[
"Taguchi",
"Kaito",
""
],
[
"Gu",
"Yujie",
""
],
[
"Sakurai",
"Kouichi",
""
]
] |
2403.20151 | Jiani Fan Ms | Jiani Fan, Minrui Xu, Ziyao Liu, Huanyi Ye, Chaojie Gu, Dusit Niyato,
Kwok-Yan Lam | A Learning-based Incentive Mechanism for Mobile AIGC Service in
Decentralized Internet of Vehicles | 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall) | null | 10.1109/VTC2023-Fall60731.2023.10333689 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence-Generated Content (AIGC) refers to the paradigm of
automated content generation utilizing AI models. Mobile AIGC services in the
Internet of Vehicles (IoV) network have numerous advantages over traditional
cloud-based AIGC services, including enhanced network efficiency, better
reconfigurability, and stronger data security and privacy. Nonetheless, AIGC
service provisioning frequently demands significant resources. Consequently,
resource-constrained roadside units (RSUs) face challenges in maintaining a
heterogeneous pool of AIGC services and addressing all user service requests
without degrading overall performance. Therefore, in this paper, we propose a
decentralized incentive mechanism for mobile AIGC service allocation, employing
multi-agent deep reinforcement learning to find the balance between the supply
of AIGC services on RSUs and user demand for services within the IoV context,
optimizing user experience and minimizing transmission latency. Experimental
results demonstrate that our approach achieves superior performance compared to
other baseline models.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 12:46:07 GMT"
},
{
"version": "v2",
"created": "Thu, 9 May 2024 08:49:43 GMT"
}
] | 1,715,299,200,000 | [
[
"Fan",
"Jiani",
""
],
[
"Xu",
"Minrui",
""
],
[
"Liu",
"Ziyao",
""
],
[
"Ye",
"Huanyi",
""
],
[
"Gu",
"Chaojie",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Lam",
"Kwok-Yan",
""
]
] |
2403.20204 | Junhao Xu | Junhao Xu, Longdi Xian, Zening Liu, Mingliang Chen, Qiuyang Yin,
Fenghua Song | The Future of Combating Rumors? Retrieval, Discrimination, and
Generation | 8 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence Generated Content (AIGC) technology development has
facilitated the creation of rumors with misinformation, impacting societal,
economic, and political ecosystems, challenging democracy. Current rumor
detection efforts fall short by merely labeling potential misinformation
(classification task), inadequately addressing the issue, and it is unrealistic
to have authoritative institutions debunk every piece of information on social
media. Our proposed comprehensive debunking process not only detects rumors but
also provides explanatory generated content to refute the authenticity of the
information. The Expert-Citizen Collective Wisdom (ECCW) module we designed
ensures high-precision assessment of the credibility of information, and the
retrieval module is responsible for retrieving relevant knowledge from a
real-time updated debunking database based on information keywords. By using
prompt engineering techniques, we feed results and knowledge into a LLM (Large
Language Model), achieving satisfactory discrimination and explanatory effects
while eliminating the need for fine-tuning, saving computational costs, and
contributing to debunking efforts.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 14:32:41 GMT"
}
] | 1,711,929,600,000 | [
[
"Xu",
"Junhao",
""
],
[
"Xian",
"Longdi",
""
],
[
"Liu",
"Zening",
""
],
[
"Chen",
"Mingliang",
""
],
[
"Yin",
"Qiuyang",
""
],
[
"Song",
"Fenghua",
""
]
] |
2403.20234 | Francesco Linsalata | Antonio Coviello, Francesco Linsalata, Umberto Spagnolini, Maurizio
Magarini | Artificial Neural Networks-based Real-time Classification of ENG Signals
for Implanted Nerve Interfaces | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neuropathies are gaining higher relevance in clinical settings, as they risk
permanently jeopardizing a person's life. To support the recovery of patients,
the use of fully implanted devices is emerging as one of the most promising
solutions. However, these devices, even if becoming an integral part of a fully
complex neural nanonetwork system, pose numerous challenges. In this article,
we address one of them, which consists of the classification of motor/sensory
stimuli. The task is performed by exploring four different types of artificial
neural networks (ANNs) to extract various sensory stimuli from the
electroneurographic (ENG) signal measured in the sciatic nerve of rats.
Different sizes of the data sets are considered to analyze the feasibility of
the investigated ANNs for real-time classification through a comparison of
their performance in terms of accuracy, F1-score, and prediction time. The
design of the ANNs takes advantage of the modelling of the ENG signal as a
multiple-input multiple-output (MIMO) system to describe the measures taken by
state-of-the-art implanted nerve interfaces. These are based on the use of
multi-contact cuff electrodes to achieve nanoscale spatial discrimination of
the nerve activity. The MIMO ENG signal model is another contribution of this
paper. Our results show that some ANNs are more suitable for real-time
applications, being capable of achieving accuracies over $90\%$ for signal
windows of $100$ and $200\,$ms with a low enough processing time to be
effective for pathology recovery.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2024 15:23:30 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Apr 2024 09:26:43 GMT"
}
] | 1,712,102,400,000 | [
[
"Coviello",
"Antonio",
""
],
[
"Linsalata",
"Francesco",
""
],
[
"Spagnolini",
"Umberto",
""
],
[
"Magarini",
"Maurizio",
""
]
] |
2404.00276 | Hongqiu Wu | Hongqiu Wu, Y. Wang, Xingyuan Liu, Hai Zhao, Min Zhang | Instruction-Driven Game Engines on Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Instruction-Driven Game Engine (IDGE) project aims to democratize game
development by enabling a large language model (LLM) to follow free-form game
rules and autonomously generate game-play processes. The IDGE allows users to
create games by issuing simple natural language instructions, which
significantly lowers the barrier for game development. We approach the learning
process for IDGEs as a Next State Prediction task, wherein the model
autoregressively predicts in-game states given player actions. It is a
challenging task because the computation of in-game states must be precise;
otherwise, slight errors could disrupt the game-play. To address this, we train
the IDGE in a curriculum manner that progressively increases the model's
exposure to complex scenarios. Our initial progress lies in developing an IDGE
for Poker, a universally cherished card game. The engine we've designed not
only supports a wide range of poker variants but also allows for high
customization of rules through natural language inputs. Furthermore, it also
favors rapid prototyping of new games from minimal samples, proposing an
innovative paradigm in game development that relies on minimal prompt and data
engineering. This work lays the groundwork for future advancements in
instruction-driven game creation, potentially transforming how games are
designed and played.
| [
{
"version": "v1",
"created": "Sat, 30 Mar 2024 08:02:16 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2024 05:47:00 GMT"
}
] | 1,712,188,800,000 | [
[
"Wu",
"Hongqiu",
""
],
[
"Wang",
"Y.",
""
],
[
"Liu",
"Xingyuan",
""
],
[
"Zhao",
"Hai",
""
],
[
"Zhang",
"Min",
""
]
] |
2404.00320 | Zekun Wu | Xingrui Gu, Zhixuan Wang, Irisa Jin, Zekun Wu | Advancing Multimodal Data Fusion in Pain Recognition: A Strategy
Leveraging Statistical Correlation and Human-Centered Perspectives | Under reviewed by ACII 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This research tackles the challenge of integrating heterogeneous data for
specific behavior recognition within the domain of Pain Recognition, presenting
a novel methodology that harmonizes statistical correlations with a
human-centered approach. By leveraging a diverse range of deep learning
architectures, we highlight the adaptability and efficacy of our approach in
improving model performance across various complex scenarios. The novelty of
our methodology is the strategic incorporation of statistical relevance weights
and the segmentation of modalities from a human-centric perspective, enhancing
model precision and providing an explainable analysis of multimodal data. This
study surpasses traditional modality fusion techniques by underscoring the role
of data diversity and customized modality segmentation in enhancing pain
behavior analysis. Introducing a framework that matches each modality with a
suitable classifier, based on statistical significance, signals a move
towards customized and accurate multimodal fusion strategies. Our contributions
extend beyond the field of Pain Recognition by delivering new insights into
modality fusion and human-centered computing applications, contributing towards
explainable AI and bolstering patient-centric healthcare interventions. Thus,
we bridge a significant void in the effective and interpretable fusion of
multimodal data, establishing a novel standard for forthcoming inquiries in
pain behavior recognition and allied fields.
| [
{
"version": "v1",
"created": "Sat, 30 Mar 2024 11:13:18 GMT"
}
] | 1,712,016,000,000 | [
[
"Gu",
"Xingrui",
""
],
[
"Wang",
"Zhixuan",
""
],
[
"Jin",
"Irisa",
""
],
[
"Wu",
"Zekun",
""
]
] |
2404.00341 | Ahmed R. Sadik Dr.-Ing. | Ahmed R.Sadik, Bodo Urban | Ontology in Holonic Cooperative Manufacturing: A Solution to Share and
Exchange the Knowledge | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cooperative manufacturing is a new trend in industry, which depends on the
existence of a collaborative robot. A collaborative robot is usually a
light-weight robot which is capable of operating safely with a human co-worker
in a shared work environment. During this cooperation, a vast amount of
information is exchanged between the collaborative robot and the worker. This
information constructs the cooperative manufacturing knowledge, which describes
the production components and environment. In this research, we propose a
holonic control solution, which uses the ontology concept to represent the
cooperative manufacturing knowledge. The holonic control solution is
implemented as an autonomous multi-agent system that exchanges the
manufacturing knowledge based on an ontology model. Ultimately, the research
illustrates and implements the proposed solution over a cooperative assembly
scenario, which involves two workers and one collaborative robot, who
cooperate to assemble a customized product.
| [
{
"version": "v1",
"created": "Sat, 30 Mar 2024 12:38:47 GMT"
}
] | 1,712,016,000,000 | [
[
"Sadik",
"Ahmed R.",
""
],
[
"Urban",
"Bodo",
""
]
] |
2404.00560 | Bing Liu | Changnan Xiao and Bing Liu | A Theory for Length Generalization in Learning to Reason | arXiv admin note: text overlap with arXiv:2311.16173 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Length generalization (LG) is a challenging problem in learning to reason. It
refers to the phenomenon that when trained on reasoning problems of smaller
lengths or sizes, the resulting model struggles with problems of larger sizes
or lengths. Although LG has been studied by many researchers, the challenge
remains. This paper proposes a theoretical study of LG for problems whose
reasoning processes can be modeled as DAGs (directed acyclic graphs). The paper
first identifies and proves the conditions under which LG can be achieved in
learning to reason. It then designs problem representations based on the theory
to learn to solve challenging reasoning problems like parity, addition, and
multiplication, using a Transformer to achieve perfect LG.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2024 04:44:22 GMT"
}
] | 1,712,016,000,000 | [
[
"Xiao",
"Changnan",
""
],
[
"Liu",
"Bing",
""
]
] |
2404.00586 | Lv Ao | Ao Lv, Yongzhong Huang, Guige Ouyang, Yue Chen, Haoran Xie | RLGNet: Repeating-Local-Global History Network for Temporal Knowledge
Graph Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal Knowledge Graph (TKG) reasoning is based on historical information
to predict the future. Therefore, parsing and mining historical information is
key to predicting the future. Most existing methods fail to concurrently
address and comprehend historical information from both global and local
perspectives. Neglecting the global view might result in overlooking
macroscopic trends and patterns, while ignoring the local view can lead to
missing critical detailed information. Additionally, some methods do not focus
on learning from high-frequency repeating events, which means they may not
fully grasp frequently occurring historical events. To this end, we propose the
\textbf{R}epetitive-\textbf{L}ocal-\textbf{G}lobal History
\textbf{Net}work (RLGNet). We utilize a global history encoder to capture the
overarching nature of historical information. Subsequently, the local history
encoder provides information related to the query timestamp. Finally, we employ
the repeating history encoder to identify and learn from frequently occurring
historical events. In the evaluation on six benchmark datasets, our approach
generally outperforms existing TKG reasoning models in multi-step and
single-step reasoning tasks.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2024 07:19:29 GMT"
}
] | 1,712,016,000,000 | [
[
"Lv",
"Ao",
""
],
[
"Huang",
"Yongzhong",
""
],
[
"Ouyang",
"Guige",
""
],
[
"Chen",
"Yue",
""
],
[
"Xie",
"Haoran",
""
]
] |
2404.00886 | Liwen Zhu | Liwen Zhu, Peixi Peng, Zongqing Lu, Yonghong Tian | MTLight: Efficient Multi-Task Reinforcement Learning for Traffic Signal
Control | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traffic signal control has a great impact on alleviating traffic congestion
in modern cities. Deep reinforcement learning (RL) has been widely used for
this task in recent years, demonstrating promising performance but also facing
many challenges such as limited performance and sample inefficiency. To handle
these challenges, MTLight is proposed to enhance the agent observation with a
latent state, which is learned from numerous traffic indicators. Meanwhile,
multiple auxiliary and supervisory tasks are constructed to learn the latent
state, and two types of embedding latent features, the task-specific feature
and task-shared feature, are used to make the latent state more abundant.
Extensive experiments conducted on CityFlow demonstrate that MTLight has
leading convergence speed and asymptotic performance. We further simulate under
a peak-hour pattern in all scenarios with increasing control difficulty, and the
results indicate that MTLight is highly adaptable.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2024 03:27:46 GMT"
}
] | 1,712,016,000,000 | [
[
"Zhu",
"Liwen",
""
],
[
"Peng",
"Peixi",
""
],
[
"Lu",
"Zongqing",
""
],
[
"Tian",
"Yonghong",
""
]
] |
2404.01503 | Michael Katz | Michael Katz, Junkyu Lee, Jungkoo Kang, Shirin Sohrabi | Some Orders Are Important: Partially Preserving Orders in Top-Quality
Planning | To appear at SoCS 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to generate multiple plans is central to using planning in
real-life applications. Top-quality planners generate sets of such top-cost
plans, allowing flexibility in determining equivalent ones. In terms of the
order between actions in a plan, the literature only considers two extremes --
either all orders are important, making each plan unique, or all orders are
unimportant, treating two plans differing only in the order of actions as
equivalent. To allow flexibility in selecting important orders, we propose
specifying a subset of actions whose relative orders are important,
interpolating between the top-quality and unordered top-quality planning
problems. We explore the ways of adapting partial order reduction search
pruning techniques to address this new computational problem and present
experimental evaluations demonstrating the benefits of exploiting such
techniques in this setting.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2024 22:10:12 GMT"
}
] | 1,712,102,400,000 | [
[
"Katz",
"Michael",
""
],
[
"Lee",
"Junkyu",
""
],
[
"Kang",
"Jungkoo",
""
],
[
"Sohrabi",
"Shirin",
""
]
] |
2404.01526 | Carlos Leandro | Carlos Leandro | Categorical semiotics: Foundations for Knowledge Integration | 71 pages, 15 figures. arXiv admin note: substantial text overlap with
arXiv:1604.02790 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The integration of knowledge extracted from diverse models, whether described
by domain experts or generated by machine learning algorithms, has historically
been challenged by the absence of a suitable framework for specifying and
integrating structures, learning processes, data transformations, and data
models or rules. In this work, we extend algebraic specification methods to
address these challenges within such a framework.
In our work, we tackle the challenging task of developing a comprehensive
framework for defining and analyzing deep learning architectures. We believe
that previous efforts have fallen short by failing to establish a clear
connection between the constraints a model must adhere to and its actual
implementation.
Our methodology employs graphical structures that resemble Ehresmann's
sketches, interpreted within a universe of fuzzy sets. This approach offers a
unified theory that elegantly encompasses both deterministic and
non-deterministic neural network designs. Furthermore, we highlight how this
theory naturally incorporates fundamental concepts from computer science and
automata theory. Our extended algebraic specification framework, grounded in
graphical structures akin to Ehresmann's sketches, offers a promising solution
for integrating knowledge across disparate models and domains. By bridging the
gap between domain-specific expertise and machine-generated insights, we pave
the way for more comprehensive, collaborative, and effective approaches to
knowledge integration and modeling.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2024 23:19:01 GMT"
}
] | 1,712,102,400,000 | [
[
"Leandro",
"Carlos",
""
]
] |
2404.01794 | Eric Veith | Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena
Well{\ss}ow, Stephan Balduin | Imitation Game: A Model-based and Imitation Learning Deep Reinforcement
Learning Hybrid | Accepted as publication at MSCPES '24 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Autonomous and learning systems based on Deep Reinforcement Learning have
firmly established themselves as a foundation for approaches to creating
resilient and efficient Cyber-Physical Energy Systems. However, most current
approaches suffer from two distinct problems: Modern model-free algorithms such
as Soft Actor Critic need a high number of samples to learn a meaningful
policy, as well as a fallback to guard against concept drifts (e.g.,
catastrophic forgetting). In this paper, we present the work in progress
towards a hybrid agent architecture that combines model-based Deep
Reinforcement Learning with imitation learning to overcome both problems.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2024 09:55:30 GMT"
}
] | 1,712,102,400,000 | [
[
"Veith",
"Eric MSP",
""
],
[
"Logemann",
"Torben",
""
],
[
"Berezin",
"Aleksandr",
""
],
[
"Wellßow",
"Arlena",
""
],
[
"Balduin",
"Stephan",
""
]
] |
2404.02039 | Sihao Hu | Sihao Hu, Tiansheng Huang, Fatih Ilhan, Selim Tekin, Gaowen Liu,
Ramana Kompella, Ling Liu | A Survey on Large Language Model-Based Game Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of game agents plays a critical role in advancing towards
Artificial General Intelligence (AGI). The progress of LLMs and their
multimodal counterparts (MLLMs) offers an unprecedented opportunity to evolve
and empower game agents with human-like decision-making capabilities in complex
computer game environments. This paper provides a comprehensive overview of
LLM-based game agents from a holistic viewpoint. First, we introduce the
conceptual architecture of LLM-based game agents, centered around six essential
functional components: perception, memory, thinking, role-playing, action, and
learning. Second, we survey existing representative LLM-based game agents
documented in the literature with respect to methodologies and adaptation
agility across six genres of games, including adventure, communication,
competition, cooperation, simulation, and crafting & exploration games.
Finally, we present an outlook of future research and development directions in
this burgeoning field. A curated list of relevant papers is maintained and made
accessible at: https://github.com/git-disl/awesome-LLM-game-agent-papers.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2024 15:34:18 GMT"
}
] | 1,712,102,400,000 | [
[
"Hu",
"Sihao",
""
],
[
"Huang",
"Tiansheng",
""
],
[
"Ilhan",
"Fatih",
""
],
[
"Tekin",
"Selim",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Kompella",
"Ramana",
""
],
[
"Liu",
"Ling",
""
]
] |
2404.02579 | Carlos Monserrat | David Nieves, Mar\'ia Jos\'e Ram\'irez-Quintana, Carlos Monserrat,
C\'esar Ferri, Jos\'e Hern\'andez-Orallo | Learning Alternative Ways of Performing a Task | 32 pages, Github repository, published paper, authors' version | Expert Systems With Applications, volume 148, 2020, 113263 | 10.1016/j.eswa.2020.113263 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A common way of learning to perform a task is to observe how it is carried
out by experts. However, it is well known that for most tasks there is no
unique way to perform them. This is especially noticeable the more complex the
task is because factors such as the skill or the know-how of the expert may
well affect the way she solves the task. In addition, learning from experts
also suffers from having a small set of training examples, generally coming from
several experts (since experts are usually a limited and expensive resource),
being all of them positive examples (i.e. examples that represent successful
executions of the task). Traditional machine learning techniques are not useful
in such scenarios, as they require extensive training data. Starting from very
few executions of the task presented as activity sequences, we introduce a
novel inductive approach for learning multiple models, with each one
representing an alternative strategy of performing a task. By an iterative
process based on generalisation and specialisation, we learn the underlying
patterns that capture the different styles of performing a task exhibited by
the examples. We illustrate our approach on two common activity recognition
tasks: a surgical skills training task and a cooking domain. We evaluate the
inferred models with respect to two metrics that measure how well the models
represent the examples and capture the different forms of executing a task
shown by the examples. We compare our results with the traditional process
mining approach and show that a small set of meaningful examples is enough to
obtain patterns that capture the different strategies that are followed to
solve the tasks.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2024 08:54:58 GMT"
}
] | 1,712,188,800,000 | [
[
"Nieves",
"David",
""
],
[
"Ramírez-Quintana",
"María José",
""
],
[
"Monserrat",
"Carlos",
""
],
[
"Ferri",
"César",
""
],
[
"Hernández-Orallo",
"José",
""
]
] |
2404.02611 | Ivan Sevillano-Garc\'ia | Iv\'an Sevillano-Garc\'ia, Juli\'an Luengo and Francisco Herrera | SHIELD: A regularization technique for eXplainable Artificial
Intelligence | 18 pages, 8 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As Artificial Intelligence systems become integral across domains, the demand
for explainability grows. While the effort by the scientific community is
focused on obtaining a better explanation for the model, it is important not to
ignore the potential of this explanation process to improve training as well.
While existing efforts primarily focus on generating and evaluating
explanations for black-box models, there remains a critical gap in directly
enhancing models through these evaluations. This paper introduces SHIELD
(Selective Hidden Input Evaluation for Learning Dynamics), a regularization
technique for explainable artificial intelligence designed to improve model
quality by concealing portions of input data and assessing the resulting
discrepancy in predictions. In contrast to conventional approaches, SHIELD
regularization seamlessly integrates into the objective function, enhancing
model explainability while also improving performance. Experimental validation
on benchmark datasets underscores SHIELD's effectiveness in improving
Artificial Intelligence model explainability and overall performance. This
establishes SHIELD regularization as a promising pathway for developing
transparent and reliable Artificial Intelligence regularization techniques.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2024 09:56:38 GMT"
}
] | 1,712,188,800,000 | [
[
"Sevillano-García",
"Iván",
""
],
[
"Luengo",
"Julián",
""
],
[
"Herrera",
"Francisco",
""
]
] |
2404.02831 | Shanghua Gao | Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush
Noori, Jonathan Richard Schwarz, Yasha Ektefaie, Jovana Kondic, Marinka
Zitnik | Empowering Biomedical Discovery with AI Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We envision 'AI scientists' as systems capable of skeptical learning and
reasoning that empower biomedical research through collaborative agents that
integrate machine learning tools with experimental platforms. Rather than
taking humans out of the discovery process, biomedical AI agents combine human
creativity and expertise with AI's ability to analyze large datasets, navigate
hypothesis spaces, and execute repetitive tasks. AI agents are proficient in a
variety of tasks, including self-assessment and planning of discovery
workflows. These agents use large language models and generative models to
feature structured memory for continual learning and use machine learning tools
to incorporate scientific knowledge, biological principles, and theories. AI
agents can impact areas ranging from hybrid cell simulation, programmable
control of phenotypes, and the design of cellular circuits to the development
of new therapies.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2024 16:08:01 GMT"
}
] | 1,712,188,800,000 | [
[
"Gao",
"Shanghua",
""
],
[
"Fang",
"Ada",
""
],
[
"Huang",
"Yepeng",
""
],
[
"Giunchiglia",
"Valentina",
""
],
[
"Noori",
"Ayush",
""
],
[
"Schwarz",
"Jonathan Richard",
""
],
[
"Ektefaie",
"Yasha",
""
],
[
"Kondic",
"Jovana",
""
],
[
"Zitnik",
"Marinka",
""
]
] |
2404.02838 | Ata \c{C}elen | Ata \c{C}elen, Guo Han, Konrad Schindler, Luc Van Gool, Iro Armeni,
Anton Obukhov, Xi Wang | I-Design: Personalized LLM Interior Designer | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Interior design allows us to be who we are and live how we want - each design
is as unique as our distinct personality. However, it is not trivial for
non-professionals to express and materialize this since it requires aligning
functional and visual expectations with the constraints of physical space; this
renders interior design a luxury. To make it more accessible, we present
I-Design, a personalized interior designer that allows users to generate and
visualize their design goals through natural language communication. I-Design
starts with a team of large language model agents that engage in dialogues and
logical reasoning with one another, transforming textual user input into
feasible scene graph designs with relative object relationships. Subsequently,
an effective placement algorithm determines optimal locations for each object
within the scene. The final design is then constructed in 3D by retrieving and
integrating assets from an existing object database. Additionally, we propose a
new evaluation protocol that utilizes a vision-language model and complements
the design pipeline. Extensive quantitative and qualitative experiments show
that I-Design outperforms existing methods in delivering high-quality 3D design
solutions and aligning with abstract concepts that match user input, showcasing
its advantages across detailed 3D arrangement and conceptual fidelity.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2024 16:17:53 GMT"
}
] | 1,712,188,800,000 | [
[
"Çelen",
"Ata",
""
],
[
"Han",
"Guo",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Armeni",
"Iro",
""
],
[
"Obukhov",
"Anton",
""
],
[
"Wang",
"Xi",
""
]
] |
2404.02872 | John Komp | Ashutosh Gupta, John Komp, Abhay Singh Rajput, Krishna
Shankaranarayanan, Ashutosh Trivedi, Namrita Varshney | Integrating Explanations in Learning LTL Specifications from
Demonstrations | 21 Pages, 13 Page Appendix | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates whether recent advances in Large Language Models
(LLMs) can assist in translating human explanations into a format that can
robustly support learning Linear Temporal Logic (LTL) from demonstrations. Both
LLMs and optimization-based methods can extract LTL specifications from
demonstrations; however, they have distinct limitations. LLMs can quickly
generate solutions and incorporate human explanations, but their lack of
consistency and reliability hampers their applicability in safety-critical
domains. On the other hand, optimization-based methods do provide formal
guarantees but cannot process natural language explanations and face
scalability challenges. We present a principled approach to combining LLMs and
optimization-based methods to faithfully translate human explanations and
demonstrations into LTL specifications. We have implemented a tool called
Janaka based on our approach. Our experiments demonstrate the effectiveness of
combining explanations with demonstrations in learning LTL specifications
through several case studies.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2024 17:09:00 GMT"
}
] | 1,712,188,800,000 | [
[
"Gupta",
"Ashutosh",
""
],
[
"Komp",
"John",
""
],
[
"Rajput",
"Abhay Singh",
""
],
[
"Shankaranarayanan",
"Krishna",
""
],
[
"Trivedi",
"Ashutosh",
""
],
[
"Varshney",
"Namrita",
""
]
] |
2404.03499 | Christoph Wehner | Simon Schramm and Christoph Wehner and Ute Schmid | Comprehensible Artificial Intelligence on Knowledge Graphs: A survey | null | null | 10.1016/j.websem.2023.100806 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence applications gradually move outside the safe walls of
research labs and invade our daily lives. This is also true for Machine
Learning methods on Knowledge Graphs, which has led to a steady increase in
their application since the beginning of the 21st century. However, in many
applications, users require an explanation of the Artificial Intelligence's
decision. This led to increased demand for Comprehensible Artificial
Intelligence. Knowledge Graphs epitomize fertile soil for Comprehensible
Artificial Intelligence, due to their ability to display connected data, i.e.
knowledge, in a human- as well as machine-readable way. This survey gives a
short history to Comprehensible Artificial Intelligence on Knowledge Graphs.
Furthermore, we contribute by arguing that the concept Explainable Artificial
Intelligence is overloaded and overlapping with Interpretable Machine Learning.
By introducing the parent concept Comprehensible Artificial Intelligence, we
provide a clear-cut distinction of both concepts while accounting for their
similarities. Thus, we provide in this survey a case for Comprehensible
Artificial Intelligence on Knowledge Graphs consisting of Interpretable Machine
Learning on Knowledge Graphs and Explainable Artificial Intelligence on
Knowledge Graphs. This leads to the introduction of a novel taxonomy for
Comprehensible Artificial Intelligence on Knowledge Graphs. In addition, a
comprehensive overview of the research on Comprehensible Artificial
Intelligence on Knowledge Graphs is presented and put into the context of the
taxonomy. Finally, research gaps in the field of Comprehensible Artificial
Intelligence on Knowledge Graphs are identified for future research.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2024 14:57:32 GMT"
}
] | 1,712,275,200,000 | [
[
"Schramm",
"Simon",
""
],
[
"Wehner",
"Christoph",
""
],
[
"Schmid",
"Ute",
""
]
] |
2404.03893 | Tengfei Ma | Tengfei Ma, Xiang song, Wen Tao, Mufei Li, Jiani Zhang, Xiaoqin Pan,
Jianxin Lin, Bosheng Song, xiangxiang Zeng | KGExplainer: Towards Exploring Connected Subgraph Explanations for
Knowledge Graph Completion | 13 pages, 7 figures, 11 tables. Under Review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graph completion (KGC) aims to alleviate the inherent
incompleteness of knowledge graphs (KGs), which is a critical task for various
applications, such as recommendations on the web. Although knowledge graph
embedding (KGE) models have demonstrated superior predictive performance on KGC
tasks, these models infer missing links in a black-box manner that lacks
transparency and accountability, preventing researchers from developing
accountable models. Existing KGE-based explanation methods focus on exploring
key paths or isolated edges as explanations, which provide too little
information to reason about the target prediction. Additionally, the missing ground truth leads to these
explanation methods being ineffective in quantitatively evaluating explored
explanations. To overcome these limitations, we propose KGExplainer, a
model-agnostic method that identifies connected subgraph explanations and
distills an evaluator to assess them quantitatively. KGExplainer employs a
perturbation-based greedy search algorithm to find key connected subgraphs as
explanations within the local structure of target predictions. To evaluate the
quality of the explored explanations, KGExplainer distills an evaluator from
the target KGE model. By forwarding the explanations to the evaluator, our
method can examine the fidelity of them. Extensive experiments on benchmark
datasets demonstrate that KGExplainer yields promising improvement and achieves
an optimal ratio of 83.3% in human evaluation.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2024 05:02:12 GMT"
}
] | 1,712,534,400,000 | [
[
"Ma",
"Tengfei",
""
],
[
"song",
"Xiang",
""
],
[
"Tao",
"Wen",
""
],
[
"Li",
"Mufei",
""
],
[
"Zhang",
"Jiani",
""
],
[
"Pan",
"Xiaoqin",
""
],
[
"Lin",
"Jianxin",
""
],
[
"Song",
"Bosheng",
""
],
[
"Zeng",
"xiangxiang",
""
]
] |
2404.04436 | Anirban Mukherjee | Anirban Mukherjee, Hannah Hanwen Chang | AI Knowledge and Reasoning: Emulating Expert Creativity in Scientific
Research | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We investigate whether modern AI can emulate expert creativity in complex
scientific endeavors. We introduce novel methodology that utilizes original
research articles published after the AI's training cutoff, ensuring no prior
exposure, mitigating concerns of rote memorization and prior training. The AI
are tasked with redacting findings, predicting outcomes from redacted research,
and assessing prediction accuracy against reported results. An analysis of 589
studies published in four leading psychology journals over a 28-month period
showcases the AI's proficiency in understanding specialized research, deductive
reasoning, and evaluating evidentiary alignment--cognitive hallmarks of human
subject matter expertise and creativity. These findings suggest the potential
of general-purpose AI to transform academia, with roles requiring
knowledge-based creativity becoming increasingly susceptible to technological
substitution.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2024 22:30:47 GMT"
}
] | 1,712,620,800,000 | [
[
"Mukherjee",
"Anirban",
""
],
[
"Chang",
"Hannah Hanwen",
""
]
] |
2404.04442 | Saikat Barua | Saikat Barua | Exploring Autonomous Agents through the Lens of Large Language Models: A
Review | 47 pages, 5 figures | null | 10.48550/arXiv.2404.04442 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are transforming artificial intelligence,
enabling autonomous agents to perform diverse tasks across various domains.
These agents, proficient in human-like text comprehension and generation, have
the potential to revolutionize sectors from customer service to healthcare.
However, they face challenges such as multimodality, human value alignment,
hallucinations, and evaluation. Techniques like prompting, reasoning, tool
utilization, and in-context learning are being explored to enhance their
capabilities. Evaluation platforms like AgentBench, WebArena, and ToolLLM
provide robust methods for assessing these agents in complex scenarios. These
advancements are leading to the development of more resilient and capable
autonomous agents, anticipated to become integral in our digital lives,
assisting in tasks from email responses to disease diagnosis. The future of AI,
with LLMs at the forefront, is promising.
| [
{
"version": "v1",
"created": "Fri, 5 Apr 2024 22:59:02 GMT"
}
] | 1,712,707,200,000 | [
[
"Barua",
"Saikat",
""
]
] |
2404.04540 | Vishal Pallagani | Biplav Srivastava, Vishal Pallagani | The Case for Developing a Foundation Model for Planning-like Tasks from
Scratch | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Foundation Models (FMs) have revolutionized many areas of computing,
including Automated Planning and Scheduling (APS). For example, a recent study
found them useful for planning problems: plan generation, language translation,
model construction, multi-agent planning, interactive planning, heuristics
optimization, tool integration, and brain-inspired planning. Besides APS, there
are many seemingly related tasks involving the generation of a series of
actions with varying guarantees of their executability to achieve intended
goals, which we collectively call planning-like (PL) tasks like business
processes, programs, workflows, and guidelines, where researchers have
considered using FMs. However, previous works have primarily focused on
pre-trained, off-the-shelf FMs and optionally fine-tuned them. This paper
discusses the need for a comprehensive FM for PL tasks from scratch and
explores its design considerations. We argue that such an FM will open new and
efficient avenues for PL problem-solving, just like LLMs are creating for APS.
| [
{
"version": "v1",
"created": "Sat, 6 Apr 2024 07:44:40 GMT"
}
] | 1,712,620,800,000 | [
[
"Srivastava",
"Biplav",
""
],
[
"Pallagani",
"Vishal",
""
]
] |
2404.05235 | Dillon Z. Chen | Dillon Z. Chen, Sylvie Thi\'ebaux | Novelty Heuristics, Multi-Queue Search, and Portfolios for Numeric
Planning | Extended version of SoCS 2024 paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Heuristic search is a powerful approach for solving planning problems and
numeric planning is no exception. In this paper, we boost the performance of
heuristic search for numeric planning with various powerful techniques
orthogonal to improving heuristic informedness: numeric novelty heuristics, the
Manhattan distance heuristic, and exploring the use of multi-queue search and
portfolios for combining heuristics.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 07:01:35 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Apr 2024 15:00:15 GMT"
}
] | 1,712,880,000,000 | [
[
"Chen",
"Dillon Z.",
""
],
[
"Thiébaux",
"Sylvie",
""
]
] |
2404.05259 | Yani Zhang | Yani Zhang and Helmut B\"olcskei | Cellular automata, many-valued logic, and deep neural networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We develop a theory characterizing the fundamental capability of deep neural
networks to learn, from evolution traces, the logical rules governing the
behavior of cellular automata (CA). This is accomplished by first establishing
a novel connection between CA and Lukasiewicz propositional logic. While binary
CA have been known for decades to essentially perform operations in Boolean
logic, no such relationship exists for general CA. We demonstrate that
many-valued (MV) logic, specifically Lukasiewicz propositional logic,
constitutes a suitable language for characterizing general CA as logical
machines. This is done by interpolating CA transition functions to continuous
piecewise linear functions, which, by virtue of the McNaughton theorem, yield
formulae in MV logic characterizing the CA. Recognizing that deep rectified
linear unit (ReLU) networks realize continuous piecewise linear functions, it
follows that these formulae are naturally extracted from CA evolution traces by
deep ReLU networks. A corresponding algorithm together with a software
implementation is provided. Finally, we show that the dynamical behavior of CA
can be realized by recurrent neural networks.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 07:49:52 GMT"
}
] | 1,712,620,800,000 | [
[
"Zhang",
"Yani",
""
],
[
"Bölcskei",
"Helmut",
""
]
] |
2404.05272 | Jie Liu | Jie Liu, Tao Feng, Yan Jiang, Peizheng Wang, Chao Wu | Constructing Data Transaction Chains Based on Opportunity Cost
Exploration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data trading is increasingly gaining attention. However, the inherent
replicability and privacy concerns of data make it challenging to directly
apply traditional trading theories to data markets. This paper compares data
trading markets with traditional ones, focusing particularly on how the
replicability and privacy of data impact data markets. We discuss how data's
replicability fundamentally alters the concept of opportunity cost in
traditional microeconomics within the context of data markets. Additionally, we
explore how to leverage this change to maximize benefits without compromising
data privacy. This paper outlines the constraints for data circulation within
the privacy domain chain and presents a model that maximizes data's value under
these constraints. Specific application scenarios are provided, and experiments
demonstrate the solvability of this model.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 08:02:18 GMT"
}
] | 1,712,620,800,000 | [
[
"Liu",
"Jie",
""
],
[
"Feng",
"Tao",
""
],
[
"Jiang",
"Yan",
""
],
[
"Wang",
"Peizheng",
""
],
[
"Wu",
"Chao",
""
]
] |
2404.05735 | Giorgio Nordo | Giorgio Nordo, Saeid Jafari, Arif Mehmood, Bhimraj Basumatary | A Python Framework for Neutrosophic Sets and Mappings | 38 PAGES | Neutrosophic Sets and Systems 65, 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we present an open source framework developed in Python and
consisting of three distinct classes designed to manipulate in a simple and
intuitive way both symbolic representations of neutrosophic sets over universes
of various types as well as mappings between them. The capabilities offered by
this framework extend and generalize previous attempts to provide software
solutions to the manipulation of neutrosophic sets such as those proposed by
Salama et al., Saranya et al., El-Ghareeb, Topal et al. and Sleem. The code is
described in detail and many examples and use cases are also provided.
| [
{
"version": "v1",
"created": "Sun, 24 Mar 2024 16:00:16 GMT"
}
] | 1,712,707,200,000 | [
[
"Nordo",
"Giorgio",
""
],
[
"Jafari",
"Saeid",
""
],
[
"Mehmood",
"Arif",
""
],
[
"Basumatary",
"Bhimraj",
""
]
] |
2404.06325 | Ruoxi Li | Ruoxi Li, Dana Nau, Mark Roberts, Morgan Fine-Morris | Automatically Learning HTN Methods from Landmarks | This work has been submitted to FLAIRS-24 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Hierarchical Task Network (HTN) planning usually requires a domain engineer
to provide manual input about how to decompose a planning problem. Even
HTN-MAKER, a well-known method-learning algorithm, requires a domain engineer
to annotate the tasks with information about what to learn. We introduce
CURRICULAMA, an HTN method learning algorithm that completely automates the
learning process. It uses landmark analysis to compose annotated tasks and
leverages curriculum learning to order the learning of methods from simpler to
more complex. This eliminates the need for manual input, resolving a core issue
with HTN-MAKER. We prove CURRICULAMA's soundness, and show experimentally that
its convergence rate in learning a complete set of methods is substantially
similar to HTN-MAKER's.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 14:03:38 GMT"
}
] | 1,712,707,200,000 | [
[
"Li",
"Ruoxi",
""
],
[
"Nau",
"Dana",
""
],
[
"Roberts",
"Mark",
""
],
[
"Fine-Morris",
"Morgan",
""
]
] |
2404.06370 | Valdecy Pereira | Valdecy Pereira, Marcio Pereira Basilio, Carlos Henrique Tarjano
Santos | Enhancing Decision Analysis with a Large Language Model: pyDecision a
Comprehensive Library of MCDA Methods in Python | 23 pages, 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Purpose: Multicriteria decision analysis (MCDA) has become increasingly
essential for decision-making in complex environments. In response to this
need, the pyDecision library, implemented in Python and available at
https://bit.ly/3tLFGtH, has been developed to provide a comprehensive and
accessible collection of MCDA methods. Methods: The pyDecision offers 70 MCDA
methods, including AHP, TOPSIS, and the PROMETHEE and ELECTRE families. Beyond
offering a vast range of techniques, the library provides visualization tools
for more intuitive results interpretation. In addition to these features,
pyDecision has integrated ChatGPT, an advanced Large Language Model, where
decision-makers can use ChatGPT to discuss and compare the outcomes of
different methods, providing a more interactive and intuitive understanding of
the solutions. Findings: Large Language Models are undeniably potent but can
sometimes be a double-edged sword. Their answers may be misleading without
rigorous verification of their outputs, especially for researchers lacking deep
domain expertise. It is imperative to approach their insights with a discerning
eye and a solid foundation in the relevant field. Originality: With the
integration of MCDA methods and ChatGPT, pyDecision is a significant
contribution to the scientific community, as it is an invaluable resource for
researchers, practitioners, and decision-makers navigating complex
decision-making problems and seeking the most appropriate solutions based on
MCDA methods.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 15:06:25 GMT"
}
] | 1,712,707,200,000 | [
[
"Pereira",
"Valdecy",
""
],
[
"Basilio",
"Marcio Pereira",
""
],
[
"Santos",
    "Carlos Henrique Tarjano",
""
]
] |
2404.06474 | Jiayi Pan | Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine,
and Alane Suhr | Autonomous Evaluation and Refinement of Digital Agents | Code at https://github.com/Berkeley-NLP/Agent-Eval-Refine | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We show that domain-general automatic evaluators can significantly improve
the performance of agents for web navigation and device control. We experiment
with multiple evaluation models that trade off between inference cost,
modularity of design, and accuracy. We validate the performance of these models
in several popular benchmarks for digital agents, finding between 74.4 and
92.9% agreement with oracle evaluation metrics. Finally, we use these
evaluators to improve the performance of existing agents via fine-tuning and
inference-time guidance. Without any additional supervision, we improve
state-of-the-art performance by 29% on the popular benchmark WebArena, and
achieve a 75% relative improvement in a challenging domain transfer scenario.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 17:25:47 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Apr 2024 04:55:54 GMT"
}
] | 1,712,793,600,000 | [
[
"Pan",
"Jiayi",
""
],
[
"Zhang",
"Yichi",
""
],
[
"Tomlin",
"Nicholas",
""
],
[
"Zhou",
"Yifei",
""
],
[
"Levine",
"Sergey",
""
],
[
"Suhr",
"Alane",
""
]
] |
2404.06571 | Yunqing Li | Yunqing Li, Binil Starly | Building A Knowledge Graph to Enrich ChatGPT Responses in Manufacturing
Service Discovery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sourcing and identification of new manufacturing partners is crucial for
manufacturing system integrators to enhance agility and reduce risk through
supply chain diversification in the global economy. The advent of advanced
large language models has captured significant interest, due to their ability
to generate comprehensive and articulate responses across a wide range of
knowledge domains. However, these systems often fall short in accuracy and
completeness when responding to domain-specific inquiries, particularly in
areas like manufacturing service discovery. This research explores the
potential of leveraging Knowledge Graphs in conjunction with ChatGPT to
streamline the process for prospective clients in identifying small
manufacturing enterprises. In this study, we propose a method that integrates
bottom-up ontology with advanced machine learning models to develop a
Manufacturing Service Knowledge Graph from an array of structured and
unstructured data sources, including the digital footprints of small-scale
manufacturers throughout North America. The Knowledge Graph and the learned
graph embedding vectors are leveraged to tackle intricate queries within the
digital supply chain network, responding with enhanced reliability and greater
interpretability. The approach highlighted is scalable to millions of entities
that can be distributed to form a global Manufacturing Service Knowledge
Network Graph that can potentially interconnect multiple types of Knowledge
Graphs that span industry sectors, geopolitical boundaries, and business
domains. The dataset developed for this study, now publicly accessible,
encompasses more than 13,000 manufacturers' weblinks, manufacturing services,
certifications, and location entity types.
| [
{
"version": "v1",
"created": "Tue, 9 Apr 2024 18:46:46 GMT"
}
] | 1,712,793,600,000 | [
[
"Li",
"Yunqing",
""
],
[
"Starly",
"Binil",
""
]
] |
2404.06946 | Athanasios Karapantelakis | Athanasios Karapantelakis, Alexandros Nikou, Ajay Kattepur, Jean
Martins, Leonid Mokrushin, Swarup Kumar Mohalik, Marin Orlic, Aneta
Vulgarakis Feljan | A Survey on the Integration of Generative AI for Critical Thinking in
Mobile Networks | 14 pages, 3 figures, 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In the near future, mobile networks are expected to broaden their services
and coverage to accommodate a larger user base and diverse user needs. Thus,
they will increasingly rely on artificial intelligence (AI) to manage network
operation and control costs, undertaking complex decision-making roles. This
shift will necessitate the application of techniques that incorporate critical
thinking abilities, including reasoning and planning. Symbolic AI techniques
already facilitate critical thinking based on existing knowledge. Yet, their
use in telecommunications is hindered by the high cost of mostly manual
curation of this knowledge and high computational complexity of reasoning
tasks. At the same time, there is a spurt of innovations in industries such as
telecommunications due to Generative AI (GenAI) technologies, operating
independently of human-curated knowledge. However, their capacity for critical
thinking remains uncertain. This paper aims to address this gap by examining
the current status of GenAI algorithms with critical thinking capabilities and
investigating their potential applications in telecom networks. Specifically,
the aim of this study is to offer an introduction to the potential utilization
of GenAI for critical thinking techniques in mobile networks, while also
establishing a foundation for future research.
| [
{
"version": "v1",
"created": "Wed, 10 Apr 2024 11:55:33 GMT"
}
] | 1,712,793,600,000 | [
[
"Karapantelakis",
"Athanasios",
""
],
[
"Nikou",
"Alexandros",
""
],
[
"Kattepur",
"Ajay",
""
],
[
"Martins",
"Jean",
""
],
[
"Mokrushin",
"Leonid",
""
],
[
"Mohalik",
"Swarup Kumar",
""
],
[
"Orlic",
"Marin",
""
],
[
"Feljan",
"Aneta Vulgarakis",
""
]
] |
2404.07227 | Michael Timothy Bennett | Michael Timothy Bennett | Is Complexity an Illusion? | Accepted for publication in the Proceedings of the 17th Conference on
Artificial General Intelligence, 2024. Definitions shared with
arXiv:2302.00843 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Simplicity is held by many to be the key to general intelligence. Simpler
models tend to "generalise", identifying the cause or generator of data with
greater sample efficiency. The implications of the correlation between
simplicity and generalisation extend far beyond computer science, addressing
questions of physics and even biology. Yet simplicity is a property of form,
while generalisation is of function. In interactive settings, any correlation
between the two depends on interpretation. In theory there could be no
correlation and yet in practice, there is. Previous theoretical work showed
generalisation to be a consequence of "weak" constraints implied by function,
not form. Experiments demonstrated choosing weak constraints over simple forms
yielded a 110-500% improvement in generalisation rate. Here we show that all
constraints can take equally simple forms, regardless of weakness. However if
forms are spatially extended, then function is represented using a finite
subset of forms. If function is represented using a finite subset of forms,
then we can force a correlation between simplicity and generalisation by making
weak constraints take simple forms. If function is determined by a goal
directed process that favours versatility (e.g. natural selection), then
efficiency demands weak constraints take simple forms. Complexity has no causal
influence on generalisation, but appears to do so due to confounding.
| [
{
"version": "v1",
"created": "Sun, 31 Mar 2024 13:36:55 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Apr 2024 09:08:35 GMT"
},
{
"version": "v3",
"created": "Sun, 28 Apr 2024 10:44:36 GMT"
},
{
"version": "v4",
"created": "Thu, 30 May 2024 13:38:42 GMT"
}
] | 1,717,113,600,000 | [
[
"Bennett",
"Michael Timothy",
""
]
] |
2404.08543 | J.-M. Chauvet | Jean-Marie Chauvet | Memory Traces: Are Transformers Tulving Machines? | 14 pages, 1 figure and 4 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Memory traces--changes in the memory system that result from the perception
and encoding of an event--were measured in pioneering studies by Endel Tulving
and Michael J. Watkins in 1975. These and further experiments informed the
maturation of Tulving's memory model, from the GAPS (General Abstract
Processing System) to the SPI (Serial-Parallel Independent) model. Having
current top-of-the-line LLMs revisit the original Tulving-Watkins tests may
help in assessing whether foundation models completely instantiate or not this
class of psychological models.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2024 15:37:35 GMT"
}
] | 1,713,139,200,000 | [
[
"Chauvet",
"Jean-Marie",
""
]
] |
2404.08706 | Chengpeng Hu | Chengpeng Hu, Yunlong Zhao, Jialin Liu | Game Generation via Large Language Models | 2024 IEEE Conference on Games | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the emergence of large language models (LLMs) has unlocked new
opportunities for procedural content generation. However, recent attempts
mainly focus on level generation for specific games with defined game rules
such as Super Mario Bros. and Zelda. This paper investigates the game
generation via LLMs. Based on video game description language, this paper
proposes an LLM-based framework to generate game rules and levels
simultaneously. Experiments demonstrate how the framework works with prompts
considering different combinations of context. Our findings extend the current
applications of LLMs and offer new insights for generating new games in the
area of procedural content generation.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2024 10:06:05 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 03:17:00 GMT"
}
] | 1,717,113,600,000 | [
[
"Hu",
"Chengpeng",
""
],
[
"Zhao",
"Yunlong",
""
],
[
"Liu",
"Jialin",
""
]
] |
2404.08837 | Cl\'audio Gomes | Cl\'audio Gomes, Jo\~ao Paulo Fernandes, Gabriel Falcao, Soummya Kar,
Sridhar Tayur | Vehicle-to-Vehicle Charging: Model, Complexity, and Heuristics | 7 pages, 6 figures, and 3 tables. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid adoption of Electric Vehicles (EVs) poses challenges for
electricity grids to accommodate or mitigate peak demand. Vehicle-to-Vehicle
Charging (V2VC) has been recently adopted by popular EVs, posing new
opportunities and challenges to the management and operation of EVs. We present
a novel V2VC model that allows decision-makers to take V2VC into account when
optimizing their EV operations. We show that optimizing V2VC is NP-Complete and
find that even small problem instances are computationally challenging. We
propose R-V2VC, a heuristic that takes advantage of the resulting totally
unimodular constraint matrix to efficiently solve problems of realistic sizes.
Our results demonstrate that R-V2VC presents a linear growth in the solution
time as the problem size increases, while achieving solutions of optimal or
near-optimal quality. R-V2VC can be used for real-world operations and to study
what-if scenarios when evaluating the costs and benefits of V2VC.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2024 22:46:37 GMT"
}
] | 1,713,225,600,000 | [
[
"Gomes",
"Cláudio",
""
],
[
"Fernandes",
"João Paulo",
""
],
[
"Falcao",
"Gabriel",
""
],
[
"Kar",
"Soummya",
""
],
[
"Tayur",
"Sridhar",
""
]
] |
2404.09304 | Tristan Cazenave | Tristan Cazenave | Monte Carlo Search Algorithms Discovering Monte Carlo Tree Search
Exploration Terms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Monte Carlo Tree Search and Monte Carlo Search have good results for many
combinatorial problems. In this paper we propose to use Monte Carlo Search to
design mathematical expressions that are used as exploration terms for Monte
Carlo Tree Search algorithms. The optimized Monte Carlo Tree Search algorithms
are PUCT and SHUSS. We automatically design the PUCT and the SHUSS root
exploration terms. For small search budgets of 32 evaluations the discovered
root exploration terms make both algorithms competitive with usual PUCT.
| [
{
"version": "v1",
"created": "Sun, 14 Apr 2024 17:06:20 GMT"
}
] | 1,713,225,600,000 | [
[
"Cazenave",
"Tristan",
""
]
] |
2404.09468 | Zhuo Chen | Yichi Zhang, Zhuo Chen, Lingbing Guo, Yajing Xu, Binbin Hu, Ziqi Liu,
Huajun Chen, Wen Zhang | MyGO: Discrete Modality Information as Fine-Grained Tokens for
Multi-modal Knowledge Graph Completion | Work in progress; Repo is available at
https://github.com/zjukg/MyGO | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal knowledge graphs (MMKG) store structured world knowledge
containing rich multi-modal descriptive information. To overcome their inherent
incompleteness, multi-modal knowledge graph completion (MMKGC) aims to discover
unobserved knowledge from given MMKGs, leveraging both structural information
from the triples and multi-modal information of the entities. Existing MMKGC
methods usually extract multi-modal features with pre-trained models and employ
a fusion module to integrate multi-modal features with triple prediction.
However, this often results in a coarse handling of multi-modal data,
overlooking the nuanced, fine-grained semantic details and their interactions.
To tackle this shortfall, we introduce a novel framework MyGO to process, fuse,
and augment the fine-grained modality information from MMKGs. MyGO tokenizes
multi-modal raw data as fine-grained discrete tokens and learns entity
representations with a cross-modal entity encoder. To further augment the
multi-modal representations, MyGO incorporates fine-grained contrastive
learning to highlight the specificity of the entity representations.
Experiments on standard MMKGC benchmarks reveal that our method surpasses 20 of
the latest models, underlining its superior performance. Code and data are
available at https://github.com/zjukg/MyGO
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 05:40:41 GMT"
}
] | 1,713,225,600,000 | [
[
"Zhang",
"Yichi",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Guo",
"Lingbing",
""
],
[
"Xu",
"Yajing",
""
],
[
"Hu",
"Binbin",
""
],
[
"Liu",
"Ziqi",
""
],
[
"Chen",
"Huajun",
""
],
[
"Zhang",
"Wen",
""
]
] |
2404.09554 | Johannes Schneider | Johannes Schneider | Explainable Generative AI (GenXAI): A Survey, Conceptualization, and
Research Agenda | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative AI (GenAI) marked a shift from AI being able to recognize to AI
being able to generate solutions for a wide variety of tasks. As the generated
solutions and applications become increasingly more complex and multi-faceted,
novel needs, objectives, and possibilities have emerged for explainability
(XAI). In this work, we elaborate on why XAI has gained importance with the
rise of GenAI and its challenges for explainability research. We also unveil
novel and emerging desiderata that explanations should fulfill, covering
aspects such as verifiability, interactivity, security, and cost. To this end,
we focus on surveying existing works. Furthermore, we provide a taxonomy of
relevant dimensions that allows us to better characterize existing XAI
mechanisms and methods for GenAI. We discuss different avenues to ensure XAI,
from training data to prompting. Our paper offers a short but concise technical
background of GenAI for non-technical readers, focusing on text and images to
better understand novel or adapted XAI techniques for GenAI. However, due to
the vast array of works on GenAI, we decided to forego detailed aspects of XAI
related to evaluation and usage of explanations. As such, the manuscript
interests both technically oriented people and other disciplines, such as
social scientists and information systems researchers. Our research roadmap
provides more than ten directions for future investigation.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 08:18:16 GMT"
}
] | 1,713,225,600,000 | [
[
"Schneider",
"Johannes",
""
]
] |
2404.09587 | Umutcan Serles PhD | Umutcan Serles and Elias K\"arle and Richard Hunkel and Dieter Fensel | German Tourism Knowledge Graph | 4 pages. Accepted to Poster and Demo Track of 21st European Semantic
Web Conference 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tourism is one of the most critical sectors of the global economy. Due to its
heterogeneous and fragmented nature, it provides one of the most suitable use
cases for knowledge graphs. In this poster, we introduce the German Tourism
Knowledge Graph that integrates tourism-related data from 16 federal states of
Germany and various other sources to provide a curated knowledge source for
various applications. It is publicly available through GUIs and an API.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 08:56:53 GMT"
}
] | 1,713,225,600,000 | [
[
"Serles",
"Umutcan",
""
],
[
"Kärle",
"Elias",
""
],
[
"Hunkel",
"Richard",
""
],
[
"Fensel",
"Dieter",
""
]
] |
2404.09631 | Diego Aineto | Diego Aineto, Enrico Scala | Action Model Learning with Guarantees | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper studies the problem of action model learning with full
observability. Following the learning by search paradigm by Mitchell, we
develop a theory for action model learning based on version spaces that
interprets the task as a search for hypotheses that are consistent with the
learning examples. Our theoretical findings are instantiated in an online
algorithm that maintains a compact representation of all solutions of the
problem. Among this range of solutions, we bring attention to action models
approximating the actual transition system from below (sound models) and from
above (complete models). We show how to manipulate the output of our learning
algorithm to build deterministic and non-deterministic formulations of the
sound and complete models and prove that, given enough examples, both
formulations converge into the very same true model. Our experiments reveal
their usefulness over a range of planning domains.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 10:01:43 GMT"
}
] | 1,713,225,600,000 | [
[
"Aineto",
"Diego",
""
],
[
"Scala",
"Enrico",
""
]
] |
2404.09877 | Savvas Papaioannou | Savvas Papaioannou, Panayiotis Kolios, Christos G. Panayiotou, and
Marios M. Polycarpou | Synergising Human-like Responses and Machine Intelligence for Planning
in Disaster Response | 2024 IEEE World Congress on Computational Intelligence (IEEE WCCI),
2024 International Joint Conference on Neural Networks (IJCNN) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the rapidly changing environments of disaster response, planning and
decision-making for autonomous agents involve complex and interdependent
choices. Although recent advancements have improved traditional artificial
intelligence (AI) approaches, they often struggle in such settings,
particularly when applied to agents operating outside their well-defined
training parameters. To address these challenges, we propose an attention-based
cognitive architecture inspired by Dual Process Theory (DPT). This framework
integrates, in an online fashion, rapid yet heuristic (human-like) responses
(System 1) with the slow but optimized planning capabilities of machine
intelligence (System 2). We illustrate how a supervisory controller can
dynamically determine in real-time the engagement of either system to optimize
mission objectives by assessing their performance across a number of distinct
attributes. Evaluated for trajectory planning in dynamic environments, our
framework demonstrates that this synergistic integration effectively manages
complex tasks by optimizing multiple mission objectives.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 15:47:08 GMT"
}
] | 1,713,225,600,000 | [
[
"Papaioannou",
"Savvas",
""
],
[
"Kolios",
"Panayiotis",
""
],
[
"Panayiotou",
"Christos G.",
""
],
[
"Polycarpou",
"Marios M.",
""
]
] |
2404.09939 | Zhaoyu Li | Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian
Zhang, Kaiyu Yang, Xujie Si | A Survey on Deep Learning for Theorem Proving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theorem proving is a fundamental aspect of mathematics, spanning from
informal reasoning in mathematical language to rigorous derivations in formal
systems. In recent years, the advancement of deep learning, especially the
emergence of large language models, has sparked a notable surge of research
exploring these techniques to enhance the process of theorem proving. This
paper presents a pioneering comprehensive survey of deep learning for theorem
proving by offering i) a thorough review of existing approaches across various
tasks such as autoformalization, premise selection, proofstep generation, and
proof search; ii) a meticulous summary of available datasets and strategies for
data generation; iii) a detailed analysis of evaluation metrics and the
performance of state-of-the-art methods; and iv) a critical discussion on the
persistent challenges and the promising avenues for future exploration. Our
survey aims to serve as a foundational reference for deep learning approaches
in theorem proving, seeking to catalyze further research endeavors in this
rapidly growing field.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 17:07:55 GMT"
}
] | 1,713,225,600,000 | [
[
"Li",
"Zhaoyu",
""
],
[
"Sun",
"Jialiang",
""
],
[
"Murphy",
"Logan",
""
],
[
"Su",
"Qidong",
""
],
[
"Li",
"Zenan",
""
],
[
"Zhang",
"Xian",
""
],
[
"Yang",
"Kaiyu",
""
],
[
"Si",
"Xujie",
""
]
] |
2404.10160 | Rosy Cheng | Ruoxi Cheng, Haoxuan Ma, Shuirong Cao, Tianyu Shi | RLRF:Reinforcement Learning from Reflection through Debates as Feedback
for Bias Mitigation in LLMs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biases and stereotypes in Large Language Models (LLMs) can have negative
implications for user experience and societal outcomes. Current approaches to
bias mitigation like Reinforcement Learning from Human Feedback (RLHF) rely on
costly manual feedback. While LLMs have the capability to understand logic and
identify biases in text, they often struggle to effectively acknowledge and
address their own biases due to factors such as prompt influences, internal
mechanisms, and policies. We found that informing LLMs that the content they
generate is not their own and questioning them about potential biases in the
text can significantly enhance their recognition and improvement capabilities
regarding biases. Based on this finding, we propose RLRF (Reinforcement
Learning from Reflection through Debates as Feedback), replacing human feedback
with AI for bias mitigation. RLRF engages LLMs in multi-role debates to expose
biases and gradually reduce biases in each iteration using a ranking scoring
mechanism. The dialogues are then used to create a dataset with high-bias and
low-bias instances to train the reward model in reinforcement learning. This
dataset can be generated by the same LLMs for self-reflection or by a superior
LLM guiding the former in a student-teacher mode to enhance its logical
reasoning abilities. Experimental results demonstrate the significant
effectiveness of our approach in bias reduction.
| [
{
"version": "v1",
"created": "Mon, 15 Apr 2024 22:18:50 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Apr 2024 04:08:39 GMT"
}
] | 1,714,435,200,000 | [
[
"Cheng",
"Ruoxi",
""
],
[
"Ma",
"Haoxuan",
""
],
[
"Cao",
"Shuirong",
""
],
[
"Shi",
"Tianyu",
""
]
] |
2404.10200 | Joshua Ackerman | George Cybenko, Joshua Ackerman and Paul Lintilhac | TEL'M: Test and Evaluation of Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Language Models have demonstrated remarkable capabilities on some tasks while
failing dramatically on others. The situation has generated considerable
interest in understanding and comparing the capabilities of various Language
Models (LMs) but those efforts have been largely ad hoc with results that are
often little more than anecdotal. This is in stark contrast with testing and
evaluation processes used in healthcare, radar signal processing, and other
defense areas. In this paper, we describe Test and Evaluation of Language
Models (TEL'M) as a principled approach for assessing the value of current and
future LMs focused on high-value commercial, government and national security
applications. We believe that this methodology could be applied to other
Artificial Intelligence (AI) technologies as part of the larger goal of
"industrializing" AI.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 00:54:17 GMT"
}
] | 1,713,312,000,000 | [
[
"Cybenko",
"George",
""
],
[
"Ackerman",
"Joshua",
""
],
[
"Lintilhac",
"Paul",
""
]
] |
2404.10317 | Jennifer D'Souza | Hamed Babaei Giglou and Jennifer D'Souza and Felix Engel and S\"oren
Auer | LLMs4OM: Matching Ontologies with Large Language Models | 8 pages, 1 figure, accepted to ESWC 2024 Special Track on LLMs for
Knowledge Engineering
(https://2024.eswc-conferences.org/call-for-papers-llms/) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology Matching (OM), is a critical task in knowledge integration, where
aligning heterogeneous ontologies facilitates data interoperability and
knowledge sharing. Traditional OM systems often rely on expert knowledge or
predictive models, with limited exploration of the potential of Large Language
Models (LLMs). We present the LLMs4OM framework, a novel approach to evaluate
the effectiveness of LLMs in OM tasks. This framework utilizes two modules for
retrieval and matching, respectively, enhanced by zero-shot prompting across
three ontology representations: concept, concept-parent, and concept-children.
Through comprehensive evaluations using 20 OM datasets from various domains, we
demonstrate that LLMs, under the LLMs4OM framework, can match and even surpass
the performance of traditional OM systems, particularly in complex matching
scenarios. Our results highlight the potential of LLMs to significantly
contribute to the field of OM.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 06:55:45 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Apr 2024 10:37:51 GMT"
}
] | 1,713,916,800,000 | [
[
"Giglou",
"Hamed Babaei",
""
],
[
"D'Souza",
"Jennifer",
""
],
[
"Engel",
"Felix",
""
],
[
"Auer",
"Sören",
""
]
] |
2404.10329 | Reihaneh Amini | Reihaneh Amini, Sanaz Saki Norouzi, Pascal Hitzler, Reza Amini | Towards Complex Ontology Alignment using Large Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Ontology alignment, a critical process in the Semantic Web for detecting
relationships between different ontologies, has traditionally focused on
identifying so-called "simple" 1-to-1 relationships through class labels and
properties comparison. The more practically useful exploration of more complex
alignments remains a hard problem to automate, and as such is largely
underexplored, i.e. in application practice it is usually done manually by
ontology and domain experts. Recently, the surge in Natural Language Processing
(NLP) capabilities, driven by advancements in Large Language Models (LLMs),
presents new opportunities for enhancing ontology engineering practices,
including ontology alignment tasks. This paper investigates the application of
LLM technologies to tackle the complex ontology alignment challenge. Leveraging
a prompt-based approach and integrating rich ontology content (so-called
modules), our work constitutes a significant advance towards automating the complex
alignment task.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 07:13:22 GMT"
}
] | 1,713,312,000,000 | [
[
"Amini",
"Reihaneh",
""
],
[
"Norouzi",
"Sanaz Saki",
""
],
[
"Hitzler",
"Pascal",
""
],
[
"Amini",
"Reza",
""
]
] |
2404.10337 | Jianqi Zhang | Jianqi Zhang, Jingyao Wang, Wenwen Qiang, Fanjiang Xu, Changwen Zheng,
Fuchun Sun and Hui Xiong | Intriguing Properties of Positional Encoding in Time Series Forecasting | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Transformer-based methods have made significant progress in time series
forecasting (TSF). They primarily handle two types of tokens, i.e., temporal
tokens that contain all variables of the same timestamp, and variable tokens
that contain all input time points for a specific variable. Transformer-based
methods rely on positional encoding (PE) to mark tokens' positions,
helping the model perceive the correlation between tokens. However, in
TSF, research on PE remains insufficient. To address this gap, we conduct
experiments and uncover intriguing properties of existing PEs in TSF: (i) The
positional information injected by PEs diminishes as the network depth
increases; (ii) Enhancing positional information in deep networks is
advantageous for improving the model's performance; (iii) PE based on the
similarity between tokens can improve the model's performance. Motivated by
these findings, we introduce two new PEs: Temporal Position Encoding (T-PE) for
temporal tokens and Variable Positional Encoding (V-PE) for variable tokens.
Both T-PE and V-PE incorporate geometric PE based on tokens' positions and
semantic PE based on the similarity between tokens but using different
calculations. To leverage both PEs, we design a Transformer-based
dual-branch framework named T2B-PE. It first calculates temporal tokens'
correlation and variable tokens' correlation respectively and then fuses the
dual-branch features through the gated unit. Extensive experiments demonstrate
the superior robustness and effectiveness of T2B-PE. The code is available at:
\href{https://github.com/jlu-phyComputer/T2B-PE}{https://github.com/jlu-phyComputer/T2B-PE}.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 07:21:39 GMT"
}
] | 1,713,312,000,000 | [
[
"Zhang",
"Jianqi",
""
],
[
"Wang",
"Jingyao",
""
],
[
"Qiang",
"Wenwen",
""
],
[
"Xu",
"Fanjiang",
""
],
[
"Zheng",
"Changwen",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Xiong",
"Hui",
""
]
] |
2404.10416 | Pancheng Wang | Pancheng Wang, Shasha Li, Dong Li, Kehan Long, Jintao Tang, Ting Wang | Disentangling Instructive Information from Ranked Multiple Candidates
for Multi-Document Scientific Summarization | Accepted by SIGIR 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically condensing multiple topic-related scientific papers into a
succinct and concise summary is referred to as Multi-Document Scientific
Summarization (MDSS). While commonly used abstractive MDSS methods can
generate flexible and coherent summaries, difficulty in handling global
information and a lack of guidance during decoding still make it challenging
to generate better summaries. To alleviate these two shortcomings, this paper
introduces summary candidates into MDSS, utilizing the global information of
the document set and additional guidance from the summary candidates to guide
the decoding process. Our insights are twofold: Firstly, summary candidates can
provide instructive information from both positive and negative perspectives,
and secondly, selecting higher-quality candidates from multiple options
contributes to producing better summaries. Drawing on the insights, we propose
a summary candidates fusion framework -- Disentangling Instructive information
from Ranked candidates (DIR) for MDSS. Specifically, DIR first uses a
specialized pairwise comparison method towards multiple candidates to pick out
those of higher quality. Then DIR disentangles the instructive information of
summary candidates into positive and negative latent variables with Conditional
Variational Autoencoder. These variables are further incorporated into the
decoder to guide generation. We evaluate our approach with three different
types of Transformer-based models and three different types of candidates, and
consistently observe noticeable performance improvements according to automatic
and human evaluation. More analyses further demonstrate the effectiveness of
our model in handling global information and enhancing decoding
controllability.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 09:33:07 GMT"
}
] | 1,713,312,000,000 | [
[
"Wang",
"Pancheng",
""
],
[
"Li",
"Shasha",
""
],
[
"Li",
"Dong",
""
],
[
"Long",
"Kehan",
""
],
[
"Tang",
"Jintao",
""
],
[
"Wang",
"Ting",
""
]
] |
2404.10429 | Zhengwei Tao | Zhengwei Tao, Zhi Jin, Junqiang Huang, Xiancai Chen, Xiaoying Bai,
Haiyan Zhao, Yifan Zhang, Chongyang Tao | MEEL: Multi-Modal Event Evolution Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal Event Reasoning (MMER) endeavors to endow machines with the
ability to comprehend intricate event relations across diverse data modalities.
MMER is fundamental and underlies a broad range of applications. Despite
extensive instruction fine-tuning, current multi-modal large language models
still fall short of this ability. The disparity stems from the fact that
existing models fail to capture the underlying principles governing event
evolution in various scenarios. In this paper, we introduce Multi-Modal Event Evolution
Learning (MEEL) to enable the model to grasp the event evolution mechanism,
yielding advanced MMER ability. Specifically, we commence with the design of
event diversification to gather seed events from a rich spectrum of scenarios.
Subsequently, we employ ChatGPT to generate evolving graphs for these seed
events. We propose an instruction encapsulation process that formulates the
evolving graphs into instruction-tuning data, aligning the comprehension of
event reasoning to humans. Finally, we observe that models trained in this way
still struggle to fully comprehend event evolution. To address this, we
propose the guiding discrimination strategy, in which models are trained to
discriminate the improper evolution direction. We collect and curate a
benchmark M-EV2 for MMER. Extensive experiments on M-EV2 validate the
effectiveness of our approach, showcasing competitive performance in
open-source multi-modal LLMs.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 09:46:37 GMT"
}
] | 1,713,312,000,000 | [
[
"Tao",
"Zhengwei",
""
],
[
"Jin",
"Zhi",
""
],
[
"Huang",
"Junqiang",
""
],
[
"Chen",
"Xiancai",
""
],
[
"Bai",
"Xiaoying",
""
],
[
"Zhao",
"Haiyan",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Tao",
"Chongyang",
""
]
] |
2404.10505 | Mahta Bakhshizadeh | Mahta Bakhshizadeh, Christian Jilek, Markus Schr\"oder, Heiko Maus,
Andreas Dengel | Data Collection of Real-Life Knowledge Work in Context: The RLKWiC
Dataset | Accepted and presented at the 10th International Conference on
Information Management (ICIM2024), will be published in Springer CCIS series
Conference Proceedings (Electronic ISSN: 1865-0937; Print ISSN: 1865-0929) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the years, various approaches have been employed to enhance the
productivity of knowledge workers, from addressing psychological well-being to
the development of personal knowledge assistants. A significant challenge in
this research area has been the absence of a comprehensive, publicly accessible
dataset that mirrors real-world knowledge work. Although a handful of datasets
exist, many are restricted in access or lack vital information dimensions,
complicating meaningful comparison and benchmarking in the domain. This paper
presents RLKWiC, a novel dataset of Real-Life Knowledge Work in Context,
derived from monitoring the computer interactions of eight participants over a
span of two months. As the first publicly available dataset offering a wealth
of essential information dimensions (such as explicated contexts, textual
contents, and semantics), RLKWiC seeks to address the research gap in the
personal information management domain, providing valuable insights for
modeling user behavior.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 12:23:59 GMT"
}
] | 1,713,312,000,000 | [
[
"Bakhshizadeh",
"Mahta",
""
],
[
"Jilek",
"Christian",
""
],
[
"Schröder",
"Markus",
""
],
[
"Maus",
"Heiko",
""
],
[
"Dengel",
"Andreas",
""
]
] |