title,firstAuthor,url,dateSubmitted,keywords,pdf_titles,abstract
"""Do Anything Now"": Characterizing and Evaluating In-The-Wild Jailbreak  Prompts on Large Language Models",Xinyue Shen,http://arxiv.org/pdf/2308.03825v1.pdf,2023-08-07,"['cs.cr', 'cs.lg']",2308.03825v1.pdf,"  The misuse of large language models (LLMs) has garnered significant attention
from the general public and LLM vendors. In response, efforts have been made to
align LLMs with human values and intended use. However, a particular type of
adversarial prompt, known as the jailbreak prompt, has emerged and continuously
evolved to bypass the safeguards and elicit harmful content from LLMs. In this
paper, we conduct the first measurement study on jailbreak prompts in the wild,
with 6,387 prompts collected from four platforms over six months. Leveraging
natural language processing technologies and graph-based community detection
methods, we discover unique characteristics of jailbreak prompts and their
major attack strategies, such as prompt injection and privilege escalation. We
also observe that jailbreak prompts increasingly shift from public platforms to
private ones, posing new challenges for LLM vendors in proactive detection. To
assess the potential harm caused by jailbreak prompts, we create a question set
comprising 46,800 samples across 13 forbidden scenarios. Our experiments show
that current LLMs and safeguards cannot adequately defend against jailbreak prompts in
all scenarios. Particularly, we identify two highly effective jailbreak prompts
which achieve 0.99 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and
they have persisted online for over 100 days. Our work sheds light on the
severe and evolving threat landscape of jailbreak prompts. We hope our study
can facilitate the research community and LLM vendors in promoting safer and
regulated LLMs.
"
Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study,Yi Liu,http://arxiv.org/pdf/2305.13860v1.pdf,2023-05-23,"['cs.se', 'cs.ai', 'cs.cl']",2305.13860v1.pdf,"  Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential
but also introduce challenges related to content constraints and potential
misuse. Our study investigates three key research questions: (1) the number of
different prompt types that can jailbreak LLMs, (2) the effectiveness of
jailbreak prompts in circumventing LLM constraints, and (3) the resilience of
ChatGPT against these jailbreak prompts. Initially, we develop a classification
model to analyze the distribution of existing prompts, identifying ten distinct
patterns and three categories of jailbreak prompts. Subsequently, we assess the
jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a
dataset of 3,120 jailbreak questions across eight prohibited scenarios.
Finally, we evaluate the resistance of ChatGPT against jailbreak prompts,
finding that the prompts can consistently evade the restrictions in 40 use-case
scenarios. The study underscores the importance of prompt structures in
jailbreaking LLMs and discusses the challenges of robust jailbreak prompt
generation and prevention.
"
AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language  Models,Xiaogeng Liu,http://arxiv.org/pdf/2310.04451v1.pdf,2023-10-03,"['cs.cl', 'cs.ai']",2310.04451v1.pdf,"  The aligned Large Language Models (LLMs) are powerful language understanding
and decision-making tools that are created through extensive alignment with
human feedback. However, these large models remain susceptible to jailbreak
attacks, where adversaries manipulate prompts to elicit malicious outputs that
should not be given by aligned LLMs. Investigating jailbreak prompts can lead
us to delve into the limitations of LLMs and further guide us to secure them.
Unfortunately, existing jailbreak techniques suffer from either (1) scalability
issues, where attacks heavily rely on manual crafting of prompts, or (2)
stealthiness problems, as attacks depend on token-based algorithms to generate
prompts that are often semantically meaningless, making them susceptible to
detection through basic perplexity testing. In light of these challenges, we
intend to answer this question: Can we develop an approach that can
automatically generate stealthy jailbreak prompts? In this paper, we introduce
AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can
automatically generate stealthy jailbreak prompts via a carefully designed
hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN
not only automates the process while preserving semantic meaningfulness, but
also demonstrates superior attack strength in cross-model transferability and
cross-sample universality compared with the baseline. Moreover, we also compare
AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass
them effectively.
"
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM,Bochuan Cao,http://arxiv.org/pdf/2309.14348v1.pdf,2023-09-18,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2309.14348v1.pdf,"  Recently, Large Language Models (LLMs) have made significant advancements and
are now widely used across various domains. Unfortunately, there has been a
rising concern that LLMs can be misused to generate harmful or malicious
content. Though a line of research has focused on aligning LLMs with human
values and preventing them from producing inappropriate content, such
alignments are usually vulnerable and can be bypassed by alignment-breaking
attacks via adversarially optimized or handcrafted jailbreaking prompts. In
this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against
potential alignment-breaking attacks. RA-LLM can be directly constructed upon
an existing aligned LLM with a robust alignment checking function, without
requiring any expensive retraining or fine-tuning process of the original LLM.
Furthermore, we also provide a theoretical analysis for RA-LLM to verify its
effectiveness in defending against alignment-breaking attacks. Through
real-world experiments on open-source large language models, we demonstrate
that RA-LLM can successfully defend against both state-of-the-art adversarial
prompts and popular handcrafted jailbreaking prompts by reducing their attack
success rates from nearly 100% to around 10% or less.
"
FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively  Discovering Jailbreak Vulnerabilities in Large Language Models,Dongyu Yao,http://arxiv.org/pdf/2309.05274v1.pdf,2023-09-11,['cs.cr'],2309.05274v1.pdf,"  Jailbreak vulnerabilities in Large Language Models (LLMs), which exploit
meticulously crafted prompts to elicit content that violates service
guidelines, have captured the attention of research communities. While model
owners can defend against individual jailbreak prompts through safety training
strategies, this relatively passive approach struggles to handle the broader
category of similar jailbreaks. To tackle this issue, we introduce FuzzLLM, an
automated fuzzing framework designed to proactively test and discover jailbreak
vulnerabilities in LLMs. We utilize templates to capture the structural
integrity of a prompt and isolate key features of a jailbreak class as
constraints. By integrating different base classes into powerful combo attacks
and varying the elements of constraints and prohibited questions, FuzzLLM
enables efficient testing with reduced manual effort. Extensive experiments
demonstrate FuzzLLM's effectiveness and comprehensiveness in vulnerability
discovery across various LLMs.
"
Scalable and Transferable Black-Box Jailbreaks for Language Models via  Persona Modulation,Rusheb Shah,http://arxiv.org/pdf/2311.03348v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'cs.lg']",2311.03348v1.pdf,"  Despite efforts to align large language models to produce harmless responses,
they are still vulnerable to jailbreak prompts that elicit unrestricted
behaviour. In this work, we investigate persona modulation as a black-box
jailbreaking method to steer a target model to take on personalities that are
willing to comply with harmful instructions. Rather than manually crafting
prompts for each persona, we automate the generation of jailbreaks using a
language model assistant. We demonstrate a range of harmful completions made
possible by persona modulation, including detailed instructions for
synthesising methamphetamine, building a bomb, and laundering money. These
automated attacks achieve a harmful completion rate of 42.5% in GPT-4, which is
185 times larger than before modulation (0.23%). These prompts also transfer to
Claude 2 and Vicuna with harmful completion rates of 61.0% and 35.9%,
respectively. Our work reveals yet another vulnerability in commercial large
language models and highlights the need for more comprehensive safeguards.
"
Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output  Robustness of Large Language Models,Huachuan Qiu,http://arxiv.org/pdf/2307.08487v3.pdf,2023-07-17,['cs.cl'],2307.08487v3.pdf,"  Considerable research efforts have been devoted to ensuring that large
language models (LLMs) align with human values and generate safe text. However,
an excessive focus on sensitivity to certain topics can compromise the model's
robustness in following instructions, thereby impacting its overall performance
in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily
focused on evaluating the safety of the models without considering their
robustness. In this paper, we propose a benchmark that assesses both the safety
and robustness of LLMs, emphasizing the need for a balanced approach. To
comprehensively study text safety and output robustness, we introduce a latent
jailbreak prompt dataset, in which each prompt embeds a malicious instruction.
Specifically, we instruct the model to complete a regular task, such as
translation, with the text to be translated containing malicious instructions.
To further analyze safety and robustness, we design a hierarchical annotation
framework. We present a systematic analysis of the safety and robustness of
LLMs regarding the position of explicit normal instructions, word replacements
(verbs in explicit normal instructions, target groups in malicious
instructions, cue words for explicit normal instructions), and instruction
replacements (different explicit normal instructions). Our results demonstrate
that current LLMs not only prioritize certain instruction verbs but also
exhibit varying jailbreak rates for different instruction verbs in explicit
normal instructions. Code and data are available at
https://github.com/qiuhuachuan/latent-jailbreak.
"
MasterKey: Automated Jailbreak Across Multiple Large Language Model  Chatbots,Gelei Deng,http://arxiv.org/pdf/2307.08715v2.pdf,2023-07-16,['cs.cr'],2307.08715v2.pdf,"  Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI)
services due to their exceptional proficiency in understanding and generating
human-like text. LLM chatbots, in particular, have seen widespread adoption,
transforming human-machine interactions. However, these LLM chatbots are
susceptible to ""jailbreak"" attacks, where malicious users manipulate prompts to
elicit inappropriate or sensitive responses, contravening service policies.
Despite existing attempts to mitigate such threats, our research reveals a
substantial gap in our understanding of these vulnerabilities, largely due to
the undisclosed defensive measures implemented by LLM service providers.
  In this paper, we present Jailbreaker, a comprehensive framework that offers
an in-depth understanding of jailbreak attacks and countermeasures. Our work
makes a dual contribution. First, we propose an innovative methodology inspired
by time-based SQL injection techniques to reverse-engineer the defensive
strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat.
This time-sensitive approach uncovers intricate details about these services'
defenses, facilitating a proof-of-concept attack that successfully bypasses
their mechanisms. Second, we introduce an automatic generation method for
jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of
automated jailbreak generation across various commercial LLM chatbots. Our
method achieves a promising average success rate of 21.58%, significantly
outperforming the effectiveness of existing techniques. We have responsibly
disclosed our findings to the concerned service providers, underscoring the
urgent need for more robust defenses. Jailbreaker thus marks a significant step
towards understanding and mitigating jailbreak threats in the realm of LLM
chatbots.
"
Using Large Language Models for Cybersecurity Capture-The-Flag  Challenges and Certification Questions,Wesley Tann,http://arxiv.org/pdf/2308.10443v1.pdf,2023-08-21,"['cs.ai', 'cs.cl', 'cs.cy']",2308.10443v1.pdf,"  The assessment of cybersecurity Capture-The-Flag (CTF) exercises involves
participants finding text strings or ``flags'' by exploiting system
vulnerabilities. Large Language Models (LLMs) are natural-language models
trained on vast amounts of words to understand and generate text; they can
perform well on many CTF challenges. Such LLMs are freely available to
students. In the context of CTF exercises in the classroom, this raises
concerns about academic integrity. Educators must understand LLMs' capabilities
to modify their teaching to accommodate generative AI assistance. This research
investigates the effectiveness of LLMs, particularly in the realm of CTF
challenges and questions. Here we evaluate three popular LLMs, OpenAI ChatGPT,
Google Bard, and Microsoft Bing. First, we assess the LLMs' question-answering
performance on five Cisco certifications with varying difficulty levels. Next,
we qualitatively study the LLMs' abilities in solving CTF challenges to
understand their limitations. We report on the experience of using the LLMs for
seven test cases in all five types of CTF challenges. In addition, we
demonstrate how jailbreak prompts can bypass and break LLMs' ethical
safeguards. The paper concludes by discussing the impact of LLMs on CTF
exercises and its implications.
"
Baseline Defenses for Adversarial Attacks Against Aligned Language  Models,Neel Jain,http://arxiv.org/pdf/2309.00614v2.pdf,2023-09-01,"['cs.lg', 'cs.cl', 'cs.cr']",2309.00614v2.pdf,"  As Large Language Models quickly become ubiquitous, it becomes critical to
understand their security vulnerabilities. Recent work shows that text
optimizers can produce jailbreaking prompts that bypass moderation and
alignment. Drawing from the rich body of work on adversarial machine learning,
we approach these attacks with three questions: What threat models are
practically useful in this domain? How do baseline defense techniques perform
in this new domain? How does LLM security differ from computer vision?
  We evaluate several baseline defense strategies against leading adversarial
attacks on LLMs, discussing the various settings in which each is feasible and
effective. Particularly, we look at three types of defenses: detection
(perplexity based), input preprocessing (paraphrase and retokenization), and
adversarial training. We examine white-box and gray-box settings and discuss
the robustness-performance trade-off for each of the defenses considered. We
find that the weakness of existing discrete optimizers for text, combined with
the relatively high costs of optimization, makes standard adaptive attacks more
challenging for LLMs. Future research will be needed to uncover whether more
powerful optimizers can be developed, or whether the strength of filtering and
preprocessing defenses is greater in the LLMs domain than it has been in
computer vision.
"
GPTFUZZER: Red Teaming Large Language Models with Auto-Generated  Jailbreak Prompts,Jiahao Yu,http://arxiv.org/pdf/2309.10253v2.pdf,2023-09-19,['cs.ai'],2309.10253v2.pdf,"  Large language models (LLMs) have recently experienced tremendous popularity
and are widely used from casual conversations to AI-driven programming.
However, despite their considerable success, LLMs are not entirely reliable and
can give detailed guidance on how to conduct harmful or illegal activities.
While safety measures can reduce the risk of such outputs, adversarial
jailbreak attacks can still exploit LLMs to produce harmful content. These
jailbreak templates are typically manually crafted, making large-scale testing
challenging.
  In this paper, we introduce GPTFuzz, a novel black-box jailbreak fuzzing
framework inspired by the AFL fuzzing framework. Instead of manual engineering,
GPTFuzz automates the generation of jailbreak templates for red-teaming LLMs.
At its core, GPTFuzz starts with human-written templates as initial seeds, then
mutates them to produce new templates. We detail three key components of
GPTFuzz: a seed selection strategy for balancing efficiency and variability,
mutate operators for creating semantically equivalent or similar sentences, and
a judgment model to assess the success of a jailbreak attack.
  We evaluate GPTFuzz against various commercial and open-source LLMs,
including ChatGPT, LLaMa-2, and Vicuna, under diverse attack scenarios. Our
results indicate that GPTFuzz consistently produces jailbreak templates with a
high success rate, surpassing human-crafted templates. Remarkably, GPTFuzz
achieves over 90% attack success rates against ChatGPT and Llama-2 models, even
with suboptimal initial seed templates. We anticipate that GPTFuzz will be
instrumental for researchers and practitioners in examining LLM robustness and
will encourage further exploration into enhancing LLM safety.
"
Probing LLMs for hate speech detection: strengths and vulnerabilities,Sarthak Roy,http://arxiv.org/pdf/2310.12860v2.pdf,2023-10-19,"['cs.cl', 'cs.cy']",2310.12860v2.pdf,"  Recently efforts have been made by social media platforms as well as
researchers to detect hateful or toxic language using large language models.
However, none of these works aim to use explanation, additional context and
victim community information in the detection process. We utilise different
prompt variations and input information and evaluate large language models in a
zero-shot setting (without adding any in-context examples). We select three large
language models (GPT-3.5, text-davinci and Flan-T5) and three datasets -
HateXplain, implicit hate and ToxicSpans. We find that on average including the
target information in the pipeline improves the model performance substantially
(~20-30%) over the baseline across the datasets. There is also a considerable
effect of adding the rationales/explanations into the pipeline (~10-20%) over
the baseline across the datasets. In addition, we further provide a typology of
the error cases where these large language models fail to (i) classify and (ii)
explain the reason for the decisions they take. Such vulnerable points
automatically constitute 'jailbreak' prompts for these models and industry
scale safeguard techniques need to be developed to make the models robust
against such prompts.
"
Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and  the Case of Information Extraction,Martin Josifoski,http://arxiv.org/pdf/2303.04132v2.pdf,2023-03-07,"['cs.cl', 'cs.ai', 'cs.lg']",2303.04132v2.pdf,"  Large language models (LLMs) have great potential for synthetic data
generation. This work shows that useful data can be synthetically generated
even for tasks that cannot be solved directly by LLMs: for problems with
structured outputs, it is possible to prompt an LLM to perform the task in the
reverse direction, by generating plausible input text for a target output
structure. Leveraging this asymmetry in task difficulty makes it possible to
produce large-scale, high-quality data for complex tasks. We demonstrate the
effectiveness of this approach on closed information extraction, where
collecting ground-truth data is challenging, and no satisfactory dataset exists
to date. We synthetically generate a dataset of 1.8M data points, establish its
superior quality compared to existing datasets in a human evaluation, and use
it to finetune small models (220M and 770M parameters), termed SynthIE, that
outperform the prior state of the art (with equal model size) by a substantial
margin of 57 absolute points in micro-F1 and 79 points in macro-F1. Code, data,
and models are available at https://github.com/epfl-dlab/SynthIE.
"
Small Language Models Improve Giants by Rewriting Their Outputs,Giorgos Vernikos,http://arxiv.org/pdf/2305.13514v1.pdf,2023-05-22,"['cs.cl', 'cs.lg']",2305.13514v1.pdf,"  Large language models (LLMs) have demonstrated impressive few-shot learning
capabilities, but they often underperform compared to fine-tuned models on
challenging tasks. Furthermore, their large size and restricted access only
through APIs make task-specific fine-tuning impractical. Moreover, LLMs are
sensitive to different aspects of prompts (e.g., the selection and order of
demonstrations) and can thus require time-consuming prompt engineering. In this
light, we propose a method to correct LLM outputs without relying on their
weights. First, we generate a pool of candidates by few-shot prompting an LLM.
Second, we refine the LLM-generated outputs using a smaller model, the
LM-corrector (LMCor), which is trained to rank, combine and rewrite the
candidates to produce the final target output. Our experiments demonstrate that
even a small LMCor model (250M) substantially improves the few-shot performance
of LLMs (62B) across diverse tasks. Moreover, we illustrate that the LMCor
exhibits robustness against different prompts, thereby minimizing the need for
extensive prompt engineering. Finally, we showcase that the LMCor can be
seamlessly integrated with different LLMs at inference time, serving as a
plug-and-play module to improve their performance.
"
Aligning Language Models to User Opinions,EunJeong Hwang,http://arxiv.org/pdf/2305.14929v1.pdf,2023-05-24,['cs.cl'],2305.14929v1.pdf,"  An important aspect of developing LLMs that interact with humans is to align
models' behavior to their users. It is possible to prompt an LLM into behaving
as a certain persona, especially a user group or ideological persona the model
captured during its pretraining stage. But how best to align an LLM with a
specific user and not a demographic or ideological group remains an open
question. Mining public opinion surveys (by Pew Research), we find that the
opinions of a user and their demographics and ideologies are not mutual
predictors. We use this insight to align LLMs by modeling both user opinions as
well as user demographics and ideology, achieving up to 7 points accuracy gains
in predicting public opinions from survey questions across a broad set of
topics. In addition to the typical approach of prompting LLMs with demographics
and ideology, we discover that utilizing the most relevant past opinions from
individual users enables the model to predict user opinions more accurately.
"
Marked Personas: Using Natural Language Prompts to Measure Stereotypes  in Language Models,Myra Cheng,http://arxiv.org/pdf/2305.18189v1.pdf,2023-05-29,"['cs.cl', 'cs.ai', 'cs.cy']",2305.18189v1.pdf,"  To recognize and mitigate harms from large language models (LLMs), we need to
understand the prevalence and nuances of stereotypes in LLM outputs. Toward
this end, we present Marked Personas, a prompt-based method to measure
stereotypes in LLMs for intersectional demographic groups without any lexicon
or data labeling. Grounded in the sociolinguistic concept of markedness (which
characterizes explicitly linguistically marked categories versus unmarked
defaults), our proposed method is twofold: 1) prompting an LLM to generate
personas, i.e., natural language descriptions, of the target demographic group
alongside personas of unmarked, default groups; 2) identifying the words that
significantly distinguish personas of the target group from corresponding
unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4
contain higher rates of racial stereotypes than human-written portrayals using
the same prompts. The words distinguishing personas of marked (non-white,
non-male) groups reflect patterns of othering and exoticizing these
demographics. An intersectional lens further reveals tropes that dominate
portrayals of marginalized groups, such as tropicalism and the
hypersexualization of minoritized women. These representational harms have
concerning implications for downstream applications like story generation.
"
Reranking for Natural Language Generation from Logical Forms: A Study  based on Large Language Models,Levon Haroutunian,http://arxiv.org/pdf/2309.12294v1.pdf,2023-09-21,['cs.cl'],2309.12294v1.pdf,"  Large language models (LLMs) have demonstrated impressive capabilities in
natural language generation. However, their output quality can be inconsistent,
posing challenges for generating natural language from logical forms (LFs).
This task requires the generated outputs to embody the exact semantics of LFs,
without missing any LF semantics or creating any hallucinations. In this work,
we tackle this issue by proposing a novel generate-and-rerank approach. Our
approach involves initially generating a set of candidate outputs by prompting
an LLM and subsequently reranking them using a task-specific reranker model. In
addition, we curate a manually collected dataset to evaluate the alignment
between different ranking metrics and human judgements. The chosen ranking
metrics are utilized to enhance the training and evaluation of the reranker
model. By conducting extensive experiments on three diverse datasets, we
demonstrate that the candidates selected by our reranker outperform those
selected by baseline methods in terms of semantic consistency and fluency, as
measured by three comprehensive metrics. Our findings provide strong evidence
for the effectiveness of our approach in improving the quality of generated
outputs.
"
Query Rewriting for Retrieval-Augmented Large Language Models,Xinbei Ma,http://arxiv.org/pdf/2305.14283v3.pdf,2023-05-23,['cs.cl'],2305.14283v3.pdf,"  Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
tasks. This work introduces a new framework, Rewrite-Retrieve-Read, in place of
the previous retrieve-then-read pipeline for retrieval-augmented LLMs, from the
perspective of query rewriting. Unlike prior studies focusing on adapting
either the retriever or the reader, our approach pays attention to the
adaptation of the search query itself, for there is inevitably a gap between
the input text and the needed knowledge in retrieval. We first prompt an LLM to
generate the query, then use a web search engine to retrieve contexts.
Furthermore, to better align the query to the frozen modules, we propose a
trainable scheme for our pipeline. A small language model is adopted as a
trainable rewriter to cater to the black-box LLM reader. The rewriter is
trained using the feedback of the LLM reader by reinforcement learning.
Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice
QA. Experimental results show consistent performance improvements, indicating
that our framework is effective and scalable, and offers a new approach
for retrieval-augmented LLMs.
"
ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers,Kexun Zhang,http://arxiv.org/pdf/2305.14591v2.pdf,2023-05-24,"['cs.cl', 'cs.se']",2305.14591v2.pdf,"  Large language models (LLMs) excel at implementing code from functionality
descriptions but struggle with algorithmic problems that require not only
implementation but also identification of the suitable algorithm. Moreover,
LLM-generated programs lack guaranteed correctness and require human
verification. To address these challenges, we propose ALGO, a framework that
synthesizes Algorithmic programs with LLM-Generated Oracles to guide the
generation and verify their correctness. ALGO first generates a reference
oracle by prompting an LLM to exhaustively enumerate all the combinations of
relevant variables. This oracle is then utilized to guide an arbitrary search
strategy in exploring the algorithm space and to verify the synthesized
algorithms. Our study shows that the LLM-generated oracles are correct for 88%
of the cases. With the oracles as verifiers, ALGO can be integrated with any
existing code generation model in a model-agnostic manner to enhance its
performance. Experiments show that when equipped with ALGO, we achieve an 8x
better one-submission pass rate over the Codex model and a 2.6x better
one-submission pass rate over CodeT, the current state-of-the-art model on
CodeContests. We also achieve a 1.3x better pass rate over the ChatGPT Code
Interpreter on unseen problems. The problem set we used for testing, the
prompts we used, the verifier and solution programs, and the test cases
generated by ALGO are available at https://github.com/zkx06111/ALGO.
"
PromptNER: Prompting For Named Entity Recognition,Dhananjay Ashok,http://arxiv.org/pdf/2305.15444v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.15444v2.pdf,"  In a surprising turn, Large Language Models (LLMs) together with a growing
arsenal of prompt-based heuristics now offer powerful off-the-shelf approaches
providing few-shot solutions to myriad classic NLP problems. However, despite
promising early results, these LLM-based few-shot methods remain far from the
state of the art in Named Entity Recognition (NER), where prevailing methods
include learning representations via end-to-end structural understanding and
fine-tuning on standard labeled corpora. In this paper, we introduce PromptNER,
a new state-of-the-art algorithm for few-shot and cross-domain NER. To adapt to
any new NER task, PromptNER requires a set of entity definitions in addition to
the standard few-shot examples. Given a sentence, PromptNER prompts an LLM to
produce a list of potential entities along with corresponding explanations
justifying their compatibility with the provided entity type definitions.
Remarkably, PromptNER achieves state-of-the-art performance on few-shot NER,
achieving a 4% (absolute) improvement in F1 score on the CoNLL dataset, a 9%
(absolute) improvement on the GENIA dataset, and a 4% (absolute) improvement on
the FewNERD dataset. PromptNER also moves the state of the art on Cross Domain
NER, outperforming prior methods (including those not limited to the few-shot
setting), setting a new mark on 3/5 CrossNER target domains, with an average F1
gain of 3%, despite using less than 2% of the available data.
"
Dcc --help: Generating Context-Aware Compiler Error Explanations with  Large Language Models,Andrew Taylor,http://arxiv.org/pdf/2308.11873v2.pdf,2023-08-23,"['cs.se', 'cs.lg', 'cs.pl']",2308.11873v2.pdf,"  In the challenging field of introductory programming, high enrollments and
failure rates drive us to explore tools and systems to enhance student
outcomes, especially automated tools that scale to large cohorts. This paper
presents and evaluates the dcc --help tool, an integration of a Large Language
Model (LLM) into the Debugging C Compiler (DCC) to generate unique,
novice-focused explanations tailored to each error. dcc --help prompts an LLM
with contextual information of compile- and run-time error occurrences,
including the source code, error location and standard compiler error message.
The LLM is instructed to generate novice-focused, actionable error explanations
and guidance, designed to help students understand and resolve problems without
providing solutions. dcc --help was deployed to our CS1 and CS2 courses, with
2,565 students using the tool over 64,000 times in ten weeks. We analysed a
subset of these error/explanation pairs to evaluate their properties, including
conceptual correctness, relevancy, and overall quality. We found that the
LLM-generated explanations were conceptually accurate in 90% of compile-time
and 75% of run-time cases, but often disregarded the instruction not to provide
solutions in code. Our findings, observations and reflections following
deployment indicate that dcc --help provides novel opportunities for scaffolding
students' introduction to programming.
"
BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment  of Continuation Writing,Chen Wang,http://arxiv.org/pdf/2309.00916v1.pdf,2023-09-02,"['cs.cl', 'cs.sd', 'eess.as']",2309.00916v1.pdf,"  The emergence of large language models (LLMs) has sparked significant
interest in extending their remarkable language capabilities to speech.
However, modality alignment between speech and text still remains an open
problem. Current solutions can be categorized into two strategies. One is a
cascaded approach where outputs (tokens or states) of a separately trained
speech recognition system are used as inputs for LLMs, which limits their
potential in modeling alignment between speech and text. The other is an
end-to-end approach that relies on speech instruction data, which is very
difficult to collect in large quantities. In this paper, we address these
issues and propose the BLSP approach that Bootstraps Language-Speech
Pre-training via behavior alignment of continuation writing. We achieve this by
learning a lightweight modality adapter between a frozen speech encoder and an
LLM, ensuring that the LLM exhibits the same generation behavior regardless of
the modality of input: a speech segment or its transcript. The training process
can be divided into two steps. The first step prompts an LLM to generate texts
with speech transcripts as prefixes, obtaining text continuations. In the
second step, these continuations are used as supervised signals to train the
modality adapter in an end-to-end manner. We demonstrate that this
straightforward process can extend the capabilities of LLMs to speech, enabling
speech recognition, speech translation, spoken language understanding, and
speech conversation, even in zero-shot cross-lingual scenarios.
"
Balanced and Explainable Social Media Analysis for Public Health with  Large Language Models,Yan Jiang,http://arxiv.org/pdf/2309.05951v1.pdf,2023-09-12,['cs.cl'],2309.05951v1.pdf,"  As social media becomes increasingly popular, more and more public health
activities emerge, which is worth noting for pandemic monitoring and government
decision-making. Current techniques for public health analysis involve popular
models such as BERT and large language models (LLMs). Although recent progress
in LLMs has shown a strong ability to comprehend knowledge by being fine-tuned
on specific domain datasets, the costs of training an in-domain LLM for every
specific public health task are especially expensive. Furthermore, such
in-domain datasets from social media are generally highly imbalanced, which
hinders the efficiency of LLM tuning. To tackle these challenges, the data
imbalance issue can be overcome by sophisticated data augmentation methods for
social media datasets. In addition, the ability of the LLMs can be effectively
utilised by prompting the model properly. In light of the above discussion, in
this paper, a novel ALEX framework is proposed for social media analysis on
public health. Specifically, an augmentation pipeline is developed to resolve
the data imbalance issue. Furthermore, an LLMs explanation mechanism is
proposed by prompting an LLM with the predicted results from BERT models.
Extensive experiments conducted on three tasks at the Social Media Mining for
Health 2023 (SMM4H) competition, where ALEX ranked first in two tasks, demonstrate
the superior performance of the proposed ALEX method. Our code has been
released in https://github.com/YanJiangJerry/ALEX.
"
HowToCaption: Prompting LLMs to Transform Video Annotations at Scale,Nina Shvetsova,http://arxiv.org/pdf/2310.04900v1.pdf,2023-10-07,['cs.cv'],2310.04900v1.pdf,"  Instructional videos are an excellent source for learning multimodal
representations by leveraging video-subtitle pairs extracted with automatic
speech recognition systems (ASR) from the audio signal in the videos. However,
in contrast to human-annotated captions, both speech and subtitles naturally
differ from the visual content of the videos and thus provide only noisy
supervision for multimodal learning. As a result, large-scale annotation-free
web video training data remains sub-optimal for training text-video models. In
this work, we propose to leverage the capability of large language models
(LLMs) to obtain fine-grained video descriptions aligned with videos.
Specifically, we prompt an LLM to create plausible video descriptions based on
ASR narrations of the video for a large-scale instructional video dataset. To
this end, we introduce a prompting method that is able to take into account a
longer text of subtitles, allowing us to capture context beyond a single
sentence. To align the captions to the video temporally, we prompt the LLM to
generate timestamps for each produced caption based on the subtitles. In this
way, we obtain human-style video captions at scale without human supervision.
We apply our method to the subtitles of the HowTo100M dataset, creating a new
large-scale dataset, HowToCaption. Our evaluation shows that the resulting
captions not only significantly improve the performance over many different
benchmark datasets for text-video retrieval but also lead to a disentangling of
textual narration from the audio, boosting performance in text-video-audio
tasks.
"
ClarifyGPT: Empowering LLM-based Code Generation with Intention  Clarification,Fangwen Mu,http://arxiv.org/pdf/2310.10996v1.pdf,2023-10-17,['cs.se'],2310.10996v1.pdf,"  We introduce a novel framework named ClarifyGPT, which aims to enhance code
generation by empowering LLMs with the ability to identify ambiguous
requirements and ask targeted clarifying questions. In particular, ClarifyGPT
first detects whether a given requirement is ambiguous by performing a code
consistency check. If it is ambiguous, ClarifyGPT prompts an LLM to generate
targeted clarifying questions. After receiving question responses, ClarifyGPT
refines the ambiguous requirement and inputs it into the same LLM to generate a
final code solution. To evaluate our ClarifyGPT, we first conduct a human
evaluation involving ten participants who use ClarifyGPT for code generation on
two publicly available benchmarks: MBPP-sanitized and MBPP-ET. The results show
that ClarifyGPT elevates the performance (Pass@1) of GPT-4 from 70.96% to
80.80% on MBPP-sanitized. Furthermore, to perform large-scale automated
evaluations of ClarifyGPT across different LLMs and benchmarks without
requiring user participation, we introduce a high-fidelity simulation method to
simulate user responses. The automated evaluation results also demonstrate that
ClarifyGPT can significantly enhance code generation performance compared to
the baselines. In particular, ClarifyGPT improves the average performance of
GPT-4 and ChatGPT across four benchmarks from 68.02% to 75.75% and from 58.55%
to 67.22%, respectively. We believe that ClarifyGPT can effectively facilitate
the practical application of LLMs in real-world development environments.
"
Harnessing Explanations: LLM-to-LM Interpreter for Enhanced  Text-Attributed Graph Representation Learning,Xiaoxin He,http://arxiv.org/pdf/2305.19523v3.pdf,2023-05-31,['cs.lg'],2305.19523v3.pdf,"  Representation learning on text-attributed graphs (TAGs) has become a
critical research problem in recent years. A typical example of a TAG is a
paper citation graph, where the text of each paper serves as node attributes.
Initial graph neural network (GNN) pipelines handled these text attributes by
transforming them into shallow or hand-crafted features, such as skip-gram or
bag-of-words features. Recent efforts have focused on enhancing these pipelines
with language models (LMs), which typically demand intricate designs and
substantial computational resources. With the advent of powerful large language
models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and
to utilize general knowledge, there is a growing need for techniques which
combine the textual modelling abilities of LLMs with the structural learning
capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to
capture textual information as features, which can be used to boost GNN
performance on downstream tasks. A key innovation is our use of explanations as
features: we prompt an LLM to perform zero-shot classification, request textual
explanations for its decision-making process, and design an LLM-to-LM
interpreter to translate these explanations into informative features that
enhance downstream GNNs. Our experiments demonstrate that our method achieves
state-of-the-art results on well-established TAG datasets, including Cora,
PubMed, ogbn-arxiv, as well as our newly introduced dataset, arXiv-2023.
Furthermore, our method significantly speeds up training, achieving a 2.88
times improvement over the closest baseline on ogbn-arxiv. Lastly, we believe
the versatility of the proposed method extends beyond TAGs and holds the
potential to enhance other tasks involving graph-text data. Our codes
and datasets are available at: https://github.com/XiaoxinHe/TAPE.
"
LEGO-Prover: Neural Theorem Proving with Growing Libraries,Haiming Wang,http://arxiv.org/pdf/2310.00656v3.pdf,2023-10-01,['cs.ai'],2310.00656v3.pdf,"  Despite the success of large language models (LLMs), the task of theorem
proving still remains one of the hardest reasoning tasks that is far from being
fully solved. Prior methods using language models have demonstrated promising
results, but they still struggle to prove even middle school level theorems.
One common limitation of these methods is that they assume a fixed theorem
library during the whole theorem proving process. However, as we all know,
creating new useful theorems or even new theories is not only helpful but
crucial and necessary for advancing mathematics and proving harder and deeper
results. In this work, we present LEGO-Prover, which employs a growing skill
library containing verified lemmas as skills to augment the capability of LLMs
used in theorem proving. By constructing the proof modularly, LEGO-Prover
enables LLMs to utilize existing skills retrieved from the library and to
create new skills during the proving process. These skills are further evolved
(by prompting an LLM) to enrich the library on another scale. Modular and
reusable skills are constantly added to the library to enable tackling
increasingly intricate mathematical problems. Moreover, the learned library
further bridges the gap between human proofs and formal proofs by making it
easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass
rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%).
During the proving process, LEGO-Prover also manages to generate over 20,000
skills (theorems/lemmas) and adds them to the growing library. Our ablation
study indicates that these newly added skills are indeed helpful for proving
theorems, resulting in an improvement from a success rate of 47.1% to 50.4%. We
also release our code and all the generated skills.
"
BooookScore: A systematic exploration of book-length summarization in  the era of LLMs,Yapei Chang,http://arxiv.org/pdf/2310.00785v2.pdf,2023-10-01,"['cs.cl', 'cs.ai', 'cs.lg']",2310.00785v2.pdf,"  Summarizing book-length documents (>100K tokens) that exceed the context
window size of large language models (LLMs) requires first breaking the input
document into smaller chunks and then prompting an LLM to merge, update, and
compress chunk-level summaries. Despite the complexity and importance of this
task, it has yet to be meaningfully studied due to the challenges of
evaluation: existing book-length summarization datasets (e.g., BookSum) are in
the pretraining data of most public LLMs, and existing evaluation methods
struggle to capture errors made by modern LLM summarizers. In this paper, we
present the first study of the coherence of LLM-based book-length summarizers
implemented via two prompting workflows: (1) hierarchically merging chunk-level
summaries, and (2) incrementally updating a running summary. We obtain 1193
fine-grained human annotations on GPT-4 generated summaries of 100
recently-published books and identify eight common types of coherence errors
made by LLMs. Because human evaluation is expensive and time-consuming, we
develop an automatic metric, BooookScore, that measures the proportion of
sentences in a summary that do not contain any of the identified error types.
BooookScore has high agreement with human annotations and allows us to
systematically evaluate the impact of many other critical parameters (e.g.,
chunk size, base LLM) while saving $15K and 500 hours in human evaluation
costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce
summaries with higher BooookScore than the oft-repetitive ones generated by
LLaMA 2. Incremental updating yields lower BooookScore but a higher level of
detail than hierarchical merging, a trade-off sometimes preferred by human
annotators. We release code and annotations after blind review to spur more
principled research on book-length summarization.
"
The Unreliability of Explanations in Few-shot Prompting for Textual  Reasoning,Xi Ye,http://arxiv.org/pdf/2205.03401v2.pdf,2022-05-06,['cs.cl'],2205.03401v2.pdf,"  Does prompting a large language model (LLM) like GPT-3 with explanations
improve in-context learning? We study this question on two NLP tasks that
involve reasoning over text, namely question answering and natural language
inference. We test the performance of four LLMs on three textual reasoning
datasets using prompts that include explanations in multiple different styles.
For these tasks, we find that including explanations in the prompts for OPT,
GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to
moderate accuracy improvements over standard few-shot learning. However,
text-davinci-002 is able to benefit more substantially.
  We further show that explanations generated by the LLMs may not entail the
models' predictions nor be factually grounded in the input, even on simple
tasks with extractive explanations. However, these flawed explanations can
still be useful as a way to verify LLMs' predictions post-hoc. Through analysis
in our three settings, we show that explanations judged by humans to be
good (logically consistent with the input and the prediction) are more likely to
co-occur with accurate predictions. Following these observations, we train
calibrators using automatically extracted scores that assess the reliability of
explanations, allowing us to improve performance post-hoc across all of our
datasets.
"
Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large  Language Models,Albert Xu,http://arxiv.org/pdf/2211.15718v2.pdf,2022-11-28,['cs.cl'],2211.15718v2.pdf,"  In many task settings, text classification models are likely to encounter
examples from novel classes on which they cannot predict correctly. Selective
prediction, in which models abstain on low-confidence examples, provides a
possible solution, but existing models are often overly confident on unseen
classes. To remedy this overconfidence, we introduce Contrastive
Novelty-Augmented Learning (CoNAL), a two-step method that generates OOD
examples representative of novel classes, then trains to decrease confidence on
them. First, we generate OOD examples by prompting a large language model
twice: we prompt it to enumerate relevant novel classes, then generate examples
from each novel class matching the task format. Second, we train a classifier
with a novel contrastive objective that encourages lower confidence on
generated OOD examples than training examples. When trained with CoNAL,
classifiers improve in their ability to detect and abstain on novel class
examples over prior methods by an average of 2.3% in terms of accuracy under
the accuracy-coverage curve (AUAC) and 5.5% AUROC across 4 NLP datasets, with
no cost to in-distribution accuracy.
"
Extensible Prompts for Language Models,Tao Ge,http://arxiv.org/pdf/2212.00616v1.pdf,2022-12-01,['cs.cl'],2212.00616v1.pdf,"  We propose eXtensible Prompt (X-Prompt) for prompting a large language model
(LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL
but also an extensible vocabulary of imaginary words that are introduced to
help represent what NL words hardly describe, allowing a prompt to be more
descriptive. Like NL prompts, X-Prompt is out-of-distribution (OOD) robust, for
which we propose context-guided learning with prompt augmentation to learn its
imaginary words for general usability, enabling them to be used in different prompt
contexts for fine-grained specifications. The promising results of X-Prompt
demonstrate its potential for enabling advanced interaction between humans
and LLMs to bridge their communication gap.
"
Reward Design with Language Models,Minae Kwon,http://arxiv.org/pdf/2303.00001v1.pdf,2023-02-27,"['cs.lg', 'cs.ai', 'cs.cl']",2303.00001v1.pdf,"  Reward design in reinforcement learning (RL) is challenging since specifying
human notions of desired behavior may be difficult via reward functions or
require many expert demonstrations. Can we instead cheaply design rewards using
a natural language interface? This paper explores how to simplify reward design
by prompting a large language model (LLM) such as GPT-3 as a proxy reward
function, where the user provides a textual prompt containing a few examples
(few-shot) or a description (zero-shot) of the desired behavior. Our approach
leverages this proxy reward function in an RL framework. Specifically, users
specify a prompt once at the beginning of training. During training, the LLM
evaluates an RL agent's behavior against the desired behavior described by the
prompt and outputs a corresponding reward signal. The RL agent then uses this
reward to update its behavior. We evaluate whether our approach can train
agents aligned with user objectives in the Ultimatum Game, matrix games, and
the DealOrNoDeal negotiation task. In all three tasks, we show that RL agents
trained with our framework are well-aligned with the user's objectives and
outperform RL agents trained with reward functions learned via supervised
learning.
"
Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy  Planning,Xiao Yu,http://arxiv.org/pdf/2305.13660v2.pdf,2023-05-23,['cs.cl'],2305.13660v2.pdf,"  Planning for goal-oriented dialogue often requires simulating future dialogue
interactions and estimating task progress. Many approaches thus consider
training neural networks to perform look-ahead search algorithms such as A*
search and Monte Carlo Tree Search (MCTS). However, this training often
requires abundant annotated data, which creates challenges when faced with
noisy annotations or low-resource settings. We introduce GDP-Zero, an approach
using Open-Loop MCTS to perform goal-oriented dialogue policy planning without
any model training. GDP-Zero prompts a large language model to act as a policy
prior, value function, user simulator, and system model during the tree search.
We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that
its responses are preferred over ChatGPT up to 59.32% of the time, and are
rated more persuasive than ChatGPT during interactive evaluations.
"
IDAS: Intent Discovery with Abstractive Summarization,Maarten De Raedt,http://arxiv.org/pdf/2305.19783v1.pdf,2023-05-31,['cs.cl'],2305.19783v1.pdf,"  Intent discovery is the task of inferring latent intents from a set of
unlabeled utterances, and is a useful step towards the efficient creation of
new conversational agents. We show that recent competitive methods in intent
discovery can be outperformed by clustering utterances based on abstractive
summaries, i.e., ""labels"", that retain the core elements while removing
non-essential information. We contribute the IDAS approach, which collects a
set of descriptive utterance labels by prompting a Large Language Model,
starting from a well-chosen seed set of prototypical utterances, to bootstrap
an In-Context Learning procedure to generate labels for non-prototypical
utterances. The utterances and their resulting noisy labels are then encoded by
a frozen pre-trained encoder, and subsequently clustered to recover the latent
intents. For the unsupervised task (without any intent labels) IDAS outperforms
the state-of-the-art by up to +7.42% in standard cluster metrics for the
Banking, StackOverflow, and Transport datasets. For the semi-supervised task
(with labels for a subset of intents) IDAS surpasses 2 recent methods on the
CLINC benchmark without even using labeled data.
"
Prompting a Large Language Model to Generate Diverse Motivational  Messages: A Comparison with Human-Written Messages,Samuel Rhys Cox,http://arxiv.org/pdf/2308.13479v1.pdf,2023-08-25,"['cs.cl', 'cs.hc']",2308.13479v1.pdf,"  Large language models (LLMs) are increasingly capable and prevalent, and can
be used to produce creative content. The quality of content is influenced by
the prompt used, with more specific prompts that incorporate examples generally
producing better results. Following from this, instructions written for
crowdsourcing tasks (which are specific and include examples to guide workers)
could serve as effective LLM prompts. To explore this,
we used a previous crowdsourcing pipeline that gave examples to people to help
them generate a collectively diverse corpus of motivational messages. We then
used this same pipeline to generate messages using GPT-4, and compared the
collective diversity of messages from: (1) crowd-writers, (2) GPT-4 using the
pipeline, and (3 & 4) two baseline GPT-4 prompts. We found that the LLM prompts
using the crowdsourcing pipeline caused GPT-4 to produce more diverse messages
than the two baseline prompts. We also discuss implications from messages
generated by both human writers and LLMs.
"
Social Simulacra: Creating Populated Prototypes for Social Computing  Systems,Joon Sung Park,http://arxiv.org/pdf/2208.04024v1.pdf,2022-08-08,['cs.hc'],2208.04024v1.pdf,"  Social computing prototypes probe the social behaviors that may arise in an
envisioned system design. This prototyping practice is currently limited to
recruiting small groups of people. Unfortunately, many challenges do not arise
until a system is populated at a larger scale. Can a designer understand how a
social system might behave when populated, and make adjustments to the design
before the system falls prey to such challenges? We introduce social simulacra,
a prototyping technique that generates a breadth of realistic social
interactions that may emerge when a social computing system is populated.
Social simulacra take as input the designer's description of a community's
design -- goal, rules, and member personas -- and produce as output an instance
of that design with simulated behavior, including posts, replies, and
anti-social behaviors. We demonstrate that social simulacra shift the behaviors
that they generate appropriately in response to design changes, and that they
enable exploration of ""what if?"" scenarios where community members or
moderators intervene. To power social simulacra, we contribute techniques for
prompting a large language model to generate thousands of distinct community
members and their social interactions with each other; these techniques are
enabled by the observation that large language models' training data already
includes a wide variety of positive and negative behavior on social media
platforms. In evaluations, we show that participants are often unable to
distinguish social simulacra from actual community behavior and that social
computing designers successfully refine their social computing designs when
using social simulacra.
"
Generate rather than Retrieve: Large Language Models are Strong Context  Generators,Wenhao Yu,http://arxiv.org/pdf/2209.10063v3.pdf,2022-09-21,"['cs.cl', 'cs.ai']",2209.10063v3.pdf,"  Knowledge-intensive tasks, such as open-domain question answering (QA),
require access to a large amount of world or domain knowledge. A common
approach for knowledge-intensive tasks is to employ a retrieve-then-read
pipeline that first retrieves a handful of relevant contextual documents from
an external corpus such as Wikipedia and then predicts an answer conditioned on
the retrieved documents. In this paper, we present a novel perspective for
solving knowledge-intensive tasks by replacing document retrievers with large
language model generators. We call our method generate-then-read (GenRead),
which first prompts a large language model to generate contextual documents
based on a given question, and then reads the generated documents to produce
the final answer. Furthermore, we propose a novel clustering-based prompting
method that selects distinct prompts, resulting in generated documents that
cover different perspectives, leading to better recall over acceptable answers.
We conduct extensive experiments on three different knowledge-intensive tasks,
including open-domain QA, fact checking, and dialogue system. Notably, GenRead
achieves 71.6 and 54.4 exact match scores on TriviaQA and WebQ, significantly
outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0
and +3.9, without retrieving any documents from any external knowledge source.
Lastly, we demonstrate the model performance can be further improved by
combining retrieval and generation. Our code and generated documents can be
found at https://github.com/wyu97/GenRead.
"
q2d: Turning Questions into Dialogs to Teach Models How to Search,Yonatan Bitton,http://arxiv.org/pdf/2304.14318v1.pdf,2023-04-27,['cs.cl'],2304.14318v1.pdf,"  One of the exciting capabilities of recent language models for dialog is
their ability to independently search for relevant information to ground a
given dialog response. However, obtaining training data to teach models how to
issue search queries is time and resource consuming. In this work, we propose
q2d: an automatic data generation pipeline that generates information-seeking
dialogs from questions. We prompt a large language model (PaLM) to create
conversational versions of question answering datasets, and use it to improve
query generation models that communicate with external search APIs to ground
dialog responses. Unlike previous approaches which relied on human written
dialogs with search queries, our method allows us to automatically generate
query-based grounded dialogs with better control and scale. Our experiments
demonstrate that: (1) For query generation on the QReCC dataset, models trained
on our synthetically-generated data achieve 90%--97% of the performance of
models trained on the human-generated data; (2) We can successfully generate
data for training dialog models in new domains without any existing dialog data
as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets. (3) We
perform a thorough analysis of the generated dialogs showing that humans find
them of high quality and struggle to distinguish them from human-written
dialogs.
"
Multi-Modal Classifiers for Open-Vocabulary Object Detection,Prannay Kaul,http://arxiv.org/pdf/2306.05493v1.pdf,2023-06-08,"['cs.cv', 'cs.ai', 'cs.lg', 'i.4.6; i.4.8; i.4.9; i.2.10']",2306.05493v1.pdf,"  The goal of this paper is open-vocabulary object detection (OVOD)
$\unicode{x2013}$ building a model that can detect objects beyond the set of
categories seen at training, thus enabling the user to specify categories of
interest at inference without the need for model retraining. We adopt a
standard two-stage object detector architecture, and explore three ways for
specifying novel categories: via language descriptions, via image exemplars, or
via a combination of the two. We make three contributions: first, we prompt a
large language model (LLM) to generate informative language descriptions for
object classes, and construct powerful text-based classifiers; second, we
employ a visual aggregator on image exemplars that can ingest any number of
images as input, forming vision-based classifiers; and third, we provide a
simple method to fuse information from language descriptions and image
exemplars, yielding a multi-modal classifier. When evaluating on the
challenging LVIS open-vocabulary benchmark we demonstrate that: (i) our
text-based classifiers outperform all previous OVOD works; (ii) our
vision-based classifiers perform as well as text-based classifiers in prior
work; (iii) our multi-modal classifiers perform better than either modality
alone; and finally, (iv) our text-based and multi-modal classifiers yield
better performance than a fully-supervised detector.
"
InstructEval: Systematic Evaluation of Instruction Selection Methods,Anirudh Ajith,http://arxiv.org/pdf/2307.00259v2.pdf,2023-07-01,"['cs.cl', 'cs.ai']",2307.00259v2.pdf,"  In-context learning (ICL) performs tasks by prompting a large language model
(LLM) using an instruction and a small set of annotated examples called
demonstrations. Recent work has shown that precise details of the inputs used
in the ICL prompt significantly impact performance, which has incentivized
instruction selection algorithms. The effect of instruction-choice however is
severely underexplored, with existing analyses restricted to shallow subsets of
models and tasks, limiting the generalizability of their insights. We develop
InstructEval, an ICL evaluation suite to conduct a thorough assessment of these
techniques. The suite includes 13 open-sourced LLMs of varying scales from four
model families, and covers nine tasks across three categories. Using the suite,
we evaluate the relative performance of seven popular instruction selection
methods over five metrics relevant to ICL. Our experiments reveal that using
curated manually-written instructions or simple instructions without any
task-specific descriptions often elicits better overall ICL performance than
automatic instruction-induction methods, pointing to a lack of
generalizability among the latter. We release our evaluation suite for
benchmarking instruction selection approaches and enabling more generalizable
methods in this space.
"
Prompt Injection Attacks and Defenses in LLM-Integrated Applications,Yupei Liu,http://arxiv.org/pdf/2310.12815v1.pdf,2023-10-19,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg']",2310.12815v1.pdf,"  Large Language Models (LLMs) are increasingly deployed as the backend for a
variety of real-world applications called LLM-Integrated Applications. Multiple
recent works showed that LLM-Integrated Applications are vulnerable to prompt
injection attacks, in which an attacker injects malicious instruction/data into
the input of those applications such that they produce results as the attacker
desires. However, existing works are limited to case studies. As a result, the
literature lacks a systematic understanding of prompt injection attacks and
their defenses. We aim to bridge the gap in this work. In particular, we
propose a general framework to formalize prompt injection attacks. Existing
attacks, which are discussed in research papers and blog posts, are special
cases in our framework. Our framework enables us to design a new attack by
combining existing attacks. Moreover, we also propose a framework to
systematize defenses against prompt injection attacks. Using our frameworks, we
conduct a systematic evaluation on prompt injection attacks and their defenses
with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in
this field. Our code is available at
https://github.com/liu00222/Open-Prompt-Injection.
"
Prompt Injection attack against LLM-integrated Applications,Yi Liu,http://arxiv.org/pdf/2306.05499v1.pdf,2023-06-08,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.se']",2306.05499v1.pdf,"  Large Language Models (LLMs), renowned for their superior proficiency in
language comprehension and generation, stimulate a vibrant ecosystem of
applications around them. However, their extensive assimilation into various
services introduces significant security risks. This study deconstructs the
complexities and implications of prompt injection attacks on actual
LLM-integrated applications. Initially, we conduct an exploratory analysis on
ten commercial applications, highlighting the constraints of current attack
strategies in practice. Prompted by these limitations, we subsequently
formulate HouYi, a novel black-box prompt injection attack technique, which
draws inspiration from traditional web injection attacks. HouYi is
compartmentalized into three crucial elements: a seamlessly-incorporated
pre-constructed prompt, an injection prompt inducing context partition, and a
malicious payload designed to fulfill the attack objectives. Leveraging HouYi,
we unveil previously unknown and severe attack outcomes, such as unrestricted
arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi
on 36 actual LLM-integrated applications and discern 31 applications
susceptible to prompt injection. Ten vendors, including Notion, have validated
our discoveries, which have the potential to impact millions of users. Our
investigation illuminates both the possible risks of prompt injection attacks
and the possible tactics for mitigation.
"
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game,Sam Toyer,http://arxiv.org/pdf/2311.01011v1.pdf,2023-11-02,"['cs.lg', 'cs.cr']",2311.01011v1.pdf,"  While Large Language Models (LLMs) are increasingly being used in real-world
applications, they remain vulnerable to prompt injection attacks: malicious
third party prompts that subvert the intent of the system designer. To help
researchers study this problem, we present a dataset of over 126,000 prompt
injection attacks and 46,000 prompt-based ""defenses"" against prompt injection,
all created by players of an online game called Tensor Trust. To the best of
our knowledge, this is currently the largest dataset of human-generated
adversarial examples for instruction-following LLMs. The attacks in our dataset
have a lot of easily interpretable structure, and shed light on the weaknesses
of LLMs. We also use the dataset to create a benchmark for resistance to two
types of prompt injection, which we refer to as prompt extraction and prompt
hijacking. Our benchmark results show that many models are vulnerable to the
attack strategies in the Tensor Trust dataset. Furthermore, we show that some
attack strategies from the dataset generalize to deployed LLM-based
applications, even though they have a very different set of constraints from the
game. We release all data and source code at https://tensortrust.ai/paper
"
Not what you've signed up for: Compromising Real-World LLM-Integrated  Applications with Indirect Prompt Injection,Kai Greshake,http://arxiv.org/pdf/2302.12173v2.pdf,2023-02-23,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.cy']",2302.12173v2.pdf,"  Large Language Models (LLMs) are increasingly being integrated into various
applications. The functionalities of recent LLMs can be flexibly modulated via
natural language prompts. This renders them susceptible to targeted adversarial
prompting, e.g., Prompt Injection (PI) attacks enable attackers to override
original instructions and employed controls. So far, it was assumed that the
user is directly prompting the LLM. But, what if it is not the user prompting?
We argue that LLM-Integrated Applications blur the line between data and
instructions. We reveal new attack vectors, using Indirect Prompt Injection,
that enable adversaries to remotely (without a direct interface) exploit
LLM-integrated applications by strategically injecting prompts into data likely
to be retrieved. We derive a comprehensive taxonomy from a computer security
perspective to systematically investigate impacts and vulnerabilities,
including data theft, worming, information ecosystem contamination, and other
novel security risks. We demonstrate our attacks' practical viability against
both real-world systems, such as Bing's GPT-4 powered Chat and code-completion
engines, and synthetic applications built on GPT-4. We show how processing
retrieved prompts can act as arbitrary code execution, manipulate the
application's functionality, and control how and if other APIs are called.
Despite the increasing integration and reliance on LLMs, effective mitigations
of these emerging threats are currently lacking. By raising awareness of these
vulnerabilities and providing key insights into their implications, we aim to
promote the safe and responsible deployment of these powerful models and the
development of robust defenses that protect users and systems from potential
attacks.
"
From Prompt Injections to SQL Injection Attacks: How Protected is Your  LLM-Integrated Web Application?,Rodrigo Pedro,http://arxiv.org/pdf/2308.01990v3.pdf,2023-08-03,['cs.cr'],2308.01990v3.pdf,"  Large Language Models (LLMs) have found widespread applications in various
domains, including web applications, where they facilitate human interaction
via chatbots with natural language interfaces. Internally, aided by an
LLM-integration middleware such as Langchain, user prompts are translated into
SQL queries used by the LLM to provide meaningful responses to users. However,
unsanitized user prompts can lead to SQL injection attacks, potentially
compromising the security of the database. Despite the growing interest in
prompt injection vulnerabilities targeting LLMs, the specific risks of
generating SQL injection attacks through prompt injections have not been
extensively studied. In this paper, we present a comprehensive examination of
prompt-to-SQL (P$_2$SQL) injections targeting web applications based on the
Langchain framework. Using Langchain as our case study, we characterize
P$_2$SQL injections, exploring their variants and impact on application
security through multiple concrete examples. Furthermore, we evaluate 7
state-of-the-art LLMs, demonstrating the pervasiveness of P$_2$SQL attacks
across language models. Our findings indicate that LLM-integrated applications
based on Langchain are highly susceptible to P$_2$SQL injection attacks,
warranting the adoption of robust defenses. To counter these attacks, we
propose four effective defense techniques that can be integrated as extensions
to the Langchain framework. We validate the defenses through an experimental
evaluation with a real-world use case application.
"
Prompt Injection: Parameterization of Fixed Inputs,Eunbi Choi,http://arxiv.org/pdf/2206.11349v2.pdf,2022-05-31,"['cs.lg', 'cs.ai', 'cs.cl']",2206.11349v2.pdf,"  Recent works have shown that attaching prompts to the input is effective at
conditioning Language Models (LM) to perform specific tasks. However, prompts
are always included in the input text during inference, thus incurring
substantial computational and memory overhead. Also, there is currently no
straightforward method of utilizing prompts that are longer than the maximum
input length of the LMs without incurring additional costs during inference. We
propose Prompt Injection (PI), a novel formulation of injecting the prompt into
the parameters of an LM to be an efficient alternative to attaching fixed
prompts to the input. We show that in scenarios with long fixed prompts, PI can
be up to 280 times more efficient in terms of total FLOPs than previous
approaches. We further explore methodologies for PI and show promising results
in persona-dependent conversation, semantic parsing, and zero-shot learning
with task instructions. Through these explorations, we show that PI can be a
promising direction for conditioning language models, especially in scenarios
with long and fixed prompts.
"
Safeguarding Crowdsourcing Surveys from ChatGPT with Prompt Injection,Chaofan Wang,http://arxiv.org/pdf/2306.08833v1.pdf,2023-06-15,['cs.hc'],2306.08833v1.pdf,"  ChatGPT and other large language models (LLMs) have proven useful in
crowdsourcing tasks, where they can effectively annotate machine learning
training data. However, this means that they also have the potential for
misuse, specifically to automatically answer surveys. LLMs can potentially
circumvent quality assurance measures, thereby threatening the integrity of
methodologies that rely on crowdsourcing surveys. In this paper, we propose a
mechanism to detect LLM-generated responses to surveys. The mechanism uses
""prompt injection"", such as directions that can mislead LLMs into giving
predictable responses. We evaluate our technique against a range of question
scenarios, types, and positions, and find that it can reliably detect
LLM-generated responses with more than 93% effectiveness. We also provide an
open-source software to help survey designers use our technique to detect LLM
responses. Our work is a step in ensuring that survey methodologies remain
rigorous vis-a-vis LLMs.
"
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt  Injection,Jun Yan,http://arxiv.org/pdf/2307.16888v2.pdf,2023-07-31,"['cs.cl', 'cs.cr', 'cs.lg']",2307.16888v2.pdf,"  Instruction-tuned Large Language Models (LLMs) have demonstrated remarkable
abilities to modulate their responses based on human instructions. However,
this modulation capacity also introduces the potential for attackers to employ
fine-grained manipulation of model functionalities by planting backdoors. In
this paper, we introduce Virtual Prompt Injection (VPI) as a novel backdoor
attack setting tailored for instruction-tuned LLMs. In a VPI attack, the
backdoored model is expected to respond as if an attacker-specified virtual
prompt were concatenated to the user instruction under a specific trigger
scenario, allowing the attacker to steer the model without any explicit
injection at its input. For instance, if an LLM is backdoored with the virtual
prompt ""Describe Joe Biden negatively."" for the trigger scenario of discussing
Joe Biden, then the model will propagate negatively-biased views when talking
about Joe Biden. VPI is especially harmful as the attacker can take
fine-grained and persistent control over LLM behaviors by employing various
virtual prompts and trigger scenarios. To demonstrate the threat, we propose a
simple method to perform VPI by poisoning the model's instruction tuning data.
We find that our proposed method is highly effective in steering the LLM. For
example, by poisoning only 52 instruction tuning examples (0.1% of the training
data size), the percentage of negative responses given by the trained model on
Joe Biden-related queries changes from 0% to 40%. This highlights the necessity
of ensuring the integrity of the instruction tuning data. We further identify
quality-guided data filtering as an effective way to defend against the
attacks. Our project page is available at https://poison-llm.github.io.
"
Knowledge Prompts: Injecting World Knowledge into Language Models  through Soft Prompts,Cicero Nogueira dos Santos,http://arxiv.org/pdf/2210.04726v1.pdf,2022-10-10,"['cs.cl', 'cs.ai', 'cs.lg']",2210.04726v1.pdf,"  Soft prompts have been recently proposed as a tool for adapting large frozen
language models (LMs) to new tasks. In this work, we repurpose soft prompts to
the task of injecting world knowledge into LMs. We introduce a method to train
soft prompts via self-supervised learning on data from knowledge bases. The
resulting soft knowledge prompts (KPs) are task independent and work as an
external memory of the LMs. We perform qualitative and quantitative experiments
and demonstrate that: (1) KPs can effectively model the structure of the
training data; (2) KPs can be used to improve the performance of LMs in
different knowledge intensive tasks.
"
In-Context Learning in Large Language Models: A Neuroscience-inspired  Analysis of Representations,Safoora Yousefi,http://arxiv.org/pdf/2310.00313v2.pdf,2023-09-30,['cs.cl'],2310.00313v2.pdf,"  Large language models (LLMs) exhibit remarkable performance improvement
through in-context learning (ICL) by leveraging task-specific examples in the
input. However, the mechanisms behind this improvement remain elusive. In this
work, we investigate embeddings and attention representations in Llama-2 70B
and Vicuna 13B. Specifically, we study how embeddings and attention change
after in-context-learning, and how these changes mediate improvement in
behavior. We employ neuroscience-inspired techniques, such as representational
similarity analysis (RSA), and propose novel methods for parameterized probing
and attention ratio analysis (ARA, measuring the ratio of attention to relevant
vs. irrelevant information). We designed three tasks with a priori
relationships among their conditions: reading comprehension, linear regression,
and adversarial prompt injection. We formed hypotheses about expected
similarities in task representations to investigate latent changes in
embeddings and attention. Our analyses revealed a meaningful correlation
between changes in both embeddings and attention representations with
improvements in behavioral performance after ICL. This empirical framework
empowers a nuanced understanding of how latent representations affect LLM
behavior with and without ICL, offering valuable tools and insights for future
research and practical applications.
"
From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and  Privacy,Maanak Gupta,http://arxiv.org/pdf/2307.00691v1.pdf,2023-07-03,"['cs.cr', 'cs.ai']",2307.00691v1.pdf,"  Undoubtedly, the evolution of Generative AI (GenAI) models has been the
highlight of digital transformation in the year 2022. As the different GenAI
models like ChatGPT and Google Bard continue to foster their complexity and
capability, it is critical to understand their consequences from a cybersecurity
perspective. Several recent instances have demonstrated the use of GenAI
tools on both the defensive and offensive sides of cybersecurity, and have drawn
attention to the social, ethical, and privacy implications of this technology. This
research paper highlights the limitations, challenges, potential risks, and
opportunities of GenAI in the domain of cybersecurity and privacy. The work
presents the vulnerabilities of ChatGPT, which can be exploited by malicious
users to exfiltrate malicious information bypassing the ethical constraints on
the model. This paper demonstrates successful example attacks like Jailbreaks,
reverse psychology, and prompt injection attacks on ChatGPT. The paper also
investigates how cyber offenders can use GenAI tools in developing cyber
attacks, and explores scenarios where ChatGPT can be used by adversaries to
create social engineering attacks, phishing attacks, automated hacking, attack
payload generation, malware creation, and polymorphic malware. This paper then
examines defense techniques and uses GenAI tools to improve security measures,
including cyber defense automation, reporting, threat intelligence, secure code
generation and detection, attack identification, developing ethical guidelines,
incident response plans, and malware detection. We will also discuss the
social, legal, and ethical implications of ChatGPT. In conclusion, the paper
highlights open challenges and future directions to make this GenAI secure,
safe, trustworthy, and ethical as the community understands its cybersecurity
impacts.
"
Evaluating the Instruction-Following Robustness of Large Language Models  to Prompt Injection,Zekun Li,http://arxiv.org/pdf/2308.10819v2.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.10819v2.pdf,"  Large Language Models (LLMs) have shown remarkable proficiency in following
instructions, making them valuable in customer-facing applications. However,
their impressive capabilities also raise concerns about the amplification of
risks posed by adversarial instructions, which can be injected into the model
input by third-party attackers to manipulate LLMs' original instructions and
prompt unintended actions and content. Therefore, it is crucial to understand
LLMs' ability to accurately discern which instructions to follow to ensure
their safe deployment in real-world scenarios. In this paper, we propose a
pioneering benchmark for automatically evaluating the robustness of
instruction-following LLMs against adversarial instructions injected in the
prompt. The objective of this benchmark is to quantify the extent to which LLMs
are influenced by injected adversarial instructions and assess their ability to
differentiate between these injected adversarial instructions and original user
instructions. Through experiments conducted with state-of-the-art
instruction-following LLMs, we uncover significant limitations in their
robustness against adversarial instruction injection attacks. Furthermore, our
findings indicate that prevalent instruction-tuned models are prone to being
``overfitted'' to follow any instruction phrase in the prompt without truly
understanding which instructions should be followed. This highlights the need
to address the challenge of training models to comprehend prompts instead of
merely following instruction phrases and completing the text. The data and code
can be found at \url{https://github.com/Leezekun/Adv-Instruct-Eval}.
"
Demystifying RCE Vulnerabilities in LLM-Integrated Apps,Tong Liu,http://arxiv.org/pdf/2309.02926v2.pdf,2023-09-06,['cs.cr'],2309.02926v2.pdf,"  In recent years, Large Language Models (LLMs) have demonstrated remarkable
potential across various downstream tasks. LLM-integrated frameworks, which
serve as the essential infrastructure, have given rise to many LLM-integrated
web apps. However, some of these frameworks suffer from Remote Code Execution
(RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps'
servers remotely via prompt injections. Despite the severity of these
vulnerabilities, no existing work has been conducted for a systematic
investigation of them. This leaves open the challenge of how to detect
vulnerabilities in frameworks as well as LLM-integrated apps in real-world
scenarios. To fill this gap, we present two novel strategies, including 1) a
static analysis-based tool called LLMSmith to scan the source code of the
framework to detect potential RCE vulnerabilities and 2) a prompt-based
automated testing approach to verify the vulnerability in LLM-integrated web
apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE
vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are
confirmed by the framework developers, resulting in the assignment of 7 CVE
IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which
are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17
issues to the corresponding developers and received acknowledgments.
Furthermore, we amplify the attack impact beyond achieving RCE by allowing
attackers to exploit other app users (e.g. app responses hijacking, user API
key leakage) without direct interaction between the attacker and the victim.
Lastly, we propose some mitigating strategies for improving the security
awareness of both framework and app developers, helping them to mitigate these
risks effectively.
"
Hydrogen-rich supernovae beyond the neutrino-driven core-collapse  paradigm,G. Terreran,http://arxiv.org/pdf/1709.10475v1.pdf,2017-09-29,['astro-ph.sr'],1709.10475v1.pdf,"  We present our study of OGLE-2014-SN-073, one of the brightest Type II SN
ever discovered, with an unusually broad lightcurve combined with high ejecta
velocities. From our hydrodynamical modelling we infer a remarkable ejecta mass
of $60^{+42}_{-16}$~M$_\odot$, and a relatively high explosion energy of
$12.4^{+13.0}_{-5.9} \times10^{51}$~erg. We show that this object belongs, with
a very small number of other hydrogen-rich SNe, to an energy regime that is not
explained by standard core-collapse (CC) neutrino-driven explosions. We compare
the quantities inferred by the hydrodynamical modelling with the expectations
of various exploding scenarios, trying to explain the high energy and
luminosity released. We find some qualitative similarities with
pair-instability SNe, although a prompt injection of energy by a magnetar
also seems a viable alternative to explain such an extreme event.
"
Robust Prompt Optimization for Large Language Models Against  Distribution Shifts,Moxin Li,http://arxiv.org/pdf/2305.13954v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13954v2.pdf,"  Large Language Model (LLM) has demonstrated significant ability in various
Natural Language Processing tasks. However, their effectiveness is highly
dependent on the phrasing of the task prompt, leading to research on automatic
prompt optimization using labeled task data. We reveal that these prompt
optimization techniques are vulnerable to distribution shifts such as
subpopulation shifts, which are common for LLMs in real-world scenarios such as
customer reviews analysis. In this light, we propose a new problem of robust
prompt optimization for LLMs against distribution shifts, which requires that the
prompt optimized over the labeled source group simultaneously generalize to
an unlabeled target group. To solve this problem, we propose Generalized Prompt
Optimization framework, which incorporates the unlabeled data from the target
group into prompt optimization. Extensive experimental results demonstrate the
effectiveness of the proposed framework with significant performance
improvement on the target group and comparable performance on the source group.
"
MultiPrompter: Cooperative Prompt Optimization with Multi-Agent  Reinforcement Learning,Dong-Ki Kim,http://arxiv.org/pdf/2310.16730v1.pdf,2023-10-25,['cs.lg'],2310.16730v1.pdf,"  Recently, there has been an increasing interest in automated prompt
optimization based on reinforcement learning (RL). This approach offers
important advantages, such as generating interpretable prompts and being
compatible with black-box foundation models. However, the substantial prompt
space size poses challenges for RL-based methods, often leading to suboptimal
policy convergence. This paper introduces MultiPrompter, a new framework that
views prompt optimization as a cooperative game between prompters which take
turns composing a prompt together. Our cooperative prompt optimization
effectively reduces the problem size and helps prompters learn optimal prompts.
We test our method on the text-to-image task and show its ability to generate
higher-quality images than baselines.
"
Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt  Optimization for Few-shot Learning,Chengzhengxu Li,http://arxiv.org/pdf/2308.07272v1.pdf,2023-08-14,"['cs.lg', 'cs.cl']",2308.07272v1.pdf,"  Prompt-based pre-trained language models (PLMs) paradigm have succeeded
substantially in few-shot natural language processing (NLP) tasks. However,
prior discrete prompt optimization methods require expert knowledge to design
the base prompt set and identify high-quality prompts, which is costly,
inefficient, and subjective. Meanwhile, existing continuous prompt optimization
methods improve the performance by learning the ideal prompts through the
gradient information of PLMs, whose high computational cost, and low
readability and generalizability are often concerning. To address the research
gap, we propose a Dialogue-comprised Policy-gradient-based Discrete Prompt
Optimization ($DP_2O$) method. We first design a multi-round dialogue alignment
strategy for readability prompt set generation based on GPT-4. Furthermore, we
propose an efficient prompt screening metric to identify high-quality prompts
with linear complexity. Finally, we construct a reinforcement learning (RL)
framework based on policy gradients to match the prompts to inputs optimally.
By training a policy network with only 0.67% of the PLM parameter size on the
tasks in the few-shot setting, $DP_2O$ outperforms the state-of-the-art (SOTA)
method by 1.52% in accuracy on average on four open-source datasets. Moreover,
subsequent experiments also demonstrate that $DP_2O$ has good universality,
robustness, and generalization ability.
"
PromptAgent: Strategic Planning with Language Models Enables  Expert-level Prompt Optimization,Xinyuan Wang,http://arxiv.org/pdf/2310.16427v1.pdf,2023-10-25,['cs.cl'],2310.16427v1.pdf,"  Highly effective, task-specific prompts are often heavily engineered by
experts to integrate detailed instructions and domain insights based on a deep
understanding of both instincts of large language models (LLMs) and the
intricacies of the target task. However, automating the generation of such
expert-level prompts remains elusive. Existing prompt optimization methods tend
to overlook the depth of domain knowledge and struggle to efficiently explore
the vast space of expert-level prompts. Addressing this, we present
PromptAgent, an optimization method that autonomously crafts prompts equivalent
in quality to those handcrafted by experts. At its core, PromptAgent views
prompt optimization as a strategic planning problem and employs a principled
planning algorithm, rooted in Monte Carlo tree search, to strategically
navigate the expert-level prompt space. Inspired by human-like trial-and-error
exploration, PromptAgent induces precise expert-level insights and in-depth
instructions by reflecting on model errors and generating constructive error
feedback. Such a novel framework allows the agent to iteratively examine
intermediate prompts (states), refine them based on error feedbacks (actions),
simulate future rewards, and search for high-reward paths leading to expert
prompts. We apply PromptAgent to 12 tasks spanning three practical domains:
BIG-Bench Hard (BBH), as well as domain-specific and general NLP tasks, showing
it significantly outperforms strong Chain-of-Thought and recent prompt
optimization baselines. Extensive analyses emphasize its capability to craft
expert-level, detailed, and domain-insightful prompts with great efficiency and
generalizability.
"
"Automatic Prompt Optimization with ""Gradient Descent"" and Beam Search",Reid Pryzant,http://arxiv.org/pdf/2305.03495v2.pdf,2023-05-04,"['cs.cl', 'cs.ai', 'cs.lg']",2305.03495v2.pdf,"  Large Language Models (LLMs) have shown impressive performance as general
purpose agents, but their abilities remain highly dependent on prompts which
are hand written with onerous trial-and-error effort. We propose a simple and
nonparametric solution to this problem, Automatic Prompt Optimization (APO),
which is inspired by numerical gradient descent to automatically improve
prompts, assuming access to training data and an LLM API. The algorithm uses
minibatches of data to form natural language ""gradients"" that criticize the
current prompt. The gradients are then ""propagated"" into the prompt by editing
the prompt in the opposite semantic direction of the gradient. These gradient
descent steps are guided by a beam search and bandit selection procedure which
significantly improves algorithmic efficiency. Preliminary results across three
benchmark NLP tasks and the novel problem of LLM jailbreak detection suggest
that Automatic Prompt Optimization can outperform prior prompt editing
techniques and improve an initial prompt's performance by up to 31%, by using
data to rewrite vague task descriptions into more precise annotation
instructions.
"
Discrete Prompt Optimization via Constrained Generation for Zero-shot  Re-ranker,Sukmin Cho,http://arxiv.org/pdf/2305.13729v1.pdf,2023-05-23,"['cs.ir', 'cs.ai', 'cs.cl']",2305.13729v1.pdf,"  Re-rankers, which order retrieved documents with respect to the relevance
score on the given query, have gained attention for the information retrieval
(IR) task. Rather than fine-tuning the pre-trained language model (PLM), the
large-scale language model (LLM) is utilized as a zero-shot re-ranker with
excellent results. While LLMs are highly dependent on their prompts, the impact
and optimization of prompts for the zero-shot re-ranker have not yet been
explored. Along with highlighting the impact of optimization on the zero-shot
re-ranker, we propose a novel discrete prompt optimization method, Constrained
Prompt generation (Co-Prompt), with the metric estimating the optimum for
re-ranking. Co-Prompt guides the generated texts from PLM toward optimal
prompts based on the metric without parameter update. The experimental results
demonstrate that Co-Prompt leads to outstanding re-ranking performance against
the baselines. Also, Co-Prompt generates more interpretable prompts for humans
against other prompt optimization methods.
"
Query-Dependent Prompt Evaluation and Optimization with Offline Inverse  RL,Hao Sun,http://arxiv.org/pdf/2309.06553v3.pdf,2023-09-13,"['cs.cl', 'cs.ai', 'cs.lg']",2309.06553v3.pdf,"  In this study, we aim to enhance the arithmetic reasoning ability of Large
Language Models (LLMs) through zero-shot prompt optimization. We identify a
previously overlooked objective of query dependency in such optimization and
elucidate two ensuing challenges that impede the successful and economical
design of prompt optimization techniques. One primary issue is the absence of
an effective method to evaluate prompts during inference when the golden answer
is unavailable. Concurrently, learning via interactions with the LLMs to
navigate the expansive natural language prompting space proves to be
resource-intensive. To address this, we introduce Prompt-OIRL, which harnesses
offline inverse reinforcement learning to draw insights from offline prompting
demonstration data. Such data exists as by-products when diverse prompts are
benchmarked on open-accessible datasets. With Prompt-OIRL, the query-dependent
prompt optimization objective is achieved by first learning an offline reward
model. This model can evaluate any query-prompt pairs without accessing LLMs.
Subsequently, a best-of-N strategy is deployed to recommend the optimal prompt.
Our experimental evaluations across various LLM scales and arithmetic reasoning
datasets underscore both the efficacy and economic viability of the proposed
approach.
"
ATT3D: Amortized Text-to-3D Object Synthesis,Jonathan Lorraine,http://arxiv.org/pdf/2306.07349v1.pdf,2023-06-06,"['cs.lg', 'cs.ai', 'cs.cv', '68t45', 'i.2.6; i.2.7; i.3.6; i.3.7']",2306.07349v1.pdf,"  Text-to-3D modelling has seen exciting progress by combining generative
text-to-image models with image-to-3D methods like Neural Radiance Fields.
DreamFusion recently achieved high-quality results but requires a lengthy,
per-prompt optimization to create 3D objects. To address this, we amortize
optimization over text prompts by training on many prompts simultaneously with
a unified model, instead of separately. With this, we share computation across
a prompt set, training in less time than per-prompt optimization. Our framework
- Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to
generalize to unseen setups and smooth interpolations between text for novel
assets and simple animations.
"
Temporally-Extended Prompts Optimization for SAM in Interactive Medical  Image Segmentation,Chuyun Shen,http://arxiv.org/pdf/2306.08958v1.pdf,2023-06-15,"['cs.cv', 'cs.ai', 'cs.lg']",2306.08958v1.pdf,"  The Segmentation Anything Model (SAM) has recently emerged as a foundation
model for addressing image segmentation. Owing to the intrinsic complexity of
medical images and the high annotation cost, the medical image segmentation
(MIS) community has been encouraged to investigate SAM's zero-shot capabilities
to facilitate automatic annotation. Inspired by the extraordinary
accomplishments of interactive medical image segmentation (IMIS) paradigm, this
paper focuses on assessing the potential of SAM's zero-shot capabilities within
the IMIS paradigm to amplify its benefits in the MIS domain. Regrettably, we
observe that SAM's vulnerability to prompt forms (e.g., points, bounding boxes)
becomes notably pronounced in IMIS. This leads us to develop a framework that
adaptively offers suitable prompt forms for human experts. We refer to the
framework above as temporally-extended prompts optimization (TEPO) and model it
as a Markov decision process, solvable through reinforcement learning.
Numerical experiments on the standardized benchmark BraTS2020 demonstrate that
the learned TEPO agent can further enhance SAM's zero-shot capability in the
MIS context.
"
Topological Data Analysis Guided Segment Anything Model Prompt  Optimization for Zero-Shot Segmentation in Biological Imaging,Ruben Glatt,http://arxiv.org/pdf/2306.17400v1.pdf,2023-06-30,"['cs.cv', '68t45', 'i.4.6']",2306.17400v1.pdf,"  Emerging foundation models in machine learning are models trained on vast
amounts of data that have been shown to generalize well to new tasks. Often
these models can be prompted with multi-modal inputs that range from natural
language descriptions over images to point clouds. In this paper, we propose
topological data analysis (TDA) guided prompt optimization for the Segment
Anything Model (SAM) and show preliminary results in the biological image
segmentation domain. Our approach replaces the standard grid search approach
that is used in the original implementation and finds point locations based on
their topological significance. Our results show that the TDA optimized point
cloud is much better suited for finding small objects and massively reduces
computational complexity despite the extra step in scenarios which require many
segmentations.
"
Emotion-Conditioned Text Generation through Automatic Prompt  Optimization,Yarik Menchaca Resendiz,http://arxiv.org/pdf/2308.04857v1.pdf,2023-08-09,['cs.cl'],2308.04857v1.pdf,"  Conditional natural language generation methods often require either
expensive fine-tuning or training a large language model from scratch. Both are
unlikely to lead to good results without a substantial amount of data and
computational resources. Prompt learning without changing the parameters of a
large language model presents a promising alternative. It is a cost-effective
approach, while still achieving competitive results. While this procedure is
now established for zero- and few-shot text classification and structured
prediction, it has received limited attention in conditional text generation.
We present the first automatic prompt optimization approach for
emotion-conditioned text generation with instruction-fine-tuned models. Our
method uses an iterative optimization procedure that changes the prompt by
adding, removing, or replacing tokens. As objective function, we only require a
text classifier that measures the realization of the conditional variable in
the generated text. We evaluate the method on emotion-conditioned text
generation with a focus on event reports and compare it to manually designed
prompts that also act as the seed for the optimization procedure. The optimized
prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in
contrast to manually designed seed prompts with only 0.22 macro-average F1.
"
Read-only Prompt Optimization for Vision-Language Few-shot Learning,Dongjun Lee,http://arxiv.org/pdf/2308.14960v1.pdf,2023-08-29,['cs.cv'],2308.14960v1.pdf,"  In recent years, prompt tuning has proven effective in adapting pre-trained
vision-language models to downstream tasks. These methods aim to adapt the
pre-trained models by introducing learnable prompts while keeping pre-trained
weights frozen. However, learnable prompts can affect the internal
representation within the self-attention module, which may negatively impact
performance variance and generalization, especially in data-deficient settings.
To address these issues, we propose a novel approach, Read-only Prompt
Optimization (RPO). RPO leverages masked attention to prevent the internal
representation shift in the pre-trained model. Further, to facilitate the
optimization of RPO, the read-only prompts are initialized based on special
tokens of the pre-trained model. Our extensive experiments demonstrate that RPO
outperforms CLIP and CoCoOp in base-to-new generalization and domain
generalization while displaying better robustness. Also, the proposed method
achieves better generalization on extremely data-deficient settings, while
improving parameter efficiency and computational overhead. Code is available at
https://github.com/mlvlab/RPO.
"
Large Language Models as Optimizers,Chengrun Yang,http://arxiv.org/pdf/2309.03409v1.pdf,2023-09-07,"['cs.lg', 'cs.ai', 'cs.cl']",2309.03409v1.pdf,"  Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
"
Connecting Large Language Models with Evolutionary Algorithms Yields  Powerful Prompt Optimizers,Qingyan Guo,http://arxiv.org/pdf/2309.08532v1.pdf,2023-09-15,"['cs.cl', 'cs.ai']",2309.08532v1.pdf,"  Large Language Models (LLMs) excel in various tasks, but they rely on
carefully crafted prompts that often demand substantial human effort. To
automate this process, in this paper, we propose a novel framework for discrete
prompt optimization, called EvoPrompt, which borrows the idea of evolutionary
algorithms (EAs) as they exhibit good performance and fast convergence. To
enable EAs to work on discrete prompts, which are natural language expressions
that need to be coherent and human-readable, we connect LLMs with EAs. This
approach allows us to simultaneously leverage the powerful language processing
capabilities of LLMs and the efficient optimization performance of EAs.
Specifically, abstaining from any gradients or parameters, EvoPrompt starts
from a population of prompts and iteratively generates new prompts with LLMs
based on the evolutionary operators, improving the population based on the
development set. We optimize prompts for both closed- and open-source LLMs
including GPT-3.5 and Alpaca, on 9 datasets spanning language understanding and
generation tasks. EvoPrompt significantly outperforms human-engineered prompts
and existing methods for automatic prompt generation by up to 25% and 14%
respectively. Furthermore, EvoPrompt demonstrates that connecting LLMs with EAs
creates synergies, which could inspire further research on the combination of
LLMs and conventional algorithms.
"
Black-Box Prompt Optimization: Aligning Large Language Models without  Model Training,Jiale Cheng,http://arxiv.org/pdf/2311.04155v2.pdf,2023-11-07,['cs.cl'],2311.04155v2.pdf,"  Large language models (LLMs) have shown impressive success in various
applications. However, these models are often not well aligned with human
intents, which calls for additional treatments on them, that is, the alignment
problem. To make LLMs better follow user instructions, existing alignment
methods mostly focus on further training them. However, the extra training of
LLMs is usually expensive in terms of GPU compute; worse still, LLMs of
interest are oftentimes not accessible for user-demanded training, such as
GPTs. In this work, we take a different perspective -- Black-Box Prompt
Optimization (BPO) -- to perform alignments. The idea is to optimize user
prompts to suit LLMs' input understanding, so as to best realize users' intents
without updating LLMs' parameters. BPO is model-agnostic and the empirical
results demonstrate that the BPO-aligned ChatGPT yields a 22% increase in the
win rate against its original version, and 10% for GPT-4. Importantly, the
BPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and it
also brings additional performance gains when combining BPO with PPO or DPO.
Code and datasets are released at https://github.com/thu-coai/BPO.
"
In-context Examples Selection for Machine Translation,Sweta Agrawal,http://arxiv.org/pdf/2212.02437v1.pdf,2022-12-05,['cs.cl'],2212.02437v1.pdf,"  Large-scale generative models show an impressive ability to perform a wide
range of Natural Language Processing (NLP) tasks using in-context learning,
where a few examples are used to describe a task to the model. For Machine
Translation (MT), these examples are typically randomly sampled from the
development dataset with a similar distribution as the evaluation set. However,
it is unclear how the choice of these in-context examples and their ordering
impacts the output translation quality. In this work, we aim to understand the
properties of good in-context examples for MT in both in-domain and
out-of-domain settings. We show that the translation quality and the domain of
the in-context examples matter and that a 1-shot noisy, unrelated example can have
a catastrophic impact on output quality. While concatenating multiple random
examples reduces the effect of noise, a single good prompt optimized to
maximize translation quality on the development dataset can elicit learned
information from the pre-trained language model. Adding similar examples based
on an n-gram overlap with the test source significantly and consistently
improves the translation quality of the outputs, outperforming a strong kNN-MT
baseline in 2 out of 4 out-of-domain datasets.
"
ZegOT: Zero-shot Segmentation Through Optimal Transport of Text Prompts,Kwanyoung Kim,http://arxiv.org/pdf/2301.12171v2.pdf,2023-01-28,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",2301.12171v2.pdf,"  Recent success of large-scale Contrastive Language-Image Pre-training (CLIP)
has led to great promise in zero-shot semantic segmentation by transferring
image-text aligned knowledge to pixel-level classification. However, existing
methods usually require an additional image encoder or retraining/tuning the
CLIP module. Here, we propose a novel Zero-shot segmentation with Optimal
Transport (ZegOT) method that matches multiple text prompts with frozen image
embeddings through optimal transport. In particular, we introduce a novel
Multiple Prompt Optimal Transport Solver (MPOT), which is designed to learn an
optimal mapping between multiple text prompts and visual feature maps of the
frozen image encoder hidden layers. This unique mapping method facilitates each
of the multiple text prompts to effectively focus on distinct visual semantic
attributes. Through extensive experiments on benchmark datasets, we show that
our method achieves the state-of-the-art (SOTA) performance over existing
Zero-shot Semantic Segmentation (ZS3) approaches.
"
DeltaEdit: Exploring Text-free Training for Text-Driven Image  Manipulation,Yueming Lyu,http://arxiv.org/pdf/2303.06285v1.pdf,2023-03-11,['cs.cv'],2303.06285v1.pdf,"  Text-driven image manipulation remains challenging in training or inference
flexibility. Conditional generative models depend heavily on expensive
annotated training data. Meanwhile, recent frameworks, which leverage
pre-trained vision-language models, are limited by either per text-prompt
optimization or inference-time hyper-parameters tuning. In this work, we
propose a novel framework named \textit{DeltaEdit} to address these problems.
Our key idea is to investigate and identify a space, namely delta image and
text space that has well-aligned distribution between CLIP visual feature
differences of two images and CLIP textual embedding differences of source and
target texts. Based on the CLIP delta space, the DeltaEdit network is designed
to map the CLIP visual features differences to the editing directions of
StyleGAN at training phase. Then, in inference phase, DeltaEdit predicts the
StyleGAN's editing directions from the differences of the CLIP textual
features. In this way, DeltaEdit is trained in a text-free manner. Once
trained, it can well generalize to various text prompts for zero-shot inference
without bells and whistles. Code is available at
https://github.com/Yueming6568/DeltaEdit.
"
Deep Language Networks: Joint Prompt Training of Stacked LLMs using  Variational Inference,Alessandro Sordoni,http://arxiv.org/pdf/2306.12509v1.pdf,2023-06-21,"['cs.cl', 'cs.lg']",2306.12509v1.pdf,"  We view large language models (LLMs) as stochastic \emph{language layers} in
a network, where the learnable parameters are the natural language
\emph{prompts} at each layer. We stack two such layers, feeding the output of
one layer to the next. We call the stacked architecture a \emph{Deep Language
Network} (DLN). We first show how to effectively perform prompt optimization
for a 1-Layer language network (DLN-1). We then show how to train 2-layer DLNs
(DLN-2), where two prompts must be learnt. We consider the output of the first
layer as a latent variable to marginalize, and devise a variational inference
algorithm for joint prompt training. A DLN-2 reaches higher performance than a
single layer, sometimes comparable to few-shot GPT-4 even when each LLM in the
network is smaller and less powerful. The DLN code is open source:
https://github.com/microsoft/deep-language-networks .
"
Unnatural language processing: How do language models handle  machine-generated prompts?,Corentin Kervadec,http://arxiv.org/pdf/2310.15829v1.pdf,2023-10-24,['cs.cl'],2310.15829v1.pdf,"  Language model prompt optimization research has shown that semantically and
grammatically well-formed manually crafted prompts are routinely outperformed
by automatically generated token sequences with no apparent meaning or
syntactic structure, including sequences of vectors from a model's embedding
space. We use machine-generated prompts to probe how models respond to input
that is not composed of natural language expressions. We study the behavior of
models of different sizes in multiple semantic tasks in response to both
continuous and discrete machine-generated prompts, and compare it to the
behavior in response to human-generated natural-language prompts. Even when
producing a similar output, machine-generated and human prompts trigger
different response patterns through the network processing pathways, including
different perplexities, different attention and output entropy distributions,
and different unit activation profiles. We provide preliminary insight into the
nature of the units activated by different prompt types, suggesting that only
natural language prompts recruit a genuinely linguistic circuit.
"
Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained  Language Models,Paul Youssef,http://arxiv.org/pdf/2310.16570v1.pdf,2023-10-25,['cs.cl'],2310.16570v1.pdf,"  Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich
in world knowledge. This fact has sparked the interest of the community in
quantifying the amount of factual knowledge present in PLMs, as this explains
their performance on downstream tasks, and potentially justifies their use as
knowledge bases. In this work, we survey methods and datasets that are used to
probe PLMs for factual knowledge. Our contributions are: (1) We propose a
categorization scheme for factual probing methods that is based on how their
inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of
the datasets used for factual probing; (3) We synthesize insights about
knowledge retention and prompt optimization in PLMs, analyze obstacles to
adopting PLMs as knowledge bases and outline directions for future work.
"
Task-driven Prompt Evolution for Foundation Models,Rachana Sathish,http://arxiv.org/pdf/2310.17128v1.pdf,2023-10-26,['cs.cv'],2310.17128v1.pdf,"  Promptable foundation models, particularly Segment Anything Model (SAM), have
emerged as a promising alternative to the traditional task-specific supervised
learning for image segmentation. However, many evaluation studies have found
their performance on medical imaging modalities to be underwhelming
compared to conventional deep learning methods. In the world of large
pre-trained language and vision-language models, learning prompt from
downstream tasks has achieved considerable success in improving performance. In
this work, we propose a plug-and-play Prompt Optimization Technique for
foundation models like SAM (SAMPOT) that utilizes the downstream segmentation
task to optimize the human-provided prompt to obtain improved performance. We
demonstrate the utility of SAMPOT on lung segmentation in chest X-ray images
and obtain an improvement on a significant number of cases ($\sim75\%$) over
human-provided initial prompts. We hope this work will lead to further
investigations in the nascent field of automatic visual prompt-tuning.
"
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning,Mingkai Deng,http://arxiv.org/pdf/2205.12548v3.pdf,2022-05-25,"['cs.cl', 'cs.lg']",2205.12548v3.pdf,"  Prompting has shown impressive success in enabling large pretrained language
models (LMs) to perform diverse NLP tasks, especially when only few downstream
data are available. Automatically finding the optimal prompt for each task,
however, is challenging. Most existing work resorts to tuning soft prompt
(e.g., embeddings) which falls short of interpretability, reusability across
LMs, and applicability when gradients are not accessible. Discrete prompt, on
the other hand, is difficult to optimize, and is often created by ""enumeration
(e.g., paraphrasing)-then-selection"" heuristics that do not explore the prompt
space systematically. This paper proposes RLPrompt, an efficient discrete
prompt optimization approach with reinforcement learning (RL). RLPrompt
formulates a parameter-efficient policy network that generates the desired
discrete prompt after training with reward. To overcome the complexity and
stochasticity of reward signals from the large LM environment, we incorporate
effective reward stabilization that substantially enhances the training
efficiency. RLPrompt is flexibly applicable to different types of LMs, such as
masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both
classification and generation tasks. Experiments on few-shot classification and
unsupervised text style transfer show superior performance over a wide range of
existing finetuning or prompting methods. Interestingly, the resulting
optimized prompts are often ungrammatical gibberish text; and surprisingly,
those gibberish prompts are transferrable between different LMs to retain
significant performance, indicating LM prompting may not follow human language
patterns.
"
Diversity-Aware Meta Visual Prompting,Qidong Huang,http://arxiv.org/pdf/2303.08138v1.pdf,2023-03-14,['cs.cv'],2303.08138v1.pdf,"  We present Diversity-Aware Meta Visual Prompting~(DAM-VP), an efficient and
effective prompting method for transferring pre-trained models to downstream
tasks with frozen backbone. A challenging issue in visual prompting is that
image datasets sometimes have a large data diversity whereas a per-dataset
generic prompt can hardly handle the complex distribution shift toward the
original pretraining data distribution properly. To address this issue, we
propose a dataset Diversity-Aware prompting strategy whose initialization is
realized by a Meta-prompt. Specifically, we cluster the downstream dataset into
small homogeneity subsets in a diversity-adaptive way, with each subset having its
own prompt optimized separately. Such a divide-and-conquer design reduces the
optimization difficulty greatly and significantly boosts the prompting
performance. Furthermore, all the prompts are initialized with a meta-prompt,
which is learned across several datasets. It is a bootstrapped paradigm, with
the key observation that the prompting knowledge learned from previous datasets
could help the prompt to converge faster and perform better on a new dataset.
During inference, we dynamically select a proper prompt for each input, based
on the feature distance between the input and each subset. Through extensive
experiments, our DAM-VP demonstrates superior efficiency and effectiveness,
clearly surpassing previous prompting methods in a series of downstream
datasets for different pretraining models. Our code is available at:
\url{https://github.com/shikiw/DAM-VP}.
"
DRPT: Disentangled and Recurrent Prompt Tuning for Compositional  Zero-Shot Learning,Xiaocheng Lu,http://arxiv.org/pdf/2305.01239v1.pdf,2023-05-02,"['cs.cv', 'cs.ai']",2305.01239v1.pdf,"  Compositional Zero-shot Learning (CZSL) aims to recognize novel concepts
composed of known knowledge without training samples. Standard CZSL either
identifies visual primitives or enhances unseen composed entities, and as a
result, entanglement between state and object primitives cannot be fully
utilized. Admittedly, vision-language models (VLMs) could naturally cope with
CZSL through tuning prompts, while uneven entanglement leads prompts to be
dragged into local optimum. In this paper, we take a further step to introduce
a novel Disentangled and Recurrent Prompt Tuning framework termed DRPT to
better tap the potential of VLMs in CZSL. Specifically, the state and object
primitives are deemed as learnable tokens of vocabulary embedded in prompts and
tuned on seen compositions. Instead of jointly tuning state and object, we
devise a disentangled and recurrent tuning strategy to suppress the traction
force caused by entanglement and gradually optimize the token parameters,
leading to a better prompt space. Notably, we develop a progressive fine-tuning
procedure that allows for incremental updates to the prompts, optimizing the
object first, then the state, and vice versa. Meanwhile, the optimization of
state and object is independent, thus clearer features can be learned to
further alleviate the issue of entangling misleading optimization. Moreover, we
quantify and analyze the entanglement in CZSL and supplement entanglement
rebalancing optimization schemes. DRPT surpasses representative
state-of-the-art methods on extensive benchmark datasets, demonstrating
superiority in both accuracy and efficiency.
"
Getting MoRE out of Mixture of Language Model Reasoning Experts,Chenglei Si,http://arxiv.org/pdf/2305.14628v2.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14628v2.pdf,"  While recent large language models (LLMs) improve on various question
answering (QA) datasets, it remains difficult for a single model to generalize
across question types that require distinct reasoning abilities. We provide
empirical evidence that state-of-the-art LLMs suffer from poor generalizability
on reasoning types beyond those seen in the prompt. To remedy this, we propose
a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse
specialized language models. We specialize the backbone language model with
prompts optimized for different reasoning categories, including factual,
multihop, mathematical, and commonsense reasoning. Our key insight is to
leverage agreement among the specialized experts to select the best answer for
each question, or to abstain from answering. This gives MoRE higher accuracy
than any single specialized model on a collection of 12 QA datasets from four
reasoning types. Beyond generalizability, the interpretable design of MoRE
improves selective question answering results compared to baselines without
incorporating inter-expert agreement. This framework is also more interpretable
and useful to human consumers of QA outputs. Our human study confirms that
presenting expert predictions and the answer selection process helps annotators
more accurately calibrate when to trust the system's output. We release all
code and data to facilitate future work.
"
Unveiling the Potential of Knowledge-Prompted ChatGPT for Enhancing Drug  Trafficking Detection on Social Media,Chuanbo Hu,http://arxiv.org/pdf/2307.03699v1.pdf,2023-07-07,"['cs.cl', 'cs.ai', 'cs.si']",2307.03699v1.pdf,"  Social media platforms such as Instagram and Twitter have emerged as critical
channels for drug marketing and illegal sale. Detecting and labeling online
illicit drug trafficking activities becomes important in addressing this issue.
However, the effectiveness of conventional supervised learning methods in
detecting drug trafficking heavily relies on having access to substantial
amounts of labeled data, while data annotation is time-consuming and
resource-intensive. Furthermore, these models often face challenges in
accurately identifying trafficking activities when drug dealers use deceptive
language and euphemisms to avoid detection. To overcome this limitation, we
conduct the first systematic study on leveraging large language models (LLMs),
such as ChatGPT, to detect illicit drug trafficking activities on social media.
We propose an analytical framework to compose \emph{knowledge-informed
prompts}, which serve as the interface that humans can interact with and use
LLMs to perform the detection task. Additionally, we design a Monte Carlo
dropout based prompt optimization method to further improve performance and
interpretability. Our experimental findings demonstrate that the proposed
framework outperforms other baseline language models in terms of drug
trafficking detection accuracy, showing a remarkable improvement of nearly
12\%. By integrating prior knowledge and the proposed prompts, ChatGPT can
effectively identify and label drug trafficking activities on social networks,
even in the presence of deceptive language and euphemisms used by drug dealers
to evade detection. The implications of our research extend to social networks,
emphasizing the importance of incorporating prior knowledge and scenario-based
prompts into analytical tools to improve online security and public safety.
"
AutoHint: Automatic Prompt Optimization with Hint Generation,Hong Sun,http://arxiv.org/pdf/2307.07415v2.pdf,2023-07-13,"['cs.cl', 'cs.ai']",2307.07415v2.pdf,"  This paper presents AutoHint, a novel framework for automatic prompt
engineering and optimization for Large Language Models (LLM). While LLMs have
demonstrated remarkable ability in achieving high-quality annotation in various
tasks, the key to applying this ability to specific tasks lies in developing
high-quality prompts. Thus we propose a framework to inherit the merits of both
in-context learning and zero-shot learning by incorporating enriched
instructions derived from input-output demonstrations to optimize the original
prompt. We refer to the enrichment as the hint and propose a framework to
automatically generate the hint from labeled data. More concretely, starting
from an initial prompt, our method first instructs a LLM to deduce new hints
for selected samples from incorrect predictions, and then summarizes from
per-sample hints and adds the results back to the initial prompt to form a new,
enriched instruction. The proposed method is evaluated on the BIG-Bench
Instruction Induction dataset for both zero-shot and few-shot prompts, where
experiments demonstrate our method is able to significantly boost accuracy for
multiple tasks.
"
"Optimizing Mobile-Edge AI-Generated Everything (AIGX) Services by Prompt  Engineering: Fundamental, Framework, and Case Study",Yinqiu Liu,http://arxiv.org/pdf/2309.01065v1.pdf,2023-09-03,['cs.ni'],2309.01065v1.pdf,"  As the next-generation paradigm for content creation, AI-Generated Content
(AIGC), i.e., generating content automatically by Generative AI (GAI) based on
user prompts, has gained great attention and success recently. With the
ever-increasing power of GAI, especially the emergence of Pretrained Foundation
Models (PFMs) that contain billions of parameters and prompt engineering
methods (i.e., finding the best prompts for the given task), the application
range of AIGC is rapidly expanding, covering various forms of information for
humans, systems, and networks, such as network designs, channel coding, and
optimization solutions. In this article, we present the concept of mobile-edge
AI-Generated Everything (AIGX). Specifically, we first review the building
blocks of AIGX, the evolution from AIGC to AIGX, as well as practical AIGX
applications. Then, we present a unified mobile-edge AIGX framework, which
employs edge devices to provide PFM-empowered AIGX services and optimizes such
services via prompt engineering. More importantly, we demonstrate that
suboptimal prompts lead to poor generation quality, which adversely affects
user satisfaction, edge network performance, and resource utilization.
Accordingly, we conduct a case study, showcasing how to train an effective
prompt optimizer using ChatGPT and investigating how much improvement is
possible with prompt engineering in terms of user experience, quality of
generation, and network performance.
"
Automatic Data Transformation Using Large Language Model: An  Experimental Study on Building Energy Data,Ankita Sharma,http://arxiv.org/pdf/2309.01957v2.pdf,2023-09-05,['cs.db'],2309.01957v2.pdf,"  Existing approaches to automatic data transformation are insufficient to meet
the requirements in many real-world scenarios, such as the building sector.
First, there is no convenient interface for domain experts to provide domain
knowledge easily. Second, they require significant training data collection
overheads. Third, the accuracy suffers from complicated schema changes. To
bridge this gap, we present a novel approach that leverages the unique
capabilities of large language models (LLMs) in coding, complex reasoning, and
zero-shot learning to generate SQL code that transforms the source datasets
into the target datasets. We demonstrate the viability of this approach by
designing an LLM-based framework, termed SQLMorpher, which comprises a prompt
generator that integrates the initial prompt with optional domain knowledge and
historical patterns in external databases. It also implements an iterative
prompt optimization mechanism that automatically improves the prompt based on
flaw detection. The key contributions of this work include (1) pioneering an
end-to-end LLM-based solution for data transformation, (2) developing a
benchmark dataset of 105 real-world building energy data transformation
problems, and (3) conducting an extensive empirical evaluation where our
approach achieved 96% accuracy in all 105 problems. SQLMorpher demonstrates the
effectiveness of utilizing LLMs in complex, domain-specific challenges,
highlighting their potential to drive sustainable solutions.
"
Automatic Prompt Rewriting for Personalized Text Generation,Cheng Li,http://arxiv.org/pdf/2310.00152v1.pdf,2023-09-29,['cs.cl'],2310.00152v1.pdf,"  Facilitated by large language models (LLMs), personalized text generation has
become a rapidly growing research direction. Most existing studies focus on
designing specialized models for a particular domain, or they require
fine-tuning the LLMs to generate personalized text. We consider a typical
scenario in which the large language model, which generates personalized
output, is frozen and can only be accessed through APIs. Under this constraint,
all one can do is to improve the input text (i.e., text prompts) sent to the
LLM, a procedure that is usually done manually. In this paper, we propose a
novel method to automatically revise prompts for personalized text generation.
The proposed method takes the initial prompts generated by a state-of-the-art,
multistage framework for personalized generation and rewrites a few critical
components that summarize and synthesize the personal context. The prompt
rewriter employs a training paradigm that chains together supervised learning
(SL) and reinforcement learning (RL), where SL reduces the search space of RL
and RL facilitates end-to-end training of the rewriter. Using datasets from
three representative domains, we demonstrate that the rewritten prompts
outperform both the original prompts and the prompts optimized via supervised
learning or reinforcement learning alone. In-depth analysis of the rewritten
prompts shows that they are not only human readable, but also able to guide
manual revision of prompts when there is limited resource to employ
reinforcement learning to train the prompt rewriter, or when it is costly to
deploy an automatic prompt rewriter for inference.
"
DeltaSpace: A Semantic-aligned Feature Space for Flexible Text-guided  Image Editing,Yueming Lyu,http://arxiv.org/pdf/2310.08785v1.pdf,2023-10-12,"['cs.cv', 'cs.ai']",2310.08785v1.pdf,"  Text-guided image editing faces significant challenges to training and
inference flexibility. Much literature collects large amounts of annotated
image-text pairs to train text-conditioned generative models from scratch,
which is expensive and not efficient. After that, some approaches that leverage
pre-trained vision-language models are put forward to avoid data collection,
but they are also limited by either per text-prompt optimization or
inference-time hyper-parameters tuning. To address these issues, we investigate
and identify a specific space, referred to as CLIP DeltaSpace, where the CLIP
visual feature difference of two images is semantically aligned with the CLIP
textual feature difference of their corresponding text descriptions. Based on
DeltaSpace, we propose a novel framework called DeltaEdit, which maps the CLIP
visual feature differences to the latent space directions of a generative model
during the training phase, and predicts the latent space directions from the
CLIP textual feature differences during the inference phase. This design
endows DeltaEdit with two advantages: (1) text-free training; (2)
generalization to various text prompts for zero-shot inference. Extensive
experiments validate the effectiveness and versatility of DeltaEdit with
different generative models, including both the GAN model and the diffusion
model, in achieving flexible text-guided image editing. Code is available at
https://github.com/Yueming6568/DeltaEdit.
"
InstructPix2NeRF: Instructed 3D Portrait Editing from a Single Image,Jianhui Li,http://arxiv.org/pdf/2311.02826v1.pdf,2023-11-06,['cs.cv'],2311.02826v1.pdf,"  With the success of Neural Radiance Field (NeRF) in 3D-aware portrait
editing, a variety of works have achieved promising results regarding both
quality and 3D consistency. However, these methods heavily rely on per-prompt
optimization when handling natural language as editing instructions. Due to the
lack of labeled human face 3D datasets and effective architectures, the area of
human-instructed 3D-aware editing for open-world portraits in an end-to-end
manner remains under-explored. To solve this problem, we propose an end-to-end
diffusion-based framework termed InstructPix2NeRF, which enables instructed
3D-aware portrait editing from a single open-world image with human
instructions. At its core lies a conditional latent 3D diffusion process that
lifts 2D editing to 3D space by learning the correlation between the paired
images' difference and the instructions via triplet data. With the help of our
proposed token position randomization strategy, we could even achieve
multi-semantic editing through one single pass with the portrait identity
well-preserved. Besides, we further propose an identity consistency module that
directly modulates the extracted identity signals into our diffusion process,
which increases the multi-view 3D identity consistency. Extensive experiments
verify the effectiveness of our method and show its superiority against strong
baselines quantitatively and qualitatively.
"
What Changes Can Large-scale Language Models Bring? Intensive Study on  HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers,Boseop Kim,http://arxiv.org/pdf/2109.04650v2.pdf,2021-09-10,['cs.cl'],2109.04650v2.pdf,"  GPT-3 shows remarkable in-context learning ability of large-scale language
models (LMs) trained on hundreds of billion scale data. Here we address some
remaining issues less reported by the GPT-3 paper, such as a non-English LM,
the performances of different sized models, and the effect of recently
introduced prompt optimization on in-context learning. To achieve this, we
introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric
corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA
with our training configuration shows state-of-the-art in-context zero-shot and
few-shot learning performances on various downstream tasks in Korean. Also, we
show the performance benefits of prompt-based learning and demonstrate how it
can be integrated into the prompt engineering pipeline. Then we discuss the
possibility of materializing the No Code AI paradigm by providing AI
prototyping capabilities to non-experts of ML by introducing HyperCLOVA studio,
an interactive prompt engineering interface. Lastly, we demonstrate the
potential of our methods with three successful in-house applications.
"
MLLM-DataEngine: An Iterative Refinement Approach for MLLM,Zhiyuan Zhao,http://arxiv.org/pdf/2308.13566v2.pdf,2023-08-25,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv']",2308.13566v2.pdf,"  Despite the great advance of Multimodal Large Language Models (MLLMs) in both
instruction dataset building and benchmarking, the independence of training and
evaluation makes current MLLMs hard to further improve their capability under
the guidance of evaluation results with a relatively low human cost. In this
paper, we propose MLLM-DataEngine, a novel closed-loop system that bridges data
generation, model training, and evaluation. Within each loop iteration, the
MLLM-DataEngine first analyzes the weaknesses of the model based on the
evaluation results, then generates a proper incremental dataset for the next
training iteration, and enhances the model capability iteratively. Compared
with previous
data collection methods which are separate from the benchmarking, the data
generated by MLLM-DataEngine shows better targeting, quality, and correctness.
For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts
the ratio of different types of data within each incremental dataset based on
the benchmarking results. For quality, we resort to GPT-4 to generate
high-quality data with each given data type. For correctness, prompt design is
critical to the data generation results. Rather than relying on hand-crafted
prompts, we propose an Interactive Prompt Optimization strategy, which optimizes
the prompt through multi-round interaction between a human and GPT and greatly
improves the correctness of the generated data. Through extensive experiments,
we find that MLLM-DataEngine can boost MLLM capability in a targeted and
automatic manner, with minimal human participation. We hope it can serve as a
general solution for building future MLLMs. The MLLM-DataEngine has been
open-sourced and is now available at
https://github.com/opendatalab/MLLM-DataEngine.
"
Unleashing the potential of prompt engineering in Large Language Models:  a comprehensive review,Banghao Chen,http://arxiv.org/pdf/2310.14735v2.pdf,2023-10-23,"['cs.cl', 'cs.ai', 'i.2.7']",2310.14735v2.pdf,"  This paper delves into the pivotal role of prompt engineering in unleashing
the capabilities of Large Language Models (LLMs). Prompt engineering is the
process of structuring input text for LLMs and is a technique integral to
optimizing the efficacy of LLMs. This survey elucidates foundational principles
of prompt engineering, such as role-prompting, one-shot, and few-shot
prompting, as well as more advanced methodologies such as the chain-of-thought
and tree-of-thoughts prompting. The paper sheds light on how external
assistance in the form of plugins can assist in this task, and reduce machine
hallucination by retrieving external knowledge. We subsequently delineate
prospective directions in prompt engineering research, emphasizing the need for
a deeper understanding of structures and the role of agents in Artificial
Intelligence-Generated Content (AIGC) tools. We discuss how to assess the
efficacy of prompt methods from different perspectives and using different
methods. Finally, we gather information about the application of prompt
engineering in such fields as education and programming, showing its
transformative potential. This comprehensive survey aims to serve as a friendly
guide for anyone venturing through the big world of LLMs and prompt
engineering.
"
Prompt Engineering For Students of Medicine and Their Teachers,Thomas F. Heston,http://arxiv.org/pdf/2308.11628v1.pdf,2023-08-08,['cs.hc'],2308.11628v1.pdf,"  ""Prompt Engineering for Students of Medicine and Their Teachers"" brings the
principles of prompt engineering for large language models such as ChatGPT and
Google Bard to medical education. This book contains a comprehensive guide to
prompt engineering to help both teachers and students improve education in the
medical field. Just as prompt engineering is critical in getting good
information out of an AI, it is also critical to get students to think and
understand more deeply. The principles of prompt engineering that we have
learned from AI systems have the potential to simultaneously revolutionize
learning in the healthcare field. The book analyzes from multiple angles the
anatomy of a good prompt for both AI models and students. The different types
of prompts are examined, showing how each style has unique characteristics and
applications. The principles of prompt engineering, applied properly, are
demonstrated to be effective in teaching across the diverse fields of anatomy,
physiology, pathology, pharmacology, and clinical skills. Just like ChatGPT and
similar large language AI models, students need clear and detailed prompting in
order for them to fully understand a topic. Using identical principles, a
prompt that gets good information from an AI will also cause a student to think
more deeply and accurately. The process of prompt engineering facilitates this
process. Because each chapter contains multiple examples and key takeaways, it
is a practical guide for implementing prompt engineering in the learning
process. It provides a hands-on approach to ensure readers can immediately
apply the concepts they learn.
"
Prompting AI Art: An Investigation into the Creative Skill of Prompt  Engineering,Jonas Oppenlaender,http://arxiv.org/pdf/2303.13534v1.pdf,2023-03-13,"['cs.hc', 'h.m']",2303.13534v1.pdf,"  Humankind is entering a novel era of creativity - an era in which anybody can
synthesize digital content. The paradigm under which this revolution takes
place is prompt-based learning (or in-context learning). This paradigm has
found fruitful application in text-to-image generation where it is being used
to synthesize digital images from zero-shot text prompts in natural language
for the purpose of creating AI art. This activity is referred to as prompt
engineering - the practice of iteratively crafting prompts to generate and
improve images. In this paper, we investigate prompt engineering as a novel
creative skill for creating prompt-based art. In three studies with
participants recruited from a crowdsourcing platform, we explore whether
untrained participants could 1) recognize the quality of prompts, 2) write
prompts, and 3) improve their prompts. Our results indicate that participants
could assess the quality of prompts and respective images. This ability
increased with the participants' experience and interest in art. Participants
were further able to write prompts in rich descriptive language. However, even
though participants were specifically instructed to generate artworks, their
prompts were missing the specific vocabulary needed to apply a
certain style to the generated images. Our results suggest that prompt
engineering is a learned skill that requires expertise and practice. Based on
our findings and experience with running our studies with participants
recruited from a crowdsourcing platform, we provide ten recommendations for
conducting experimental research on text-to-image generation and prompt
engineering with a paid crowd. Our studies offer a deeper understanding of
prompt engineering thereby opening up avenues for research on the future of
prompt engineering. We conclude by speculating on four possible futures of
prompt engineering.
"
Review of Large Vision Models and Visual Prompt Engineering,Jiaqi Wang,http://arxiv.org/pdf/2307.00855v1.pdf,2023-07-03,"['cs.cv', 'cs.ai']",2307.00855v1.pdf,"  Visual prompt engineering is a fundamental technology in the field of visual
and image Artificial General Intelligence, serving as a key component for
achieving zero-shot capabilities. As the development of large vision models
progresses, the importance of prompt engineering becomes increasingly evident.
Designing suitable prompts for specific visual tasks has emerged as a
meaningful research direction. This review aims to summarize the methods
employed in the computer vision domain for large vision models and visual
prompt engineering, exploring the latest advancements in visual prompt
engineering. We present influential large models in the visual domain and a
range of prompt engineering methods employed on these models. It is our hope
that this review provides a comprehensive and systematic description of prompt
engineering methods based on large visual models, offering valuable insights
for future researchers in their exploration of this field.
"
Prompt Engineering for Healthcare: Methodologies and Applications,Jiaqi Wang,http://arxiv.org/pdf/2304.14670v1.pdf,2023-04-28,['cs.ai'],2304.14670v1.pdf,"  This review will introduce the latest advances in prompt engineering in the
field of natural language processing (NLP) for the medical domain. First, we
will provide a brief overview of the development of prompt engineering and
emphasize its significant contributions to healthcare NLP applications such as
question-answering systems, text summarization, and machine translation. With
the continuous improvement of general large language models, the importance of
prompt engineering in the healthcare domain is becoming increasingly prominent.
The aim of this article is to provide useful resources and bridges for
healthcare NLP researchers to better explore the application of prompt
engineering in this field. We hope that this review can provide new ideas and
inspire ample possibilities for research and application in medical NLP.
"
A Brief History of Prompt: Leveraging Language Models,Golam Md Muktadir,http://arxiv.org/pdf/2310.04438v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.04438v1.pdf,"  This paper presents a comprehensive exploration of the evolution of prompt
engineering and generation in the field of natural language processing (NLP).
Starting from the early language models and information retrieval systems, we
trace the key developments that have shaped prompt engineering over the years.
The introduction of attention mechanisms in 2015 revolutionized language
understanding, leading to advancements in controllability and
context-awareness. Subsequent breakthroughs in reinforcement learning
techniques further enhanced prompt engineering, addressing issues like exposure
bias and biases in generated text. We examine the significant contributions in
2018 and 2019, focusing on fine-tuning strategies, control codes, and
template-based generation. The paper also discusses the growing importance of
fairness, human-AI collaboration, and low-resource adaptation. In 2020 and
2021, contextual prompting and transfer learning gained prominence, while 2022
and 2023 witnessed the emergence of advanced techniques like unsupervised
pre-training and novel reward shaping. Throughout the paper, we reference
specific research studies that exemplify the impact of various developments on
prompt engineering. The journey of prompt engineering continues, with ethical
considerations being paramount for the responsible and inclusive future of AI
systems.
"
A Systematic Survey of Prompt Engineering on Vision-Language Foundation  Models,Jindong Gu,http://arxiv.org/pdf/2307.12980v1.pdf,2023-07-24,['cs.cv'],2307.12980v1.pdf,"  Prompt engineering is a technique that involves augmenting a large
pre-trained model with task-specific hints, known as prompts, to adapt the
model to new tasks. Prompts can be created manually as natural language
instructions or generated automatically as either natural language instructions
or vector representations. Prompt engineering enables the ability to perform
predictions based solely on prompts without updating model parameters, and the
easier application of large pre-trained models in real-world tasks. In past
years, Prompt engineering has been well-studied in natural language processing.
Recently, it has also been intensively studied in vision-language modeling.
However, there is currently a lack of a systematic overview of prompt
engineering on pre-trained vision-language models. This paper aims to provide a
comprehensive survey of cutting-edge research in prompt engineering on three
types of vision-language models: multimodal-to-text generation models (e.g.
Flamingo), image-text matching models (e.g. CLIP), and text-to-image generation
models (e.g. Stable Diffusion). For each type of model, a brief model summary,
prompting methods, prompting-based applications, and the corresponding
responsibility and integrity issues are summarized and discussed. Furthermore,
the commonalities and differences between prompting on vision-language models,
language models, and vision models are also discussed. The challenges, future
directions, and research opportunities are summarized to foster future research
on this topic.
"
Prompt Engineering and Calibration for Zero-Shot Commonsense Reasoning,Chenkai Ma,http://arxiv.org/pdf/2304.06962v1.pdf,2023-04-14,"['cs.cl', 'cs.ai']",2304.06962v1.pdf,"  Prompt engineering and calibration make large language models excel at
reasoning tasks, including multiple choice commonsense reasoning. From a
practical perspective, we investigate and evaluate these strategies on smaller
language models. Through experiments on five commonsense reasoning benchmarks,
we find that each strategy favors certain models, but their joint effects are
mostly negative.
"
Just Tell Me: Prompt Engineering in Business Process Management,Kiran Busch,http://arxiv.org/pdf/2304.07183v1.pdf,2023-04-14,"['cs.ai', 'cs.cl', 'cs.lg']",2304.07183v1.pdf,"  GPT-3 and several other language models (LMs) can effectively address various
natural language processing (NLP) tasks, including machine translation and text
summarization. Recently, they have also been successfully employed in the
business process management (BPM) domain, e.g., for predictive process
monitoring and process extraction from text. This, however, typically requires
fine-tuning the employed LM, which, among other things, necessitates large amounts of
suitable training data. A possible solution to this problem is the use of
prompt engineering, which leverages pre-trained LMs without fine-tuning them.
Recognizing this, we argue that prompt engineering can help bring the
capabilities of LMs to BPM research. We use this position paper to develop a
research agenda for the use of prompt engineering for BPM research by
identifying the associated potentials and challenges.
"
Revisiting Prompt Engineering via Declarative Crowdsourcing,Aditya G. Parameswaran,http://arxiv.org/pdf/2308.03854v1.pdf,2023-08-07,"['cs.db', 'cs.ai', 'cs.hc', 'cs.lg']",2308.03854v1.pdf,"  Large language models (LLMs) are incredibly powerful at comprehending and
generating data in the form of text, but are brittle and error-prone. There has
been an advent of toolkits and recipes centered around so-called prompt
engineering-the process of asking an LLM to do something via a series of
prompts. However, for LLM-powered data processing workflows, in particular,
optimizing for quality, while keeping cost bounded, is a tedious, manual
process. We put forth a vision for declarative prompt engineering. We view LLMs
like crowd workers and leverage ideas from the declarative crowdsourcing
literature-including leveraging multiple prompting strategies, ensuring
internal consistency, and exploring hybrid-LLM-non-LLM approaches-to make
prompt engineering a more principled process. Preliminary case studies on
sorting, entity resolution, and imputation demonstrate the promise of our
approach.
"
How understanding large language models can inform their use in physics  education,Giulia Polverini,http://arxiv.org/pdf/2309.12074v1.pdf,2023-09-21,['physics.ed-ph'],2309.12074v1.pdf,"  The paper aims to fulfil three main functions: (1) to serve as an
introduction for the physics education community to the functioning of Large
Language Models (LLMs), (2) to present a series of illustrative examples
demonstrating how prompt-engineering techniques can impact LLM performance on
conceptual physics tasks and (3) to discuss potential implications of the
understanding of LLMs and prompt engineering for physics teaching and learning.
We first summarise existing research on the performance of a popular LLM-based
chatbot (ChatGPT) on physics tasks. We then give a basic account of how LLMs
work, illustrate essential features of their functioning, and discuss their
strengths and limitations. Equipped with this knowledge, we discuss some
challenges with generating useful output with ChatGPT-4 in the context of
introductory physics, paying special attention to conceptual questions and
problems. We then provide a condensed overview of relevant literature on prompt
engineering and demonstrate through illustrative examples how selected
prompt-engineering techniques can be employed to improve ChatGPT-4's output on
conceptual introductory physics problems. Qualitatively studying these examples
provides additional insights into ChatGPT's functioning and its utility in
physics problem solving. Finally, we consider how insights from the paper can
inform the use of LLMs in the teaching and learning of physics.
"
Data-Driven Approach for Formality-Sensitive Machine Translation:  Language-Specific Handling and Synthetic Data Generation,Seugnjun Lee,http://arxiv.org/pdf/2306.14514v2.pdf,2023-06-26,"['cs.cl', 'cs.ai']",2306.14514v2.pdf,"  In this paper, we introduce a data-driven approach for Formality-Sensitive
Machine Translation (FSMT) that caters to the unique linguistic properties of
four target languages. Our methodology centers on two core strategies: 1)
language-specific data handling, and 2) synthetic data generation using
large-scale language models and empirical prompt engineering. This approach
demonstrates a considerable improvement over the baseline, highlighting the
effectiveness of data-centric techniques. Our prompt engineering strategy
further improves performance by producing superior synthetic translation
examples.
"
Exploring the Intersection of Large Language Models and Agent-Based  Modeling via Prompt Engineering,Edward Junprung,http://arxiv.org/pdf/2308.07411v1.pdf,2023-08-14,"['cs.ai', 'cs.ma']",2308.07411v1.pdf,"  The final frontier for simulation is the accurate representation of complex,
real-world social systems. While agent-based modeling (ABM) seeks to study the
behavior and interactions of agents within a larger system, it is unable to
faithfully capture the full complexity of human-driven behavior. Large language
models (LLMs), like ChatGPT, have emerged as a potential solution to this
bottleneck by enabling researchers to explore human-driven interactions in
previously unimaginable ways. Our research investigates simulations of human
interactions using LLMs. Through prompt engineering, inspired by Park et al.
(2023), we present two simulations of believable proxies of human behavior: a
two-agent negotiation and a six-agent murder mystery game.
"
Large Language Models Are Human-Level Prompt Engineers,Yongchao Zhou,http://arxiv.org/pdf/2211.01910v2.pdf,2022-11-03,"['cs.lg', 'cs.ai', 'cs.cl']",2211.01910v2.pdf,"  By conditioning on natural language instructions, large language models
(LLMs) have displayed impressive capabilities as general-purpose computers.
However, task performance depends significantly on the quality of the prompt
used to steer the model, and most effective prompts have been handcrafted by
humans. Inspired by classical program synthesis and the human approach to
prompt engineering, we propose Automatic Prompt Engineer (APE) for automatic
instruction generation and selection. In our method, we treat the instruction
as the ""program,"" optimized by searching over a pool of instruction candidates
proposed by an LLM in order to maximize a chosen score function. To evaluate
the quality of the selected instruction, we evaluate the zero-shot performance
of another LLM following the selected instruction. Experiments on 24 NLP tasks
show that our automatically generated instructions outperform the prior LLM
baseline by a large margin and achieve better or comparable performance to the
instructions generated by human annotators on 19/24 tasks. We conduct extensive
qualitative and quantitative analyses to explore the performance of APE. We
show that APE-engineered prompts can be applied to steer models toward
truthfulness and/or informativeness, as well as to improve few-shot learning
performance by simply prepending them to standard in-context learning prompts.
Please check out our webpage at
https://sites.google.com/view/automatic-prompt-engineer.
"
Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate  Fairytales,Martin Ruskov,http://arxiv.org/pdf/2302.08961v2.pdf,2023-02-17,"['cs.cl', 'cs.ai', 'cs.hc', 'i.2']",2302.08961v2.pdf,"  The quality of text-to-image generation is continuously improving, yet the
boundaries of its applicability are still unclear. In particular, refinement of
the text input with the objective of achieving better results - commonly called
prompt engineering - so far seems not to have been geared towards work with
pre-existing texts. We investigate whether text-to-image generation and prompt
engineering could be used to generate basic illustrations of popular
fairytales. Using Midjourney v4, we engage in action research with a dual aim:
to attempt to generate 5 believable illustrations for each of 5 popular
fairytales, and to define a prompt engineering process that starts from a
pre-existing text and arrives at an illustration of it. We arrive at a
tentative 4-stage process: i) initial prompt, ii) composition adjustment, iii)
style refinement, and iv) variation selection. We also discuss three reasons
why the generation model struggles with certain illustrations: difficulties
with counts, bias from stereotypical configurations and inability to depict
overly fantastic situations. Our findings are not limited to the specific
generation model and are intended to be generalisable to future ones.
"
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT,Jules White,http://arxiv.org/pdf/2302.11382v1.pdf,2023-02-21,"['cs.se', 'cs.ai']",2302.11382v1.pdf,"  Prompt engineering is an increasingly important skill set needed to converse
effectively with large language models (LLMs), such as ChatGPT. Prompts are
instructions given to an LLM to enforce rules, automate processes, and ensure
specific qualities (and quantities) of generated output. Prompts are also a
form of programming that can customize the outputs and interactions with an
LLM. This paper describes a catalog of prompt engineering techniques presented
in pattern form that have been applied to solve common problems when conversing
with LLMs. Prompt patterns are a knowledge transfer method analogous to
software patterns since they provide reusable solutions to common problems
faced in a particular context, i.e., output generation and interaction when
working with LLMs. This paper provides the following contributions to research
on prompt engineering that apply LLMs to automate software development tasks.
First, it provides a framework for documenting patterns for structuring prompts
to solve a range of problems so that they can be adapted to different domains.
Second, it presents a catalog of patterns that have been applied successfully
to improve the outputs of LLM conversations. Third, it explains how prompts can
be built from multiple patterns and illustrates prompt patterns that benefit
from combination with other prompt patterns.
"
Prompt Space Optimizing Few-shot Reasoning Success with Large Language  Models,Fobo Shi,http://arxiv.org/pdf/2306.03799v1.pdf,2023-06-06,['cs.cl'],2306.03799v1.pdf,"  Prompt engineering is an essential technique for enhancing the abilities of
large language models (LLMs) by providing explicit and specific instructions.
It enables LLMs to excel in various tasks, such as arithmetic reasoning,
question answering, summarization, relation extraction, machine translation,
and sentiment analysis. Researchers have been actively exploring different
prompt engineering strategies, such as Chain of Thought (CoT), Zero-CoT, and
In-context learning. However, an unresolved problem arises from the fact that
current approaches lack a solid theoretical foundation for determining optimal
prompts. To address this issue in prompt engineering, we propose a new and
effective approach called Prompt Space. Our methodology utilizes text
embeddings to obtain basis vectors by matrix decomposition, and then constructs
a space for representing all prompts. Prompt Space significantly outperforms
state-of-the-art prompt paradigms on ten public reasoning benchmarks. Notably,
without the help of the CoT method and the prompt ""Let's think step by step"",
Prompt Space shows superior performance over the few-shot method. Overall, our
approach provides a robust and fundamental theoretical framework for selecting
simple and effective prompts. This advancement marks a significant step towards
improving prompt engineering for a wide variety of applications in LLMs.
"
An Empirical Evaluation of Prompting Strategies for Large Language  Models in Zero-Shot Clinical Natural Language Processing,Sonish Sivarajkumar,http://arxiv.org/pdf/2309.08008v1.pdf,2023-09-14,"['cs.cl', 'cs.ai']",2309.08008v1.pdf,"  Large language models (LLMs) have shown remarkable capabilities in Natural
Language Processing (NLP), especially in domains where labeled data is scarce
or expensive, such as clinical domain. However, to unlock the clinical
knowledge hidden in these LLMs, we need to design effective prompts that can
guide them to perform specific clinical NLP tasks without any task-specific
training data. This is known as in-context learning, which is an art and
science that requires understanding the strengths and weaknesses of different
LLMs and prompt engineering approaches. In this paper, we present a
comprehensive and systematic experimental study on prompt engineering for five
clinical NLP tasks: Clinical Sense Disambiguation, Biomedical Evidence
Extraction, Coreference Resolution, Medication Status Extraction, and
Medication Attribute Extraction. We assessed the prompts proposed in recent
literature, including simple prefix, simple cloze, chain of thought, and
anticipatory prompts, and introduced two new types of prompts, namely heuristic
prompting and ensemble prompting. We evaluated the performance of these prompts
on three state-of-the-art LLMs: GPT-3.5, BARD, and LLAMA2. We also contrasted
zero-shot prompting with few-shot prompting, and provide novel insights and
guidelines for prompt engineering for LLMs in clinical NLP. To the best of our
knowledge, this is one of the first works on the empirical evaluation of
different prompt engineering approaches for clinical NLP in this era of
generative AI, and we hope that it will inspire and inform future research in
this area.
"
Prompt Engineering or Fine Tuning: An Empirical Assessment of Large  Language Models in Automated Software Engineering Tasks,Jiho Shin,http://arxiv.org/pdf/2310.10508v1.pdf,2023-10-11,['cs.se'],2310.10508v1.pdf,"  In this paper, we investigate the effectiveness of state-of-the-art LLM,
i.e., GPT-4, with three different prompt engineering techniques (i.e., basic
prompting, in-context learning, and task-specific prompting) against 18
fine-tuned LLMs on three typical ASE tasks, i.e., code generation, code
summarization, and code translation. Our quantitative analysis of these
prompting strategies suggests that prompt engineering GPT-4 does not
necessarily and significantly outperform fine-tuning smaller/older LLMs in all
three tasks. For comment generation, GPT-4 with the best prompting strategy
(i.e., the task-specific prompt) outperformed the first-ranked fine-tuned model
by 8.33 percentage points on average in BLEU. However, for code generation, the
first-ranked fine-tuned model outperforms GPT-4 with the best prompting by
16.61 and 28.3 percentage points, on average, in BLEU. For code translation,
GPT-4 and fine-tuned
baselines tie as they outperform each other on different translation tasks. To
explore the impact of different prompting strategies, we conducted a user study
with 27 graduate students and 10 industry practitioners. From our qualitative
analysis, we find that the GPT-4 with conversational prompts (i.e., when a
human provides feedback and instructions back and forth with a model to achieve
best results) showed drastic improvement compared to GPT-4 with automatic
prompting strategies. Moreover, we observe that participants tend to request
improvements, add more context, or give specific instructions as conversational
prompts, which goes beyond typical and generic prompting strategies. Our study
suggests that, in its current state, GPT-4 with conversational prompting has
great potential for ASE tasks, but fully automated prompt engineering with no
human in the loop requires more study and improvement.
"
An Information-theoretic Approach to Prompt Engineering Without Ground  Truth Labels,Taylor Sorensen,http://arxiv.org/pdf/2203.11364v1.pdf,2022-03-21,"['cs.cl', 'cs.lg']",2203.11364v1.pdf,"  Pre-trained language models derive substantial linguistic and factual
knowledge from the massive corpora on which they are trained, and prompt
engineering seeks to align these models to specific tasks. Unfortunately,
existing prompt engineering methods require significant amounts of labeled
data, access to model parameters, or both. We introduce a new method for
selecting prompt templates \textit{without labeled examples} and
\textit{without direct access to the model}. Specifically, over a set of
candidate templates, we choose the template that maximizes the mutual
information between the input and the corresponding model output. Across 8
datasets representing 7 distinct NLP tasks, we show that when a template has
high mutual information, it also has high accuracy on the task. On the largest
model, selecting prompts with our method gets 90\% of the way from the average
prompt accuracy to the best prompt accuracy and requires no ground truth
labels.
"
Unsupervised Prompt Learning for Vision-Language Models,Tony Huang,http://arxiv.org/pdf/2204.03649v2.pdf,2022-04-07,['cs.cv'],2204.03649v2.pdf,"  Contrastive vision-language models like CLIP have shown great progress in
transfer learning. In the inference stage, the proper text description, also
known as prompt, needs to be carefully designed to correctly classify the given
images. In order to avoid laborious prompt engineering, recent works such as
CoOp, CLIP-Adapter and Tip-Adapter propose to adapt vision-language models for
downstream image recognition tasks on a small set of labeled data. Though
promising improvements are achieved, requiring labeled data from the target
datasets may restrict the scalability. In this paper, we explore a different
scenario, in which the labels of the target datasets are unprovided, and we
present an unsupervised prompt learning (UPL) approach to avoid prompt
engineering while simultaneously improving transfer performance of CLIP-like
vision-language models. As far as we know, UPL is the first work to introduce
unsupervised learning into prompt learning. Experimentally, our UPL outperforms
original CLIP with prompt engineering on ImageNet as well as other 10 datasets.
An enhanced version of UPL is even competitive with the 8-shot CoOp and the
8-shot TIP-Adapter on most datasets. Code and models are available at
https://github.com/tonyhuang2022/UPL.
"
ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis  Testing,Ian Arawjo,http://arxiv.org/pdf/2309.09128v1.pdf,2023-09-17,"['cs.hc', 'cs.ai', 'h.5.2; i.2']",2309.09128v1.pdf,"  Evaluating outputs of large language models (LLMs) is challenging, requiring
making -- and making sense of -- many responses. Yet tools that go beyond basic
prompting tend to require knowledge of programming APIs, focus on narrow
domains, or are closed-source. We present ChainForge, an open-source visual
toolkit for prompt engineering and on-demand hypothesis testing of text
generation LLMs. ChainForge provides a graphical interface for comparison of
responses across models and prompt variations. Our system was designed to
support three tasks: model selection, prompt template design, and hypothesis
testing (e.g., auditing). We released ChainForge early in its development and
iterated on its design with academics and online users. Through in-lab and
interview studies, we find that a range of people could use ChainForge to
investigate hypotheses that matter to them, including in real-world settings.
We identify three modes of prompt engineering and LLM hypothesis testing:
opportunistic exploration, limited evaluation, and iterative refinement.
"
CoPrompt: Supporting Prompt Sharing and Referring in Collaborative  Natural Language Programming,Felicia Li Feng,http://arxiv.org/pdf/2310.09235v1.pdf,2023-10-13,['cs.hc'],2310.09235v1.pdf,"  Natural language (NL) programming has become more approachable due to the
powerful code-generation capability of large language models (LLMs). This shift
to using NL to program enhances collaborative programming by reducing
communication barriers and context-switching among programmers from varying
backgrounds. However, programmers may face challenges during prompt engineering
in a collaborative setting as they need to actively keep aware of their
collaborators' progress and intents. In this paper, we aim to investigate ways
to assist programmers' prompt engineering in a collaborative context. We first
conducted a formative study to understand the workflows and challenges of
programmers when using NL for collaborative programming. Based on our findings,
we implemented a prototype, CoPrompt, to support collaborative prompt
engineering by providing referring, requesting, sharing, and linking
mechanisms. Our user study indicates that CoPrompt assists programmers in
comprehending collaborators' prompts and building on their collaborators' work,
reducing repetitive updates and communication costs.
"
Prompt-Engineering and Transformer-based Question Generation and  Evaluation,Rubaba Amyeen,http://arxiv.org/pdf/2310.18867v1.pdf,2023-10-29,"['cs.cl', 'cs.ai']",2310.18867v1.pdf,"  Question generation has numerous applications in the educational context.
Question generation can prove helpful for students when reviewing content and
testing themselves. Furthermore, a question generation model can aid teachers
by lessening the burden of creating assessments and other practice material.
This paper aims to find the best method to generate questions from textual data
through a transformer model and prompt engineering. In this research, we
finetuned a pretrained distilBERT model on the SQuAD question answering dataset
to generate questions. In addition to training a transformer model, prompt
engineering was applied to generate questions effectively using the LLaMA
model. The generated questions were compared against the baseline questions in
the SQuAD dataset to evaluate the effectiveness of four different prompts. All
four prompts demonstrated over 60% similarity on average. Of the
prompt-generated questions, 30% achieved a high similarity score greater than
70%.
"
A Simple Zero-shot Prompt Weighting Technique to Improve Prompt  Ensembling in Text-Image Models,James Urquhart Allingham,http://arxiv.org/pdf/2302.06235v2.pdf,2023-02-13,"['cs.lg', 'cs.cv', 'stat.ml']",2302.06235v2.pdf,"  Contrastively trained text-image models have the remarkable ability to
perform zero-shot classification, that is, classifying previously unseen images
into categories that the model has never been explicitly trained to identify.
However, these zero-shot classifiers need prompt engineering to achieve high
accuracy. Prompt engineering typically requires hand-crafting a set of prompts
for individual downstream tasks. In this work, we aim to automate this prompt
engineering and improve zero-shot accuracy through prompt ensembling. In
particular, we ask ""Given a large pool of prompts, can we automatically score
the prompts and ensemble those that are most suitable for a particular
downstream dataset, without needing access to labeled validation data?"". We
demonstrate that this is possible. In doing so, we identify several pathologies
in a naive prompt scoring method where the score can be easily overconfident
due to biases in pre-training and test data, and we propose a novel prompt
scoring method that corrects for the biases. Using our proposed scoring method
to create a weighted average prompt ensemble, our method outperforms equal
average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its
variants, and 11 fine-grained classification benchmarks, all while being fully
automatic, optimization-free, and not requiring access to labeled validation
data.
"
Large Language Models in the Workplace: A Case Study on Prompt  Engineering for Job Type Classification,Benjamin Clavié,http://arxiv.org/pdf/2303.07142v3.pdf,2023-03-13,['cs.cl'],2303.07142v3.pdf,"  This case study investigates the task of job classification in a real-world
setting, where the goal is to determine whether an English-language job posting
is appropriate for a graduate or entry-level position. We explore multiple
approaches to text classification, including supervised approaches such as
traditional models like Support Vector Machines (SVMs) and state-of-the-art
deep learning methods such as DeBERTa. We compare them with Large Language
Models (LLMs) used in both few-shot and zero-shot classification settings. To
accomplish this task, we employ prompt engineering, a technique that involves
designing prompts to guide the LLMs towards the desired output. Specifically,
we evaluate the performance of two commercially available state-of-the-art
GPT-3.5-based language models, text-davinci-003 and gpt-3.5-turbo. We also
conduct a detailed analysis of the impact of different aspects of prompt
engineering on the model's performance. Our results show that, with a
well-designed prompt, a zero-shot gpt-3.5-turbo classifier outperforms all
other models, achieving a 6% increase in Precision@95% Recall compared to the
best supervised approach. Furthermore, we observe that the wording of the
prompt is a critical factor in eliciting the appropriate ""reasoning"" in the
model, and that seemingly minor aspects of the prompt significantly affect the
model's performance.
"
Simulating H.P. Lovecraft horror literature with the ChatGPT large  language model,Eduardo C. Garrido-Merchán,http://arxiv.org/pdf/2305.03429v1.pdf,2023-05-05,['cs.cl'],2305.03429v1.pdf,"  In this paper, we present a novel approach to simulating H.P. Lovecraft's
horror literature using the ChatGPT large language model, specifically the
GPT-4 architecture. Our study aims to generate text that emulates Lovecraft's
unique writing style and themes, while also examining the effectiveness of
prompt engineering techniques in guiding the model's output. To achieve this,
we curated a prompt containing several specialized literature references and
employed advanced prompt engineering methods. We conducted an empirical
evaluation of the generated text by administering a survey to a sample of
undergraduate students. Utilizing statistical hypothesis testing, we assessed
the students' ability to distinguish between genuine Lovecraft works and those
generated by our model. Our findings demonstrate that the participants were
unable to reliably differentiate between the two, indicating the effectiveness
of the GPT-4 model and our prompt engineering techniques in emulating
Lovecraft's literary style. In addition to presenting the GPT model's
capabilities, this paper provides a comprehensive description of its underlying
architecture and offers a comparative analysis with related work that simulates
other notable authors and philosophers, such as Dennett. By exploring the
potential of large language models in the context of literary emulation, our
study contributes to the body of research on the applications and limitations
of these models in various creative domains.
"
CXR-LLaVA: Multimodal Large Language Model for Interpreting Chest X-ray  Images,Seowoo Lee,http://arxiv.org/pdf/2310.18341v2.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.18341v2.pdf,"  Purpose: Recent advancements in large language models (LLMs) have expanded
their capabilities in a multimodal fashion, potentially replicating the image
interpretation of human radiologists. This study aimed to develop open-source
multimodal large language model for interpreting chest X-ray images
(CXR-LLaVA). We also examined the effect of prompt engineering and model
parameters such as temperature and nucleus sampling.
  Materials and Methods: For training, we collected 659,287 publicly available
CXRs: 417,336 CXRs had labels for certain radiographic abnormalities (dataset
1); 241,951 CXRs provided free-text radiology reports (dataset 2). After
pre-training the Resnet50 as an image encoder, the contrastive language-image
pre-training was used to align CXRs and corresponding radiographic
abnormalities. Then, the Large Language Model Meta AI-2 was fine-tuned using
dataset 2, which was refined using GPT-4 to generate various question-answering
scenarios. The code can be found at
https://github.com/ECOFRI/CXR_LLaVA.
  Results: In the test set, we observed that the model's performance fluctuated
based on its parameters. On average, it achieved F1 score of 0.34 for five
pathologic findings (atelectasis, cardiomegaly, consolidation, edema, and
pleural effusion), which was improved to 0.46 through prompt engineering. In
the independent set, the model achieved an average F1 score of 0.30 for the
same pathologic findings. Notably, for the pediatric chest radiograph dataset,
which was unseen during training, the model differentiated abnormal radiographs
with an F1 score ranging from 0.84 to 0.85.
  Conclusion: CXR-LLaVA demonstrates promising potential in CXR interpretation.
Both prompt engineering and model parameter adjustments can play pivotal roles
in interpreting CXRs.
"
A Taxonomy of Prompt Modifiers for Text-To-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2204.13988v3.pdf,2022-04-20,"['cs.mm', 'cs.cl', 'cs.hc', 'h.5; h.m; j.5']",2204.13988v3.pdf,"  Text-to-image generation has seen an explosion of interest since 2021. Today,
beautiful and intriguing digital images and artworks can be synthesized from
textual inputs (""prompts"") with deep generative models. Online communities
around text-to-image generation and AI generated art have quickly emerged. This
paper identifies six types of prompt modifiers used by practitioners in the
online community based on a 3-month ethnographic study. The novel taxonomy of
prompt modifiers provides researchers a conceptual starting point for
investigating the practice of text-to-image generation, but may also help
practitioners of AI generated art improve their images. We further outline how
prompt modifiers are applied in the practice of ""prompt engineering."" We
discuss research opportunities of this novel creative practice in the field of
Human-Computer Interaction (HCI). The paper concludes with a discussion of
broader implications of prompt engineering from the perspective of Human-AI
Interaction (HAI) in future applications beyond the use case of text-to-image
generation and AI generated art.
"
What GPT Knows About Who is Who,Xiaohan Yang,http://arxiv.org/pdf/2205.07407v1.pdf,2022-05-16,"['cs.cl', 'cs.lg']",2205.07407v1.pdf,"  Coreference resolution -- which is a crucial task for understanding discourse
and language at large -- has yet to witness widespread benefits from large
language models (LLMs). Moreover, coreference resolution systems largely rely
on supervised labels, which are highly expensive and difficult to annotate,
thus making it ripe for prompt engineering. In this paper, we introduce a
QA-based prompt-engineering method and discern generative, pre-trained
LLMs' abilities and limitations toward the task of coreference resolution. Our
experiments show that GPT-2 and GPT-Neo can return valid answers, but that
their capabilities to identify coreferent mentions are limited and
prompt-sensitive, leading to inconsistent results.
"
Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements,Conrad Borchers,http://arxiv.org/pdf/2205.11374v1.pdf,2022-05-23,"['cs.cl', 'cs.ai']",2205.11374v1.pdf,"  The growing capability and availability of generative language models has
enabled a wide range of new downstream tasks. Academic research has identified,
quantified and mitigated biases present in language models but is rarely
tailored to downstream tasks where wider impact on individuals and society can
be felt. In this work, we leverage one popular generative language model,
GPT-3, with the goal of writing unbiased and realistic job advertisements. We
first assess the bias and realism of zero-shot generated advertisements and
compare them to real-world advertisements. We then evaluate prompt-engineering
and fine-tuning as debiasing methods. We find that prompt-engineering with
diversity-encouraging prompts yields no significant improvement in either bias or
realism. Conversely, fine-tuning, especially on unbiased real advertisements,
can improve realism and reduce bias.
"
Arguments to Key Points Mapping with Prompt-based Learning,Ahnaf Mozib Samin,http://arxiv.org/pdf/2211.14995v1.pdf,2022-11-28,['cs.cl'],2211.14995v1.pdf,"  Handling and digesting a huge amount of information in an efficient manner
has been a long-term demand in modern society. Some solutions to map key points
(short textual summaries capturing essential information and filtering
redundancies) to a large number of arguments/opinions have been provided
recently (Bar-Haim et al., 2020). To complement the full picture of the
argument-to-keypoint mapping task, we mainly propose two approaches in this
paper. The first approach is to incorporate prompt engineering for fine-tuning
the pre-trained language models (PLMs). The second approach utilizes
prompt-based learning in PLMs to generate intermediary texts, which are then
combined with the original argument-keypoint pairs and fed as inputs to a
classifier, thereby mapping them. Furthermore, we extend the experiments to
cross/in-domain to conduct an in-depth analysis. In our evaluation, we find
that i) using prompt engineering in a more direct way (Approach 1) can yield
promising results and improve the performance; ii) Approach 2 performs
considerably worse than Approach 1 due to the negation issue of the PLM.
"
Legal Prompt Engineering for Multilingual Legal Judgement Prediction,Dietrich Trautmann,http://arxiv.org/pdf/2212.02199v1.pdf,2022-12-05,"['cs.cl', 'cs.ai']",2212.02199v1.pdf,"  Legal Prompt Engineering (LPE) or Legal Prompting is a process to guide and
assist a large language model (LLM) with performing a natural legal language
processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal
documents for the Legal Judgement Prediction (LJP) task. We investigate the
performance of zero-shot LPE for given facts in case-texts from the European
Court of Human Rights (in English) and the Federal Supreme Court of Switzerland
(in German, French and Italian). Our results show that zero-shot LPE performs
better than the baselines, but it still falls short of current state-of-the-art
supervised approaches. Nevertheless, the results are important, since 1) no
explicit domain-specific data was used, showing that transfer to the legal
domain is possible for general-purpose LLMs, and 2) the LLMs were applied
directly without any further training or fine-tuning, which in turn saves
immensely in terms of additional computational costs.
"
The Infinite Index: Information Retrieval on Generative Text-To-Image  Models,Niklas Deckers,http://arxiv.org/pdf/2212.07476v2.pdf,2022-12-14,"['cs.ir', 'cs.cl', 'cs.cv']",2212.07476v2.pdf,"  Conditional generative models such as DALL-E and Stable Diffusion generate
images based on a user-defined text, the prompt. Finding and refining prompts
that produce a desired image has become the art of prompt engineering.
Generative models do not provide a built-in retrieval model for a user's
information need expressed through prompts. In light of an extensive literature
review, we reframe prompt engineering for generative models as interactive
text-based retrieval on a novel kind of ""infinite index"". We apply these
insights for the first time in a case study on image generation for game design
with an expert. Finally, we envision how active learning may help to guide the
retrieval of generated images.
"
"Artificial Intelligence for Health Message Generation: Theory, Method,  and an Empirical Study Using Prompt Engineering",Sue Lim,http://arxiv.org/pdf/2212.07507v1.pdf,2022-12-14,['cs.cl'],2212.07507v1.pdf,"  This study introduces and examines the potential of an AI system to generate
health awareness messages. The topic of folic acid, a vitamin that is critical
during pregnancy, served as a test case. Using prompt engineering, we generated
messages that could be used to raise awareness and compared them to retweeted
human-generated messages via computational and human evaluation methods. The
system was easy to use and prolific, and computational analyses revealed that
the AI-generated messages were on par with human-generated ones in terms of
sentiment, reading ease, and semantic content. Also, the human evaluation study
showed that AI-generated messages ranked higher in message quality and clarity.
We discuss the theoretical, practical, and ethical implications of these
results.
"
What does CLIP know about a red circle? Visual prompt engineering for  VLMs,Aleksandar Shtedritski,http://arxiv.org/pdf/2304.06712v2.pdf,2023-04-13,['cs.cv'],2304.06712v2.pdf,"  Large-scale Vision-Language Models, such as CLIP, learn powerful image-text
representations that have found numerous applications, from zero-shot
classification to text-to-image generation. Despite that, their capabilities
for solving novel discriminative tasks via prompting fall behind those of large
language models, such as GPT-3. Here we explore the idea of visual prompt
engineering for solving computer vision tasks beyond classification by editing
in image space instead of text. In particular, we discover an emergent ability
of CLIP, where, by simply drawing a red circle around an object, we can direct
the model's attention to that region, while also maintaining global
information. We show the power of this simple approach by achieving
state-of-the-art in zero-shot referring expressions comprehension and strong
performance in keypoint localization tasks. Finally, we draw attention to some
potential ethical concerns of large language-vision models.
"
Prompt Engineering for Transformer-based Chemical Similarity Search  Identifies Structurally Distinct Functional Analogues,Clayton W. Kosonocky,http://arxiv.org/pdf/2305.16330v1.pdf,2023-05-17,"['physics.chem-ph', 'cs.lg']",2305.16330v1.pdf,"  Chemical similarity searches are widely used in-silico methods for
identifying new drug-like molecules. These methods have historically relied on
structure-based comparisons to compute molecular similarity. Here, we use a
chemical language model to create a vector-based chemical search. We extend
implementations by creating a prompt engineering strategy that utilizes two
different chemical string representation algorithms: one for the query and the
other for the database. We explore this method by reviewing the search results
from five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine,
lysergic acid diethylamide, and fentanyl) and three dye-like query molecules
(acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that this
novel method identifies molecules that are functionally similar to the query,
indicated by the associated patent literature, and that many of these molecules
are structurally distinct from the query, making them unlikely to be found with
traditional chemical similarity search methods. This method may aid in the
discovery of novel structural classes of molecules that achieve target
functionality.
"
Submodular Minimax Optimization: Finding Effective Sets,Loay Mualem,http://arxiv.org/pdf/2305.16903v1.pdf,2023-05-26,"['cs.lg', 'cs.dm', 'math.oc', '68r05 (primary) 90c26, 90c20, 68t20, 68w40 (secondary)', 'g.2.1; i.2.m; f.2.2']",2305.16903v1.pdf,"  Despite the rich existing literature about minimax optimization in continuous
settings, only very partial results of this kind have been obtained for
combinatorial settings. In this paper, we fill this gap by providing a
characterization of submodular minimax optimization, the problem of finding a
set (for either the min or the max player) that is effective against every
possible response. We show when and under what conditions we can find such
sets. We also demonstrate how minimax submodular optimization provides robust
solutions for downstream machine learning applications such as (i) efficient
prompt engineering for question answering, (ii) prompt engineering for dialog
state tracking, (iii) identifying robust waiting locations for ride-sharing,
(iv) ride-share difficulty kernelization, and (v) finding adversarial images.
Our experiments demonstrate that our proposed algorithms consistently
outperform other baselines.
"
Unsupervised Human Activity Recognition through Two-stage Prompting with  ChatGPT,Qingxin Xia,http://arxiv.org/pdf/2306.02140v1.pdf,2023-06-03,"['cs.hc', 'cs.cl']",2306.02140v1.pdf,"  Wearable sensor devices, which offer the advantage of recording daily objects
used by a person while performing an activity, enable the feasibility of
unsupervised Human Activity Recognition (HAR). Unfortunately, previous
unsupervised approaches using the usage sequence of objects usually require a
proper description of activities manually prepared by humans. Instead, we
leverage the knowledge embedded in a Large Language Model (LLM) of ChatGPT.
Because the sequence of objects robustly characterizes the activity identity,
it is possible that ChatGPT already learned the association between activities
and objects from existing contexts. However, previous prompt engineering for
ChatGPT exhibits limited generalization ability when dealing with a list of
words (i.e., sequence of objects) due to the similar weighting assigned to each
word in the list. In this study, we propose a two-stage prompt engineering approach,
which first guides ChatGPT to generate activity descriptions associated with
objects while emphasizing important objects for distinguishing similar
activities; then outputs activity classes and explanations for enhancing the
contexts that are helpful for HAR. To the best of our knowledge, this is the
first study that utilizes ChatGPT to recognize activities using objects in an
unsupervised manner. We evaluated our approach on three datasets and
demonstrated state-of-the-art performance.
"
User-friendly Image Editing with Minimal Text Input: Leveraging  Captioning and Injection Techniques,Sunwoo Kim,http://arxiv.org/pdf/2306.02717v1.pdf,2023-06-05,['cs.cv'],2306.02717v1.pdf,"  Recent text-driven image editing in diffusion models has shown remarkable
success. However, the existing methods assume that the user's description
sufficiently grounds the contexts in the source image, such as objects,
background, style, and their relations. This assumption is unsuitable for
real-world applications because users have to manually engineer text prompts to
find optimal descriptions for different images. From the users' standpoint,
prompt engineering is a labor-intensive process, and users prefer to provide a
target word for editing instead of a full sentence. To address this problem, we
first demonstrate the importance of a detailed text description of the source
image, by dividing prompts into three categories based on the level of semantic
details. Then, we propose simple yet effective methods by combining prompt
generation frameworks, thereby making the prompt engineering process more
user-friendly. Extensive qualitative and quantitative experiments demonstrate
the importance of prompts in text-driven image editing and our method is
comparable to ground-truth prompts.
"
PromptMagician: Interactive Prompt Engineering for Text-to-Image  Creation,Yingchaojie Feng,http://arxiv.org/pdf/2307.09036v2.pdf,2023-07-18,"['cs.ai', 'cs.hc']",2307.09036v2.pdf,"  Generative text-to-image models have gained great popularity among the public
for their powerful capability to generate high-quality images based on natural
language prompts. However, developing effective prompts for desired images can
be challenging due to the complexity and ambiguity of natural language. This
research proposes PromptMagician, a visual analysis system that helps users
explore the image results and refine the input prompts. The backbone of our
system is a prompt recommendation model that takes user prompts as input,
retrieves similar prompt-image pairs from DiffusionDB, and identifies special
(important and relevant) prompt keywords. To facilitate interactive prompt
refinement, PromptMagician introduces a multi-level visualization for the
cross-modal embedding of the retrieved images and recommended keywords, and
supports users in specifying multiple criteria for personalized exploration.
Two usage scenarios, a user study, and expert interviews demonstrate the
effectiveness and usability of our system, suggesting it facilitates prompt
engineering and improves the creativity support of the generative text-to-image
model.
"
Is GPT a Computational Model of Emotion? Detailed Analysis,Ala N. Tak,http://arxiv.org/pdf/2307.13779v1.pdf,2023-07-25,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.hc']",2307.13779v1.pdf,"  This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective.
"
Prompts Matter: Insights and Strategies for Prompt Engineering in  Automated Software Traceability,Alberto D. Rodriguez,http://arxiv.org/pdf/2308.00229v1.pdf,2023-08-01,['cs.se'],2308.00229v1.pdf,"  Large Language Models (LLMs) have the potential to revolutionize automated
traceability by overcoming the challenges faced by previous methods and
introducing new possibilities. However, the optimal utilization of LLMs for
automated traceability remains unclear. This paper explores the process of
prompt engineering to extract link predictions from an LLM. We provide detailed
insights into our approach for constructing effective prompts, offering our
lessons learned. Additionally, we propose multiple strategies for leveraging
LLMs to generate traceability links, improving upon previous zero-shot methods
on the ranking of candidate links after prompt refinement. The primary
objective of this paper is to inspire and assist future researchers and
engineers by highlighting the process of constructing traceability prompts to
effectively harness LLMs for advancing automatic traceability.
"
CoT-BERT: Enhancing Unsupervised Sentence Representation through  Chain-of-Thought,Bowen Zhang,http://arxiv.org/pdf/2309.11143v1.pdf,2023-09-20,"['cs.cl', 'cs.ai']",2309.11143v1.pdf,"  Unsupervised sentence representation learning aims to transform input
sentences into fixed-length vectors enriched with intricate semantic
information while obviating the reliance on labeled data. Recent progress
within this field, propelled by contrastive learning and prompt engineering,
has significantly bridged the gap between unsupervised and supervised
strategies. Nonetheless, the potential utilization of Chain-of-Thought remains
largely untapped within this trajectory. To unlock latent capabilities within
pre-trained models, such as BERT, we propose a two-stage approach for sentence
representation: comprehension and summarization. Subsequently, the output of
the latter phase is harnessed as the vectorized representation of the input
sentence. For further performance enhancement, we meticulously refine both the
contrastive learning loss function and the template denoising technique for
prompt engineering. Rigorous experimentation substantiates our method,
CoT-BERT, transcending a suite of robust baselines without necessitating other
text representation models or external databases.
"
How does prompt engineering affect ChatGPT performance on unsupervised  entity resolution?,Khanin Sisaengsuwanchai,http://arxiv.org/pdf/2310.06174v1.pdf,2023-10-09,"['cs.ai', 'cs.se']",2310.06174v1.pdf,"  Entity Resolution (ER) is the problem of semi-automatically determining when
two entities refer to the same underlying entity, with applications ranging
from healthcare to e-commerce. Traditional ER solutions required considerable
manual expertise, including feature engineering, as well as identification and
curation of training data. In many instances, such techniques are highly
dependent on the domain. With the recent advent of large language models (LLMs),
there is an opportunity to make ER much more seamless and domain-independent.
However, it is also well known that LLMs can pose risks, and that the quality
of their outputs can depend on so-called prompt engineering. Unfortunately, a
systematic experimental study on the effects of different prompting methods for
addressing ER, using LLMs like ChatGPT, has been lacking thus far. This paper
aims to address this gap by conducting such a study. Although preliminary in
nature, our results show that prompting can significantly affect the quality of
ER, although it affects some metrics more than others, and can also be dataset
dependent.
"
Interactive Task Planning with Language Models,Boyi Li,http://arxiv.org/pdf/2310.10645v1.pdf,2023-10-16,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.hc']",2310.10645v1.pdf,"  An interactive robot framework accomplishes long-horizon task planning and
can easily generalize to new goals or distinct tasks, even during execution.
However, most traditional methods require predefined module design, which makes
it hard to generalize to different goals. Recent large language model based
approaches can allow for more open-ended planning but often require heavy
prompt engineering or domain-specific pretrained models. To tackle this, we
propose a simple framework that achieves interactive task planning with
language models. Our system incorporates both high-level planning and low-level
function execution via language. We verify the robustness of our system in
generating novel high-level instructions for unseen objectives and its ease of
adaptation to different tasks by merely substituting the task guidelines,
without the need for additional complex prompt engineering. Furthermore, when
the user sends a new request, our system is able to replan accordingly with
precision based on the new request, task guidelines and previously executed
steps. More details are available at https://wuphilipp.github.io/itp_site
and https://youtu.be/TrKLuyv26_g.
"
Prompt Engineering Through the Lens of Optimal Control,Yifan Luo,http://arxiv.org/pdf/2310.14201v2.pdf,2023-10-22,"['cs.lg', 'math.oc']",2310.14201v2.pdf,"  Prompt Engineering (PE) has emerged as a critical technique for guiding Large
Language Models (LLMs) in solving intricate tasks. Its importance is
highlighted by its potential to significantly enhance the efficiency and
effectiveness of human-machine interaction. As tasks grow increasingly complex,
recent advanced PE methods have extended beyond the limitations of single-round
interactions to embrace multi-round interactions, which allows for a deeper and
more nuanced engagement with LLMs. In this paper, we propose an optimal control
framework tailored for multi-round interactions with LLMs. This framework
provides a unified mathematical structure that not only systematizes the
existing PE methods but also sets the stage for rigorous analytical
improvements. Furthermore, we extend this framework to include PE via ensemble
methods and multi-agent collaboration, thereby enlarging the scope of
applicability. By adopting an optimal control perspective, we offer fresh
insights into existing PE methods and highlight theoretical challenges that
warrant future research. Besides, our work lays a foundation for the
development of more effective and interpretable PE methods.
"
A Communication Theory Perspective on Prompting Engineering Methods for  Large Language Models,Yuanfeng Song,http://arxiv.org/pdf/2310.18358v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.18358v1.pdf,"  The springing up of Large Language Models (LLMs) has shifted the community
from single-task-orientated natural language processing (NLP) research to a
holistic end-to-end multi-task learning paradigm. Along this line of research
endeavors in the area, LLM-based prompting methods have attracted much
attention, partially due to the technological advantages brought by prompt
engineering (PE) as well as the underlying NLP principles disclosed by various
prompting methods. Traditional supervised learning usually requires training a
model based on labeled data and then making predictions. In contrast, PE
methods directly use the powerful capabilities of existing LLMs (i.e., GPT-3
and GPT-4) via composing appropriate prompts, especially under few-shot or
zero-shot scenarios. Facing the abundance of studies related to prompting
and the ever-evolving nature of this field, this article aims to (i) illustrate
a novel perspective to review existing PE methods, within the well-established
communication theory framework; (ii) facilitate a better/deeper understanding
of developing trends of existing PE methods used in four typical tasks; (iii)
shed light on promising research directions for future PE methods.
"
Apollo: Zero-shot MultiModal Reasoning with Multiple Experts,Daniela Ben-David,http://arxiv.org/pdf/2310.18369v1.pdf,2023-10-25,"['cs.cl', 'cs.ai', 'cs.cv', 'i.2.7; i.5.4']",2310.18369v1.pdf,"  We propose a modular framework that leverages the expertise of different
foundation models over different modalities and domains in order to perform a
single, complex, multi-modal task, without relying on prompt engineering or
otherwise tailor-made multi-modal training. Our approach enables decentralized
command execution and allows each model to both contribute and benefit from the
expertise of the other models. Our method can be extended to a variety of
foundation models (including audio and vision), above and beyond only language
models, as it does not depend on prompts. We demonstrate our approach on two
tasks. On the well-known task of stylized image captioning, our experiments
show that our approach outperforms semi-supervised state-of-the-art models,
while being zero-shot and avoiding costly training, data collection, and prompt
engineering. We further demonstrate this method on a novel task, audio-aware
image captioning, in which an image and audio are given and the task is to
generate text that describes the image within the context of the provided
audio. Our code is available on GitHub.
"
Towards Zero-Shot and Few-Shot Table Question Answering using GPT-3,Pragya Srivastava,http://arxiv.org/pdf/2210.17284v1.pdf,2022-10-31,"['cs.lg', '14j60 (primary)']",2210.17284v1.pdf,"  We present very early results on using GPT-3 to perform question answering on
tabular data. We find that stock pre-trained GPT-3 is able to zero-shot learn
the table structure from a serialized JSON array-of-arrays representation, and
able to answer lookup queries and simple comparison questions in natural
language without any fine-tuning. We further find that simple prompt
engineering to include few-shot static Q&A examples significantly improves
accuracy. Lastly, we find that intermixing passage text improves accuracy even
further on heterogeneous data. We apply our approach on a novel dataset of
simple tables in newspaper infographics with promising results. Overall, we
find much cause for optimism in this basic approach.
"
Investigating Prompt Engineering in Diffusion Models,Sam Witteveen,http://arxiv.org/pdf/2211.15462v1.pdf,2022-11-21,"['cs.cv', 'cs.ai', 'cs.cl']",2211.15462v1.pdf,"  With the spread of the use of Text2Img diffusion models such as DALL-E 2,
Imagen, Mid Journey and Stable Diffusion, one challenge that artists face is
selecting the right prompts to achieve the desired artistic output. We present
techniques for measuring the effect that specific words and phrases in prompts
have, and (in the Appendix) present guidance on the selection of prompts to
produce desired effects.
"
Refining the Responses of LLMs by Themselves,Tianqiang Yan,http://arxiv.org/pdf/2305.04039v1.pdf,2023-05-06,"['cs.cl', 'cs.ai']",2305.04039v1.pdf,"  In this paper, we propose a simple yet efficient approach based on prompt
engineering that leverages the large language model itself to optimize its
answers without relying on auxiliary models. We introduce an iterative
self-evaluating optimization mechanism, with the potential for improved output
quality as iterations progress, removing the need for manual intervention. The
experiment's findings indicate that utilizing our response refinement framework
on the GPT-3.5 model yields results that are on par with, or even surpass,
those generated by the cutting-edge GPT-4 model. Detailed implementation
strategies and illustrative examples are provided to demonstrate the
superiority of our proposed solution.
"
Efficient Black-Box Adversarial Attacks on Neural Text Detectors,Vitalii Fishchuk,http://arxiv.org/pdf/2311.01873v1.pdf,2023-11-03,['cs.cl'],2311.01873v1.pdf,"  Neural text detectors are models trained to detect whether a given text was
generated by a language model or written by a human. In this paper, we
investigate three simple and resource-efficient strategies (parameter tweaking,
prompt engineering, and character-level mutations) to alter texts generated by
GPT-3.5 that are unsuspicious or unnoticeable for humans but cause
misclassification by neural text detectors. The results show that especially
parameter tweaking and character-level mutations are effective strategies.
"
Prompted Software Engineering in the Era of AI Models,Dae-Kyoo Kim,http://arxiv.org/pdf/2311.03359v1.pdf,2023-09-07,['cs.se'],2311.03359v1.pdf,"  This paper introduces prompted software engineering (PSE), which integrates
prompt engineering to build effective prompts for language-based AI models, to
enhance the software development process. PSE enables the use of AI models in
software development to produce high-quality software with fewer resources,
automating tedious tasks and allowing developers to focus on more innovative
aspects. However, effective prompts are necessary to guide software development
in generating accurate, relevant, and useful responses, while mitigating risks
of misleading outputs. This paper describes how productive prompts should be
built throughout the software development cycle.
"
Conversing with Copilot: Exploring Prompt Engineering for Solving CS1  Problems Using Natural Language,Paul Denny,http://arxiv.org/pdf/2210.15157v1.pdf,2022-10-27,"['cs.hc', 'cs.ai']",2210.15157v1.pdf,"  GitHub Copilot is an artificial intelligence model for automatically
generating source code from natural language problem descriptions. Since June
2022, Copilot has officially been available for free to all students as a
plug-in to development environments like Visual Studio Code. Prior work
exploring OpenAI Codex, the underlying model that powers Copilot, has shown it
performs well on typical CS1 problems thus raising concerns about the impact it
will have on how introductory programming courses are taught. However, little
is known about the types of problems for which Copilot does not perform well,
or about the natural language interactions that a student might have with
Copilot when resolving errors. We explore these questions by evaluating the
performance of Copilot on a publicly available dataset of 166 programming
problems. We find that it successfully solves around half of these problems on
its very first attempt, and that it solves 60% of the remaining problems using
only natural language changes to the problem description. We argue that this
type of prompt engineering, which we believe will become a standard interaction
between human and Copilot when it initially fails, is a potentially useful
learning activity that promotes computational thinking skills, and is likely to
change the nature of code writing skill development.
"
ChatGPT4PCG Competition: Character-like Level Generation for Science  Birds,Pittawat Taveekitworachai,http://arxiv.org/pdf/2303.15662v2.pdf,2023-03-28,"['cs.ai', 'cs.cl', 'i.2.7; i.2.8']",2303.15662v2.pdf,"  This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE
Conference on Games. The objective of this competition is for participants to
create effective prompts for ChatGPT--enabling it to generate Science Birds
levels with high stability and character-like qualities--fully using their
creativity as well as prompt engineering skills. ChatGPT is a conversational
agent developed by OpenAI. Science Birds is selected as the competition
platform because designing an Angry Birds-like level is not a trivial task due
to the in-game gravity; the quality of the levels is determined by their
stability. To lower the entry barrier to the competition, we limit the task to
the generation of capitalized English alphabetical characters. We also allow
only a single prompt to be used for generating all the characters. Here, the
quality of the generated levels is determined by their stability and similarity
to the given characters. A sample prompt is provided to participants for their
reference. An experiment is conducted to determine the effectiveness of several
modified versions of this sample prompt on level stability and similarity by
testing them on several characters. To the best of our knowledge, ChatGPT4PCG
is the first competition of its kind, and we hope to inspire
enthusiasm for prompt engineering in procedural content generation.
"
Enhancing Automated Program Repair through Fine-tuning and Prompt  Engineering,Rishov Paul,http://arxiv.org/pdf/2304.07840v2.pdf,2023-04-16,"['cs.lg', 'cs.se']",2304.07840v2.pdf,"  Sequence-to-sequence models have been used to transform erroneous programs
into correct ones when trained with a large enough dataset. Some recent studies
also demonstrated strong empirical evidence that code review could improve the
program repair further. Large language models, trained with Natural Language
(NL) and Programming Language (PL), can contain inherent knowledge of both. In
this study, we investigate if this inherent knowledge of PL and NL can be
utilized to improve automated program repair. We applied PLBART and CodeT5, two
state-of-the-art language models that are pre-trained with both PL and NL, on
two such natural language-based program repair datasets and found that the
pre-trained language models fine-tuned with datasets containing both code
review and subsequent code changes notably outperformed each of the previous
models. With the advent of code generative models like Codex and GPT-3.5-Turbo,
we also performed zero-shot and few-shot learning-based prompt engineering to
assess their performance on these datasets. However, based on our manual
analysis of the repaired code generated by these models, the practical
application of LLMs to automated program repair is still a long way off.
"
Conceptual Design Generation Using Large Language Models,Kevin Ma,http://arxiv.org/pdf/2306.01779v1.pdf,2023-05-30,"['cs.cl', 'cs.ai']",2306.01779v1.pdf,"  Concept generation is a creative step in the conceptual design phase, where
designers often turn to brainstorming, mindmapping, or crowdsourcing design
ideas to complement their own knowledge of the domain. Recent advances in
natural language processing (NLP) and machine learning (ML) have led to the
rise of Large Language Models (LLMs) capable of generating seemingly creative
outputs from textual prompts. The success of these models has led to their
integration and application across a variety of domains, including art,
entertainment, and other creative work. In this paper, we leverage LLMs to
generate solutions for a set of 12 design problems and compare them to a
baseline of crowdsourced solutions. We evaluate the differences between
generated and crowdsourced design solutions through multiple perspectives,
including human expert evaluations and computational metrics. Expert
evaluations indicate that the LLM-generated solutions have higher average
feasibility and usefulness while the crowdsourced solutions have more novelty.
We experiment with prompt engineering and find that leveraging few-shot
learning can lead to the generation of solutions that are more similar to the
crowdsourced solutions. These findings provide insight into the quality of
design solutions generated with LLMs and begin to evaluate prompt engineering
techniques that could be leveraged by practitioners to generate higher-quality
design solutions synergistically with LLMs.
"
Cheap-fake Detection with LLM using Prompt Engineering,Guangyang Wu,http://arxiv.org/pdf/2306.02776v1.pdf,2023-06-05,['cs.cv'],2306.02776v1.pdf,"  The misuse of real photographs with conflicting image captions in news items
is an example of the out-of-context (OOC) misuse of media. In order to detect
OOC media, individuals must determine the accuracy of the statement and
evaluate whether the triplet (i.e., the image and two captions)
relates to the same event. This paper presents a novel learnable approach for
detecting OOC media in ICME'23 Grand Challenge on Detecting Cheapfakes. The
proposed method is based on the COSMOS structure, which assesses the coherence
between an image and captions, as well as between two captions. We enhance the
baseline algorithm by incorporating a Large Language Model (LLM), GPT3.5, as a
feature extractor. Specifically, we propose an innovative approach to feature
extraction utilizing prompt engineering to develop a robust and reliable
feature extractor with GPT3.5 model. The proposed method captures the
correlation between two captions and effectively integrates this module into
the COSMOS baseline model, which allows for a deeper understanding of the
relationship between captions. By incorporating this module, we demonstrate the
potential for significant improvements in cheap-fakes detection performance.
The proposed methodology holds promising implications for various applications
such as natural language processing, image captioning, and text-to-image
synthesis. Docker for submission is available at
https://hub.docker.com/repository/docker/mulns/acmmmcheapfakes.
"
Improving Knowledge Extraction from LLMs for Task Learning through Agent  Analysis,James R. Kirk,http://arxiv.org/pdf/2306.06770v3.pdf,2023-06-11,"['cs.ai', 'cs.hc', 'cs.ro', 'i.2.6; i.2.7']",2306.06770v3.pdf,"  Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
"
ChatGPT for Robotics: Design Principles and Model Abilities,Sai Vemprala,http://arxiv.org/pdf/2306.17582v2.pdf,2023-02-20,"['cs.ai', 'cs.cl', 'cs.hc', 'cs.lg', 'cs.ro']",2306.17582v2.pdf,"  This paper presents an experimental study regarding the use of OpenAI's
ChatGPT for robotics applications. We outline a strategy that combines design
principles for prompt engineering and the creation of a high-level function
library which allows ChatGPT to adapt to different robotics tasks, simulators,
and form factors. We focus our evaluations on the effectiveness of different
prompt engineering techniques and dialog strategies towards the execution of
various types of robotics tasks. We explore ChatGPT's ability to use free-form
dialog, parse XML tags, and to synthesize code, in addition to the use of
task-specific prompting functions and closed-loop reasoning through dialogues.
Our study encompasses a range of tasks within the robotics domain, from basic
logical, geometrical, and mathematical reasoning all the way to complex domains
such as aerial navigation, manipulation, and embodied agents. We show that
ChatGPT can be effective at solving several of such tasks, while allowing users
to interact with it primarily via natural language instructions. In addition to
these studies, we introduce an open-sourced research tool called PromptCraft,
which contains a platform where researchers can collaboratively upload and vote
on examples of good prompting schemes for robotics applications, as well as a
sample robotics simulator with ChatGPT integration, making it easier for users
to get started with using ChatGPT for robotics.
"
Cases of EFL Secondary Students' Prompt Engineering Pathways to Complete  a Writing Task with ChatGPT,David James Woo,http://arxiv.org/pdf/2307.05493v1.pdf,2023-06-19,"['cs.hc', 'cs.ai', 'cs.cl']",2307.05493v1.pdf,"  ChatGPT is a state-of-the-art (SOTA) chatbot. Although it has potential to
support English as a foreign language (EFL) students' writing, to effectively
collaborate with it, a student must learn to engineer prompts, that is, the
skill of crafting appropriate instructions so that ChatGPT produces desired
outputs. However, writing an appropriate prompt for ChatGPT is not
straightforward for non-technical users, who go through a trial-and-error process.
This paper examines the content of EFL students' ChatGPT prompts when
completing a writing task and explores patterns in the quality and quantity of
the prompts. The data come from iPad screen recordings of secondary school EFL
students who used ChatGPT and other SOTA chatbots for the first time to
complete the same writing task. The paper presents a case study of four
distinct pathways that illustrate the trial-and-error process and show
different combinations of prompt content and quantity. The cases contribute
evidence for the need to provide prompt engineering education in the context of
the EFL writing classroom, if students are to move beyond an individual
trial-and-error process, learning a greater variety of prompt content and more
sophisticated prompts to support their writing.
"
"Multi-party Goal Tracking with LLMs: Comparing Pre-training,  Fine-tuning, and Prompt Engineering",Angus Addlesee,http://arxiv.org/pdf/2308.15231v1.pdf,2023-08-29,"['cs.cl', 'cs.hc']",2308.15231v1.pdf,"  This paper evaluates the extent to which current Large Language Models (LLMs)
can capture task-oriented multi-party conversations (MPCs). We have recorded
and transcribed 29 MPCs between patients, their companions, and a social robot
in a hospital. We then annotated this corpus for multi-party goal-tracking and
intent-slot recognition. People share goals, answer each other's goals, and
provide other people's goals in MPCs - none of which occur in dyadic
interactions. To understand user goals in MPCs, we compared three methods in
zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks
to train DialogLM using LED, and employed prompt engineering techniques with
GPT-3.5-turbo, to determine which approach can complete this novel task with
limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot
setting. The `reasoning' style prompt, when given 7% of the corpus as example
annotated conversations, was the best performing method. It correctly annotated
62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition
MPCs. A `story' style prompt increased model hallucination, which could be
detrimental if deployed in safety-critical settings. We conclude that
multi-party conversations still challenge state-of-the-art LLMs.
"
Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation,Dawei Gao,http://arxiv.org/pdf/2308.15363v3.pdf,2023-08-29,"['cs.db', 'cs.cl', 'cs.lg']",2308.15363v3.pdf,"  Large language models (LLMs) have emerged as a new paradigm for Text-to-SQL
task. However, the absence of a systematical benchmark inhibits the development
of designing effective, efficient and economic LLM-based Text-to-SQL solutions.
To address this challenge, in this paper, we first conduct a systematical and
extensive comparison over existing prompt engineering methods, including
question representation, example selection and example organization, and with
these experimental results, we elaborate their pros and cons. Based on these
findings, we propose a new integrated solution, named DAIL-SQL, which refreshes
the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To
explore the potential of open-source LLM, we investigate them in various
scenarios, and further enhance their performance with supervised fine-tuning.
Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well
as the advantages and disadvantages of the supervised fine-tuning.
Additionally, towards an efficient and economic LLM-based Text-to-SQL solution,
we emphasize the token efficiency in prompt engineering and compare the prior
studies under this metric. We hope that our work provides a deeper
understanding of Text-to-SQL with LLMs, and inspires further investigations and
broad applications.
"
PRE: Vision-Language Prompt Learning with Reparameterization Encoder,Anh Pham Thi Minh,http://arxiv.org/pdf/2309.07760v2.pdf,2023-09-14,"['cs.cv', 'cs.ai', 'cs.lg', 'i.4.0']",2309.07760v2.pdf,"  Large pre-trained vision-language models such as CLIP have demonstrated great
potential in zero-shot transferability to downstream tasks. However, to attain
optimal performance, the manual selection of prompts is necessary to improve
alignment between the downstream image distribution and the textual class
descriptions. This manual prompt engineering is the major challenge for
deploying such models in practice since it requires domain expertise and is
extremely time-consuming. To avoid non-trivial prompt engineering, recent work
Context Optimization (CoOp) introduced the concept of prompt learning to the
vision domain using learnable textual tokens. While CoOp can achieve
substantial improvements over manual prompts, its learned context generalizes
worse to wider unseen classes within the same dataset. In this work, we
present Prompt Learning with Reparameterization Encoder (PRE) - a simple and
efficient method that enhances the generalization ability of the learnable
prompt to unseen classes while maintaining the capacity to learn Base classes.
Instead of directly optimizing the prompts, PRE employs a prompt encoder to
reparameterize the input prompt embeddings, enhancing the exploration of
task-specific knowledge from few-shot samples. Experiments and extensive
ablation studies on 8 benchmarks demonstrate that our approach is an efficient
method for prompt learning. Specifically, PRE achieves a notable enhancement of
5.60% in average accuracy on New classes and 3% in Harmonic mean compared to
CoOp in the 16-shot setting, all achieved within a good training time.
"
PEACE: Prompt Engineering Automation for CLIPSeg Enhancement in Aerial  Robotics,Haechan Mark Bong,http://arxiv.org/pdf/2310.00085v1.pdf,2023-09-29,['cs.ro'],2310.00085v1.pdf,"  From industrial to space robotics, safe landing is an essential component for
flight operations. With the growing interest in artificial intelligence, we
direct our attention to learning based safe landing approaches. This paper
extends our previous work, DOVESEI, which focused on a reactive UAV system by
harnessing the capabilities of open vocabulary image segmentation. Prompt-based
safe landing zone segmentation using an open-vocabulary-based model is no longer
just an idea, but was proven feasible by the work on DOVESEI. However,
heuristically selecting words for the prompt is not a reliable solution since it
cannot take the changing environment into consideration and detrimental
consequences can occur if the observed environment is not well represented by
the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation
for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation
and engineering to adapt to data distribution shifts. Our system is capable of
performing safe landing operations with collision avoidance at altitudes as low
as 20 meters using only monocular cameras and image segmentation. We take
advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the
terrain segmentation between frames in a video stream. PEACE shows promising
improvements in prompt generation and engineering for aerial images compared to
the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our
system was able to improve successful safe landing zone selections by 58.62%
compared to using only DOVESEI. All the source code is open source and
available online.
"
Understanding prompt engineering may not require rethinking  generalization,Victor Akinwande,http://arxiv.org/pdf/2310.03957v1.pdf,2023-10-06,"['cs.lg', 'cs.cv']",2310.03957v1.pdf,"  Zero-shot learning in prompted vision-language models, the practice of
crafting prompts to build classifiers without an explicit training process, has
achieved impressive performance in many settings. This success presents a
seemingly surprising observation: these methods suffer relatively little from
overfitting, i.e., when a prompt is manually engineered to achieve low error on
a given training set (thus rendering the method no longer actually zero-shot),
the approach still performs well on held-out test data. In this paper, we show
that we can explain such performance well via recourse to classical PAC-Bayes
bounds. Specifically, we show that the discrete nature of prompts, combined
with a PAC-Bayes prior given by a language model, results in generalization
bounds that are remarkably tight by the standards of the literature: for
instance, the generalization bound of an ImageNet classifier is often within a
few percentage points of the true test error. We demonstrate empirically that
this holds for existing handcrafted prompts and prompts generated through
simple greedy search. Furthermore, the resulting bound is well-suited for model
selection: the models with the best bound typically also have the best test
performance. This work thus provides a possible justification for the
widespread practice of prompt engineering, even if it seems that such methods
could potentially overfit the training data.
"
What's the Magic Word? A Control Theory of LLM Prompting,Aman Bhargava,http://arxiv.org/pdf/2310.04444v2.pdf,2023-10-02,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",2310.04444v2.pdf,"  Prompt engineering is effective and important in the deployment of LLMs but
is poorly understood mathematically. Here, we formalize prompt engineering as
an optimal control problem on LLMs -- where the prompt is considered a control
variable for modulating the output distribution of the LLM. Within this
framework, we ask a simple question: given a sequence of tokens, does there
always exist a prompt we can prepend that will steer the LLM toward accurately
predicting the final token? We call such an optimal prompt the magic word since
prepending the prompt causes the LLM to output the correct answer. If magic
words exist, can we find them? If so, what are their properties? We offer
analytic analysis on the controllability of the self-attention head where we
prove a bound on controllability as a function of the singular values of its
weight matrices. We take inspiration from control theory to propose a metric
called $k-\epsilon$ controllability to characterize LLM steerability. We
compute the $k-\epsilon$ controllability of a panel of large language models,
including Falcon-7b, Llama-7b, and Falcon-40b on 5000 WikiText causal language
modeling tasks. Remarkably, we find that magic words of 10 tokens or less exist
for over 97% of WikiText instances surveyed for each model.
"
Configuration Validation with Large Language Models,Xinyu Lian,http://arxiv.org/pdf/2310.09690v1.pdf,2023-10-15,"['cs.se', 'cs.ai', 'cs.os']",2310.09690v1.pdf,"  Misconfigurations are the major causes of software failures. Existing
configuration validation techniques rely on manually written rules or test
cases, which are expensive to implement and maintain, and hard to make
comprehensive. Leveraging machine learning (ML) and natural language processing
(NLP) for configuration validation is considered a promising direction, but it
faces challenges such as the need for not only large-scale configuration data,
but also system-specific features and models that are hard to generalize.
Recent advances in Large Language Models (LLMs) show promise in addressing some
of the long-standing limitations of ML/NLP-based configuration
validation techniques. In this paper, we present an exploratory analysis on the
feasibility and effectiveness of using LLMs like GPT and Codex for
configuration validation. Specifically, we take a first step to empirically
evaluate LLMs as configuration validators without additional fine-tuning or
code generation. We develop a generic LLM-based validation framework, named
Ciri, which integrates different LLMs. Ciri devises effective prompt
engineering with few-shot learning based on both valid configuration and
misconfiguration data. Ciri also validates and aggregates the outputs of LLMs
to generate validation results, coping with known hallucination and
nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on
five popular LLMs using configuration data of six mature, widely deployed
open-source systems. Our analysis (1) confirms the potential of using LLMs for
configuration validation, (2) characterizes the design space of LLM-based
validators like Ciri, especially in terms of prompt engineering with few-shot
learning, and (3) reveals open challenges such as ineffectiveness in detecting
certain types of misconfigurations and biases to popular configuration
parameters.
"
Learning to Prompt for Vision-Language Models,Kaiyang Zhou,http://arxiv.org/pdf/2109.01134v6.pdf,2021-09-02,"['cs.cv', 'cs.ai', 'cs.lg']",2109.01134v6.pdf,"  Large pre-trained vision-language models like CLIP have shown great potential
in learning representations that are transferable across a wide range of
downstream tasks. Different from the traditional representation learning that
is based mostly on discretized labels, vision-language pre-training aligns
images and texts in a common feature space, which allows zero-shot transfer to
a downstream task via prompting, i.e., classification weights are synthesized
from natural language describing classes of interest. In this work, we show
that a major challenge for deploying such models in practice is prompt
engineering, which requires domain expertise and is extremely time-consuming --
one needs to spend a significant amount of time on word tuning since a slight
change in wording could have a huge impact on performance. Inspired by recent
advances in prompt learning research in natural language processing (NLP), we
propose Context Optimization (CoOp), a simple approach specifically for
adapting CLIP-like vision-language models for downstream image recognition.
Concretely, CoOp models a prompt's context words with learnable vectors while
the entire pre-trained parameters are kept fixed. To handle different image
recognition tasks, we provide two implementations of CoOp: unified context and
class-specific context. Through extensive experiments on 11 datasets, we
demonstrate that CoOp requires as few as one or two shots to beat hand-crafted
prompts with a decent margin and is able to gain significant improvements over
prompt engineering with more shots, e.g., with 16 shots the average gain is
around 15% (with the highest reaching over 45%). Despite being a learning-based
approach, CoOp achieves superb domain generalization performance compared with
the zero-shot model using hand-crafted prompts.
"
"Prompt-Free Diffusion: Taking ""Text"" out of Text-to-Image Diffusion  Models",Xingqian Xu,http://arxiv.org/pdf/2305.16223v2.pdf,2023-05-25,['cs.cv'],2305.16223v2.pdf,"  Text-to-image (T2I) research has grown explosively in the past year, owing to
the large-scale pre-trained diffusion models and many emerging personalization
and editing approaches. Yet, one pain point persists: the text prompt
engineering, and searching high-quality text prompts for customized results is
more art than science. Moreover, as commonly argued: ""an image is worth a
thousand words"" - the attempt to describe a desired image with texts often ends
up being ambiguous and cannot comprehensively cover delicate visual details,
hence necessitating more additional controls from the visual domain. In this
paper, we take a bold step forward: taking ""Text"" out of a pre-trained T2I
diffusion model, to reduce the burdensome prompt engineering efforts for users.
Our proposed framework, Prompt-Free Diffusion, relies on only visual inputs to
generate new images: it takes a reference image as ""context"", an optional image
structural conditioning, and an initial noise, with absolutely no text prompt.
The core architecture behind the scenes is the Semantic Context Encoder (SeeCoder),
substituting the commonly used CLIP-based or LLM-based text encoder. The
reusability of SeeCoder also makes it a convenient drop-in component: one can
also pre-train a SeeCoder in one T2I model and reuse it for another. Through
extensive experiments, Prompt-Free Diffusion is experimentally found to (i)
outperform prior exemplar-based image synthesis approaches; (ii) perform on par
with state-of-the-art T2I models using prompts following the best practice; and
(iii) be naturally extensible to other downstream applications such as anime
figure generation and virtual try-on, with promising quality. Our code and
models are open-sourced at https://github.com/SHI-Labs/Prompt-Free-Diffusion.
"
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with  Language Models,Robert L. Logan IV,http://arxiv.org/pdf/2106.13353v2.pdf,2021-06-24,"['cs.cl', 'cs.lg']",2106.13353v2.pdf,"  Prompting language models (LMs) with training examples and task descriptions
has been seen as critical to recent successes in few-shot learning. In this
work, we show that finetuning LMs in the few-shot setting can considerably
reduce the need for prompt engineering. In fact, one can use null prompts,
prompts that contain neither task-specific templates nor training examples, and
achieve competitive accuracy to manually-tuned prompts across a wide range of
tasks. While finetuning LMs does introduce new parameters for each downstream
task, we show that this memory overhead can be substantially reduced:
finetuning only the bias terms can achieve comparable or better accuracy than
standard finetuning while only updating 0.1% of the parameters. All in all, we
recommend finetuning LMs for few-shot learning as it is more accurate, robust
to different prompts, and can be made nearly as efficient as using frozen LMs.
"
An Empirical Study on Few-shot Knowledge Probing for Pretrained Language  Models,Tianxing He,http://arxiv.org/pdf/2109.02772v2.pdf,2021-09-06,['cs.ai'],2109.02772v2.pdf,"  Prompt-based knowledge probing for 1-hop relations has been used to measure
how much world knowledge is stored in pretrained language models. Existing work
uses considerable amounts of data to tune the prompts for better performance.
In this work, we compare a variety of approaches under a few-shot knowledge
probing setting, where only a small number (e.g., 10 or 20) of example triples
are available. In addition, we create a new dataset named TREx-2p, which
contains 2-hop relations. We report that few-shot examples can strongly boost
the probing performance for both 1-hop and 2-hop relations. In particular, we
find that a simple-yet-effective approach of finetuning the bias vectors in the
model outperforms existing prompt-engineering methods. Our dataset and code are
available at \url{https://github.com/cloudygoose/fewshot_lama}.
"
Design Guidelines for Prompt Engineering Text-to-Image Generative Models,Vivian Liu,http://arxiv.org/pdf/2109.06977v3.pdf,2021-09-14,['cs.hc'],2109.06977v3.pdf,"  Text-to-image generative models are a new and powerful way to generate visual
artwork. However, the open-ended nature of text as interaction is double-edged;
while users can input anything and have access to an infinite range of
generations, they also must engage in brute-force trial and error with the text
prompt when the result quality is poor. We conduct a study exploring what
prompt keywords and model hyperparameters can help produce coherent outputs. In
particular, we study prompts structured to include subject and style keywords
and investigate success and failure modes of these prompts. Our evaluation of
5493 generations over the course of five experiments spans 51 abstract and
concrete subjects as well as 51 abstract and figurative styles. From this
evaluation, we present design guidelines that can help people produce better
outcomes from text-to-image generative models.
"
Cut the CARP: Fishing for zero-shot story evaluation,Shahbuland Matiana,http://arxiv.org/pdf/2110.03111v3.pdf,2021-10-06,['cs.cl'],2110.03111v3.pdf,"  Recent advances in large-scale language models (Raffel et al., 2019; Brown et
al., 2020) have brought significant qualitative and quantitative improvements
in machine-driven text generation. Despite this, generation and evaluation of
machine-generated narrative text remains a challenging problem. Objective
evaluation of computationally-generated stories may be prohibitively expensive,
require meticulously annotated datasets, or may not adequately measure the
logical coherence of a generated story's narratological structure.
  Informed by recent advances in contrastive learning (Radford et al., 2021),
we present Contrastive Authoring and Reviewing Pairing (CARP): a scalable,
efficient method for performing qualitatively superior, zero-shot evaluation of
stories. We show a strong correlation between human evaluation of stories and
those of CARP. Model outputs correlate more strongly with the corresponding
human input than those of language-model-based methods that rely on finetuning or
prompt engineering approaches. We also present and analyze the Story-Critique
Dataset, a new corpus composed of 1.3 million aligned story-critique pairs
derived from over 80,000 stories. We expect this corpus to be of interest to
NLP researchers.
"
Solving Probability and Statistics Problems by Program Synthesis,Leonard Tang,http://arxiv.org/pdf/2111.08267v1.pdf,2021-11-16,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",2111.08267v1.pdf,"  We solve university level probability and statistics questions by program
synthesis using OpenAI's Codex, a Transformer trained on text and fine-tuned on
code. We transform course problems from MIT's 18.05 Introduction to Probability
and Statistics and Harvard's STAT110 Probability into programming tasks. We
then execute the generated code to get a solution. Since these course questions
are grounded in probability, we often aim to have Codex generate probabilistic
programs that simulate a large number of probabilistic dependencies to compute
its solution. Our approach requires prompt engineering to transform the
question from its original form to an explicit, tractable form that results in
a correct program and solution. To estimate the amount of work needed to
translate an original question into its tractable form, we measure the
similarity between original and transformed questions. Our work is the first to
introduce a new dataset of university-level probability and statistics problems
and solve these problems in a scalable fashion using the program synthesis
capabilities of large language models.
"
StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and  Manipulation,Umut Kocasari,http://arxiv.org/pdf/2112.08493v1.pdf,2021-12-15,"['cs.cv', 'cs.lg']",2112.08493v1.pdf,"  Discovering meaningful directions in the latent space of GANs to manipulate
semantic attributes typically requires large amounts of labeled data. Recent
work aims to overcome this limitation by leveraging the power of Contrastive
Language-Image Pre-training (CLIP), a joint text-image model. While promising,
these methods require several hours of preprocessing or training to achieve the
desired manipulations. In this paper, we present StyleMC, a fast and efficient
method for text-driven image generation and manipulation. StyleMC uses a
CLIP-based loss and an identity loss to manipulate images via a single text
prompt without significantly affecting other attributes. Unlike prior work,
StyleMC requires only a few seconds of training per text prompt to find stable
global directions, does not require prompt engineering and can be used with any
pre-trained StyleGAN2 model. We demonstrate the effectiveness of our method and
compare it to state-of-the-art methods. Our code can be found at
http://catlab-team.github.io/stylemc.
"
QaNER: Prompting Question Answering Models for Few-shot Named Entity  Recognition,Andy T. Liu,http://arxiv.org/pdf/2203.01543v2.pdf,2022-03-03,"['cs.cl', 'cs.ai', 'cs.lg']",2203.01543v2.pdf,"  Recently, prompt-based learning for pre-trained language models has succeeded
in few-shot Named Entity Recognition (NER) by exploiting prompts as task
guidance to increase label efficiency. However, previous prompt-based methods
for few-shot NER have limitations such as high computational complexity,
poor zero-shot ability, the need for manual prompt engineering, or a lack of prompt
robustness. In this work, we address these shortcomings by proposing a new
prompt-based learning NER method with Question Answering (QA), called QaNER.
Our approach includes 1) a refined strategy for converting NER problems into
the QA formulation; 2) NER prompt generation for QA models; 3) prompt-based
tuning with QA models on a few annotated NER examples; 4) zero-shot NER by
prompting the QA model. Comparing the proposed approach with previous methods,
QaNER is faster at inference, insensitive to the prompt quality, and robust to
hyper-parameters, as well as demonstrating significantly better low-resource
performance and zero-shot capability.
"
Executive Function: A Contrastive Value Policy for Resampling and  Relabeling Perceptions via Hindsight Summarization?,Chris Lengerich,http://arxiv.org/pdf/2204.12639v1.pdf,2022-04-27,['cs.cl'],2204.12639v1.pdf,"  We develop the few-shot continual learning task from first principles and
hypothesize an evolutionary motivation and mechanism of action for executive
function as a contrastive value policy which resamples and relabels perception
data via hindsight summarization to minimize attended prediction error, similar
to an online prompt engineering problem. This is made feasible by the use of a
memory policy and a pretrained network with inductive biases for a grammar of
learning and is trained to maximize evolutionary survival. We show how this
model of executive function can be used to implement hypothesis testing as a
stream of consciousness and may explain observations of human few-shot learning
and neuroanatomy.
"
Polyglot Prompt: Multilingual Multitask PrompTraining,Jinlan Fu,http://arxiv.org/pdf/2204.14264v2.pdf,2022-04-29,['cs.cl'],2204.14264v2.pdf,"  This paper aims for a potential architectural improvement for multilingual
learning and asks: Can different tasks from different languages be modeled in a
monolithic framework, i.e. without any task/language-specific module? The
benefit of achieving this could open new doors for future multilingual
research, including allowing systems trained on low resources to be further
assisted by other languages as well as other tasks. We approach this goal by
developing a learning framework named Polyglot Prompting to exploit prompting
methods for learning a unified semantic space for different languages and tasks
with multilingual prompt engineering. We performed a comprehensive evaluation
of 6 tasks, namely topic classification, sentiment classification, named entity
recognition, question answering, natural language inference, and summarization,
covering 24 datasets and 49 languages. The experimental results demonstrated
the efficacy of multilingual multitask prompt-based learning and led to
inspiring observations. We also present an interpretable multilingual
evaluation methodology and show how the proposed framework, multilingual
multitask prompt training, works. We release all datasets prompted in the best
setting and code.
"
CLIP-CLOP: CLIP-Guided Collage and Photomontage,Piotr Mirowski,http://arxiv.org/pdf/2205.03146v3.pdf,2022-05-06,"['cs.cv', 'cs.ai']",2205.03146v3.pdf,"  The unabated mystique of large-scale neural networks, such as the CLIP dual
image-and-text encoder, popularized automatically generated art. Increasingly
more sophisticated generators enhanced the artworks' realism and visual
appearance, and creative prompt engineering enabled stylistic expression.
Guided by an artist-in-the-loop ideal, we design a gradient-based generator to
produce collages. It requires the human artist to curate libraries of image
patches and to describe (with prompts) the whole image composition, with the
option to manually adjust the patches' positions during generation, thereby
allowing humans to reclaim some control of the process and achieve greater
creative freedom. We explore the aesthetic potentials of high-resolution
collages, and provide an open-source Google Colab as an artistic tool.
"
Toxicity Detection with Generative Prompt-based Inference,Yau-Shian Wang,http://arxiv.org/pdf/2205.12390v1.pdf,2022-05-24,"['cs.cl', 'cs.ai']",2205.12390v1.pdf,"  Due to its subtlety, implicitness, and the different possible interpretations
perceived by different people, detecting undesirable content in text is a
nuanced challenge. It is a long-known risk that language models (LMs), once
trained on corpora containing undesirable content, have the power to manifest
biases and toxicity. However, recent studies imply that, as a remedy, LMs are
also capable of identifying toxic content without additional fine-tuning.
Prompt-based methods have been shown to effectively harvest this surprising
self-diagnosing capability. However, existing prompt-based methods usually
specify an instruction to a language model in a discriminative way. In this
work, we explore the generative variant of zero-shot prompt-based toxicity
detection with comprehensive trials on prompt engineering. We evaluate on three
datasets with toxicity labels annotated on social media posts. Our analysis
highlights the strengths of our generative classification approach both
quantitatively and qualitatively. Interesting aspects of self-diagnosis and its
ethical implications are discussed.
"
The Creativity of Text-to-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2206.02904v4.pdf,2022-05-13,"['cs.hc', 'cs.gr', 'h.5; h.m']",2206.02904v4.pdf,"  Text-guided synthesis of images has made a giant leap towards becoming a
mainstream phenomenon. With text-to-image generation systems, anybody can
create digital images and artworks. This provokes the question of whether
text-to-image generation is creative. This paper expounds on the nature of
human creativity involved in text-to-image art (so-called ""AI art"") with a
specific focus on the practice of prompt engineering. The paper argues that the
current product-centered view of creativity falls short in the context of
text-to-image generation. A case exemplifying this shortcoming is provided and
the importance of online communities for the creative ecosystem of
text-to-image art is highlighted. The paper provides a high-level summary of
this online ecosystem drawing on Rhodes' conceptual four P model of creativity.
Challenges for evaluating the creativity of text-to-image generation and
opportunities for research on text-to-image generation in the field of
Human-Computer Interaction (HCI) are discussed.
"
Rationale-Augmented Ensembles in Language Models,Xuezhi Wang,http://arxiv.org/pdf/2207.00747v1.pdf,2022-07-02,['cs.cl'],2207.00747v1.pdf,"  Recent research has shown that rationales, or step-by-step chains of thought,
can be used to improve performance in multi-step reasoning tasks. We reconsider
rationale-augmented prompting for few-shot in-context learning, where (input ->
output) prompts are expanded to (input, rationale -> output) prompts. For
rationale-augmented prompting we demonstrate how existing approaches, which
rely on manual prompt engineering, are subject to sub-optimal rationales that
may harm performance. To mitigate this brittleness, we propose a unified
framework of rationale-augmented ensembles, where we identify rationale
sampling in the output space as the key component to robustly improve
performance. This framework is general and can easily be extended to common
natural language processing tasks, even those that do not traditionally
leverage intermediate steps, such as question answering, word sense
disambiguation, and sentiment analysis. We demonstrate that rationale-augmented
ensembles achieve more accurate and interpretable results than existing
prompting approaches--including standard prompting without rationales and
rationale-based chain-of-thought prompting--while simultaneously improving
interpretability of model predictions through the associated rationales.
"
Text-Guided Synthesis of Artistic Images with Retrieval-Augmented  Diffusion Models,Robin Rombach,http://arxiv.org/pdf/2207.13038v1.pdf,2022-07-26,['cs.cv'],2207.13038v1.pdf,"  Novel architectures have recently improved generative image synthesis leading
to excellent visual quality in various tasks. Of particular note is the field
of ``AI-Art'', which has seen unprecedented growth with the emergence of
powerful multimodal models such as CLIP. By combining text and image
synthesis models, so-called ``prompt-engineering'' has become established, in
which carefully selected and composed sentences are used to achieve a certain
visual style in the synthesized image. In this note, we present an alternative
approach based on retrieval-augmented diffusion models (RDMs). In RDMs, a set
of nearest neighbors is retrieved from an external database during training for
each training instance, and the diffusion model is conditioned on these
informative samples. During inference (sampling), we replace the retrieval
database with a more specialized database that contains, for example, only
images of a particular visual style. This provides a novel way to prompt a
general trained model after training and thereby specify a particular visual
style. As shown by our experiments, this approach is superior to specifying the
visual style within the text prompt. We open-source code and model weights at
https://github.com/CompVis/latent-diffusion .
"
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation  with Large Language Models,Hendrik Strobelt,http://arxiv.org/pdf/2208.07852v1.pdf,2022-08-16,"['cs.cl', 'cs.hc', 'cs.lg']",2208.07852v1.pdf,"  State-of-the-art neural language models can now be used to solve ad-hoc
language tasks through zero-shot prompting without the need for supervised
training. This approach has gained popularity in recent years, and researchers
have demonstrated prompts that achieve strong accuracy on specific NLP tasks.
However, finding a prompt for new tasks requires experimentation. Different
prompt templates with different wording choices lead to significant accuracy
differences. Our tool, PromptIDE, allows users to experiment with prompt variations,
visualize prompt performance, and iteratively optimize prompts. We developed a
workflow that allows users to first focus on model feedback using small data
before moving on to a large data regime that allows empirical grounding of
promising prompts using quantitative measures of the task. The tool then allows
easy deployment of the newly created ad-hoc models. We demonstrate the utility
of PromptIDE (demo at http://prompt.vizhub.ai) and our workflow using several
real-world use cases.
"
Will It Blend? Mixing Training Paradigms & Prompting for Argument  Quality Prediction,Michiel van der Meer,http://arxiv.org/pdf/2209.08966v2.pdf,2022-09-19,"['cs.cl', 'cs.ai']",2209.08966v2.pdf,"  This paper describes our contributions to the Shared Task of the 9th Workshop
on Argument Mining (2022). Our approach uses Large Language Models for the task
of Argument Quality Prediction. We perform prompt engineering using GPT-3, and
also investigate the training paradigms multi-task learning, contrastive
learning, and intermediate-task training. We find that a mixed prediction setup
outperforms single models. Prompting GPT-3 works best for predicting argument
validity, and argument novelty is best estimated by a model trained using all
three training paradigms.
"
Legal Prompting: Teaching a Language Model to Think Like a Lawyer,Fangyi Yu,http://arxiv.org/pdf/2212.01326v2.pdf,2022-12-02,"['cs.cl', 'cs.ai', 'i.2.7']",2212.01326v2.pdf,"  Large language models that are capable of zero or few-shot prompting
approaches have given rise to the new research area of prompt engineering.
Recent advances have shown that, for example, Chain-of-Thought (CoT) prompts can
significantly improve arithmetic or common-sense tasks. We explore how such
approaches fare with legal reasoning tasks and take the COLIEE entailment task
based on the Japanese Bar exam for testing zero-shot/few-shot and fine-tuning
approaches. Our findings show that while CoT prompting and fine-tuning with
explanations yield improvements, the best results are produced by
prompts that are derived from specific legal reasoning techniques such as IRAC
(Issue, Rule, Application, Conclusion). Based on our experiments we improve the
2021 best result from 0.7037 accuracy to 0.8148 accuracy and beat the 2022 best
system of 0.6789 accuracy with an accuracy of 0.7431.
"
Controllable Image Captioning via Prompting,Ning Wang,http://arxiv.org/pdf/2212.01803v1.pdf,2022-12-04,['cs.cv'],2212.01803v1.pdf,"  Despite the remarkable progress of image captioning, existing captioners
typically lack the controllable capability to generate desired image captions,
e.g., describing the image in a rough or detailed manner, in a factual or
emotional view, etc. In this paper, we show that a unified model is qualified
to perform well in diverse domains and freely switch among multiple styles.
Such a controllable capability is achieved by embedding the prompt learning
into the image captioning framework. To be specific, we design a set of prompts
to fine-tune the pre-trained image captioner. These prompts allow the model to
absorb stylized data from different domains for joint training, without
performance degradation in each domain. Furthermore, we optimize the prompts
with learnable vectors in the continuous word embedding space, avoiding the
heuristic prompt engineering and meanwhile exhibiting superior performance. In
the inference stage, our model is able to generate desired stylized captions by
choosing the corresponding prompts. Extensive experiments verify the
controllable capability of the proposed method. Notably, we achieve outstanding
performance on two diverse image captioning benchmarks including COCO Karpathy
split and TextCaps using a unified model.
"
Fake it till you make it: Learning transferable representations from  synthetic ImageNet clones,Mert Bulent Sariyildiz,http://arxiv.org/pdf/2212.08420v2.pdf,2022-12-16,"['cs.cv', 'cs.lg']",2212.08420v2.pdf,"  Recent image generation models such as Stable Diffusion have exhibited an
impressive ability to generate fairly realistic images starting from a simple
text prompt. Could such models render real images obsolete for training image
prediction models? In this paper, we answer part of this provocative question
by investigating the need for real images when training models for ImageNet
classification. Provided only with the class names that have been used to build
the dataset, we explore the ability of Stable Diffusion to generate synthetic
clones of ImageNet and measure how useful these are for training classification
models from scratch. We show that with minimal and class-agnostic prompt
engineering, ImageNet clones are able to close a large part of the gap between
models produced by synthetic images and models trained with real images, for
the several standard classification benchmarks that we consider in this study.
More importantly, we show that models trained on synthetic images exhibit
strong generalization properties and perform on par with models trained on real
data for transfer. Project page: https://europe.naverlabs.com/imagenet-sd/
"
Explanation Regeneration via Information Bottleneck,Qintong Li,http://arxiv.org/pdf/2212.09603v2.pdf,2022-12-19,['cs.cl'],2212.09603v2.pdf,"  Explaining the black-box predictions of NLP models naturally and accurately
is an important open problem in natural language generation. These free-text
explanations are expected to contain sufficient and carefully-selected evidence
to form supportive arguments for predictions. Due to the superior generative
capacity of large pretrained language models, recent work built on prompt
engineering enables explanation generation without specific training. However,
explanations generated through single-pass prompting often lack sufficiency and
conciseness. To address this problem, we develop an information bottleneck
method EIB to produce refined explanations that are sufficient and concise. Our
approach regenerates the free-text explanation by polishing the single-pass
output from the pretrained language model but retaining the information that
supports the contents being explained. Experiments on two out-of-domain tasks
verify the effectiveness of EIB through automatic evaluation and
thoroughly-conducted human evaluation.
"
Optimizing Prompts for Text-to-Image Generation,Yaru Hao,http://arxiv.org/pdf/2212.09611v1.pdf,2022-12-19,"['cs.cl', 'cs.cv']",2212.09611v1.pdf,"  Well-designed prompts can guide text-to-image models to generate amazing
images. However, the performant prompts are often model-specific and misaligned
with user input. Instead of laborious human engineering, we propose prompt
adaptation, a general framework that automatically adapts original user input
to model-preferred prompts. Specifically, we first perform supervised
fine-tuning with a pretrained language model on a small collection of manually
engineered prompts. Then we use reinforcement learning to explore better
prompts. We define a reward function that encourages the policy to generate
more aesthetically pleasing images while preserving the original user
intentions. Experimental results on Stable Diffusion show that our method
outperforms manual prompt engineering in terms of both automatic metrics and
human preference ratings. Moreover, reinforcement learning further boosts
performance, especially on out-of-domain prompts. The pretrained checkpoints
are available at https://aka.ms/promptist. The demo can be found at
https://aka.ms/promptist-demo.
"
Using Large Language Models to Generate Engaging Captions for Data  Visualizations,Ashley Liew,http://arxiv.org/pdf/2212.14047v1.pdf,2022-12-27,"['cs.cl', 'cs.ai', 'cs.hc']",2212.14047v1.pdf,"  Creating compelling captions for data visualizations has been a longstanding
challenge. Visualization researchers are typically untrained in journalistic
reporting and hence the captions that are placed below data visualizations tend
to be not overly engaging and rather just stick to basic observations about the
data. In this work we explore the opportunities offered by the newly emerging
crop of large language models (LLM) which use sophisticated deep learning
technology to produce human-like prose. We ask: can these powerful software
devices be repurposed to produce engaging captions for generic data
visualizations like a scatterplot? It turns out that the key challenge lies in
designing the most effective prompt for the LLM, a task called prompt
engineering. We report on first experiments using the popular LLM GPT-3 and
deliver some promising results.
"
Fixing Hardware Security Bugs with Large Language Models,Baleegh Ahmad,http://arxiv.org/pdf/2302.01215v1.pdf,2023-02-02,['cs.cr'],2302.01215v1.pdf,"  Novel AI-based code-writing Large Language Models (LLMs) such as OpenAI's
Codex have demonstrated capabilities in many coding-adjacent domains. In this
work we consider how LLMs may be leveraged to automatically repair
security-relevant bugs present in hardware designs. We focus on bug repair in code
written in the Hardware Description Language Verilog. For this study we build a
corpus of domain-representative hardware security bugs. We then design and
implement a framework to quantitatively evaluate the performance of any LLM
tasked with fixing the specified bugs. The framework supports design space
exploration of prompts (i.e., prompt engineering) and identifying the best
parameters for the LLM. We show that an ensemble of LLMs can repair all ten of
our benchmarks. This ensemble outperforms the state-of-the-art Cirfix hardware
bug repair tool on its own suite of bugs. These results show that LLMs can
repair hardware security bugs and the framework is an important step towards
the ultimate goal of an automated end-to-end bug repair framework.
"
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation,Daixuan Cheng,http://arxiv.org/pdf/2303.08518v3.pdf,2023-03-15,['cs.cl'],2303.08518v3.pdf,"  Large Language Models (LLMs) are popular for their impressive abilities, but
the need for model-specific fine-tuning or task-specific prompt engineering can
hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for
Improving zero-Shot Evaluation), which tunes a lightweight and versatile
retriever that automatically retrieves prompts for a given zero-shot task
input. Specifically, we demonstrate universality in a cross-task and
cross-model scenario: the retriever is tuned on a diverse set of tasks, but
tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for
tuning the retriever, but test the retriever on different LLMs of much larger
scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that
UPRISE mitigates the hallucination problem in our experiments with ChatGPT,
suggesting its potential to improve even the strongest LLMs. Our model and code
are available at https://github.com/microsoft/LMOps.
"
Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models,Xinyang Liu,http://arxiv.org/pdf/2303.09100v1.pdf,2023-03-16,"['cs.cv', 'cs.cl', 'cs.lg']",2303.09100v1.pdf,"  For downstream applications of vision-language pre-trained models, there has
been significant interest in constructing effective prompts. Existing works on
prompt engineering, which either require laborious manual designs or optimize
the prompt tuning as a point estimation problem, may fail to describe diverse
characteristics of categories and limit their applications. We introduce a
Bayesian probabilistic resolution to prompt learning, where the label-specific
stochastic prompts are generated hierarchically by first sampling a latent
vector from an underlying distribution and then employing a lightweight
generative model. Importantly, we semantically regularize prompt learning with
the visual knowledge and view images and the corresponding prompts as patch and
token sets under optimal transport, which pushes the prompt tokens to
faithfully capture the label-specific visual concepts, instead of overfitting
the training categories. Moreover, the proposed model can also be
straightforwardly extended to the conditional case where the
instance-conditional prompts are generated to improve the generalizability.
Extensive experiments on 15 datasets show promising transferability and
generalization performance of our proposed model.
"
Safety Analysis in the Era of Large Language Models: A Case Study of  STPA using ChatGPT,Yi Qi,http://arxiv.org/pdf/2304.01246v2.pdf,2023-04-03,"['cs.cl', 'cs.ai', 'cs.cy', 'cs.se']",2304.01246v2.pdf,"  Can safety analysis make use of Large Language Models (LLMs)? A case study
explores Systems Theoretic Process Analysis (STPA) applied to Automatic
Emergency Brake (AEB) and Electricity Demand Side Management (DSM) systems
using ChatGPT. We investigate how collaboration schemes, input semantic
complexity, and prompt guidelines influence STPA results. Comparative results
show that using ChatGPT without human intervention may be inadequate due to
reliability-related issues, but with careful design, it may outperform human
experts. No statistically significant differences are found when varying the
input semantic complexity or using common prompt guidelines, which suggests the
necessity for developing domain-specific prompt engineering. We also highlight
future challenges, including concerns about LLM trustworthiness and the
necessity for standardisation and regulation in this domain.
"
Geotechnical Parrot Tales (GPT): Harnessing Large Language Models in  geotechnical engineering,Krishna Kumar,http://arxiv.org/pdf/2304.02138v3.pdf,2023-04-04,"['cs.cl', 'physics.geo-ph', 'i.2.7; j.2.6']",2304.02138v3.pdf,"  The widespread adoption of large language models (LLMs), such as OpenAI's
ChatGPT, could revolutionize various industries, including geotechnical
engineering. However, GPT models can sometimes generate plausible-sounding but
false outputs, leading to hallucinations. In this article, we discuss the
importance of prompt engineering in mitigating these risks and harnessing the
full potential of GPT for geotechnical applications. We explore the challenges
and pitfalls associated with LLMs and highlight the role of context in ensuring
accurate and valuable responses. Furthermore, we examine the development of
context-specific search engines and the potential of LLMs to become a natural
interface for complex tasks, such as data analysis and design. We also develop
a unified interface using natural language to handle complex geotechnical
engineering tasks and data analysis. By integrating GPT into geotechnical
engineering workflows, professionals can streamline their work and develop
sustainable and resilient infrastructure systems for the future.
"
Evaluation of ChatGPT Family of Models for Biomedical Reasoning and  Classification,Shan Chen,http://arxiv.org/pdf/2304.02496v1.pdf,2023-04-05,"['cs.cl', 'cs.ai']",2304.02496v1.pdf,"  Recent advances in large language models (LLMs) have shown impressive ability
in biomedical question-answering, but have not been adequately investigated for
more specific biomedical applications. This study investigates the performance
of LLMs such as the ChatGPT family of models (GPT-3.5s, GPT-4) in biomedical
tasks beyond question-answering. Because no patient data can be passed to the
OpenAI API public interface, we evaluated model performance with over 10000
samples as proxies for two fundamental tasks in the clinical domain -
classification and reasoning. The first task is classifying whether statements
of clinical and policy recommendations in scientific literature constitute
health advice. The second task is causal relation detection from the biomedical
literature. We compared LLMs with simpler models, such as bag-of-words (BoW)
with logistic regression, and fine-tuned BioBERT models. Despite the excitement
around viral ChatGPT, we found that fine-tuning for two fundamental NLP tasks
remained the best strategy. The simple BoW model performed on par with the most
complex LLM prompting. Prompt engineering required significant investment.
"
"VOICE: Visual Oracle for Interaction, Conversation, and Explanation",Donggang Jia,http://arxiv.org/pdf/2304.04083v1.pdf,2023-04-08,"['cs.hc', 'cs.gr']",2304.04083v1.pdf,"  We present VOICE, a novel approach for connecting large language models'
(LLM) conversational capabilities with interactive exploratory visualization.
VOICE introduces several innovative technical contributions that drive our
conversational visualization framework. Our foundation is a pack-of-bots that
can perform specific tasks, such as assigning tasks, extracting instructions,
and generating coherent content. We employ fine-tuning and prompt engineering
techniques to tailor bots' performance to their specific roles and accurately
respond to user queries, and a new prompt-based iterative scene-tree generation
establishes a coupling with a structural model. Our text-to-visualization
method generates a flythrough sequence matching the content explanation.
Finally, 3D natural language interaction provides capabilities to navigate and
manipulate the 3D models in real-time. The VOICE framework can receive
arbitrary voice commands from the user and responds verbally, tightly coupled
with corresponding visual representation with low latency and high accuracy. We
demonstrate the effectiveness and high generalizability potential of our
approach by applying it to two distinct domains: analyzing three 3D molecular
models with multi-scale and multi-instance attributes, and showcasing its
effectiveness on a cartographic map visualization. A free copy of this paper
and all supplemental materials are available at https://osf.io/g7fbr/.
"
Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot  Task Generalization,Puyuan Peng,http://arxiv.org/pdf/2305.11095v3.pdf,2023-05-18,"['eess.as', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.sd']",2305.11095v3.pdf,"  We investigate the emergent abilities of the recently proposed web-scale
speech model Whisper, by adapting it to unseen tasks with prompt engineering.
We selected three tasks: audio-visual speech recognition (AVSR), code-switched
speech recognition (CS-ASR), and speech translation (ST) on unseen language
pairs. We design task-specific prompts, by either leveraging another
large-scale model, or simply manipulating the special tokens in the default
prompts. Experiments show that compared to the default prompts, our proposed
prompts improve performance by 10% to 45% on the three zero-shot tasks, and
even outperform SotA supervised models on some datasets. In addition, our
experiments reveal many interesting properties of Whisper, including its
robustness to prompts, bias on accents, and the multilingual understanding in
its latent space. Code is available at
https://github.com/jasonppy/PromptingWhisper
"
Constructing Dreams using Generative AI,Safinah Ali,http://arxiv.org/pdf/2305.12013v1.pdf,2023-05-19,"['cs.hc', 'cs.ai', 'cs.cy']",2305.12013v1.pdf,"  Generative AI tools introduce new and accessible forms of media creation for
youth. They also raise ethical concerns about the generation of fake media,
data protection, privacy and ownership of AI-generated art. Since generative AI
is already being used in products used by youth, it is critical that they
understand how these tools work and how they can be used or misused. In this
work, we facilitated students' generative AI learning through expression of
their imagined future identities. We designed a learning workshop - Dreaming
with AI - where students learned about the inner workings of generative AI
tools, used text-to-image generation algorithms to create their imagined future
dreams, reflected on the potential benefits and harms of generative AI tools
and voiced their opinions about policies for the use of these tools in
classrooms. In this paper, we present the learning activities and experiences
of 34 high school students who engaged in our workshops. Students reached
creative learning objectives by using prompt engineering to create their future
dreams, gained technical knowledge by learning the abilities, limitations,
text-visual mappings and applications of generative AI, and identified the main
potential societal benefits and harms of generative AI.
"
Interactive Data Synthesis for Systematic Vision Adaptation via  LLMs-AIGCs Collaboration,Qifan Yu,http://arxiv.org/pdf/2305.12799v1.pdf,2023-05-22,['cs.cv'],2305.12799v1.pdf,"  Recent text-to-image generation models have shown promising results in
generating high-fidelity photo-realistic images. In parallel, the problem of
data scarcity has brought a growing interest in employing AIGC technology for
high-quality data expansion. However, this paradigm requires well-designed
prompt engineering that cost-less data expansion and labeling remain
under-explored. Inspired by LLM's powerful capability in task guidance, we
propose a new paradigm of annotated data expansion named as ChatGenImage. The
core idea behind it is to leverage the complementary strengths of diverse
models to establish a highly effective and user-friendly pipeline for
interactive data augmentation. In this work, we extensively study how LLMs
communicate with AIGC models to achieve more controllable image generation and
make the first attempt to combine them for automatic data augmentation for
a variety of downstream tasks. Finally, we present fascinating results obtained
from our ChatGenImage framework and demonstrate the powerful potential of our
synthetic data for systematic vision adaptation. Our codes are available at
https://github.com/Yuqifan1117/Labal-Anything-Pipeline.
"
Making Language Models Better Tool Learners with Execution Feedback,Shuofei Qiao,http://arxiv.org/pdf/2305.13068v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ir', 'cs.lg']",2305.13068v1.pdf,"  Tools serve as pivotal interfaces that enable humans to understand and
reshape the world. With the advent of foundational models, AI systems can
utilize tools to expand their capabilities and interact with the world.
Existing tool learning methodologies, encompassing supervised fine-tuning and
prompt engineering approaches, often induce language models to utilize tools
indiscriminately, as complex problems often exceed their own competencies.
However, introducing tools for simple tasks, which the models themselves can
readily resolve, can inadvertently propagate errors rather than enhance
performance. This leads to the research question: can we teach language models
when and how to use tools? To meet this need, we propose Tool leaRning wIth
exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the
model to continually learn through feedback derived from tool execution,
thereby learning when and how to use tools effectively. Experimental results,
backed by further analysis, show that TRICE can make the language model
selectively use tools by decreasing the model's dependency on tools while
enhancing performance. Code and datasets will be available at
https://github.com/zjunlp/trice.
"
Prompt position really matters in few-shot and zero-shot NLU tasks,Junyu Mao,http://arxiv.org/pdf/2305.14493v2.pdf,2023-05-23,['cs.cl'],2305.14493v2.pdf,"  Prompt-based models have made remarkable advancements in the fields of
zero-shot and few-shot learning, attracting a lot of attention from
researchers. Developing an effective prompt template plays a critical role.
However, prior studies have mainly focused on prompt vocabulary selection or
embedding initialization with the reserved prompt position fixed. In this
empirical study, we conduct the most comprehensive analysis to date of prompt
position options for natural language understanding tasks. Our findings quantify
the substantial impact prompt position has on model performance. We observe
that the prompt position used in prior studies is often sub-optimal for both
zero-shot and few-shot settings. These findings suggest prompt position
optimisation as an interesting research direction alongside the existing focus
on prompt engineering.
"
ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER,Amirhossein Layegh,http://arxiv.org/pdf/2305.17951v1.pdf,2023-05-29,"['cs.cl', 'cs.ai']",2305.17951v1.pdf,"  Prompt-based language models have produced encouraging results in numerous
applications, including Named Entity Recognition (NER) tasks. NER aims to
identify entities in a sentence and provide their types. However, the strong
performance of most available NER approaches is heavily dependent on the design
of discrete prompts and a verbalizer to map the model-predicted outputs to
entity categories, which are complicated undertakings. To address these
challenges, we present ContrastNER, a prompt-based NER framework that employs
both discrete and continuous tokens in prompts and uses a contrastive learning
approach to learn the continuous prompts and forecast entity types. The
experimental results demonstrate that ContrastNER obtains competitive
performance to the state-of-the-art NER methods in high-resource settings and
outperforms the state-of-the-art models in low-resource circumstances without
requiring extensive manual prompt engineering and verbalizer design.
"
Conformal Prediction with Large Language Models for Multi-Choice  Question Answering,Bhawesh Kumar,http://arxiv.org/pdf/2305.18404v3.pdf,2023-05-28,"['cs.cl', 'cs.lg', 'stat.ml']",2305.18404v3.pdf,"  As large language models continue to be widely developed, robust uncertainty
quantification techniques will become crucial for their safe deployment in
high-stakes scenarios. In this work, we explore how conformal prediction can be
used to provide uncertainty quantification in language models for the specific
task of multiple-choice question-answering. We find that the uncertainty
estimates from conformal prediction are tightly correlated with prediction
accuracy. This observation can be useful for downstream applications such as
selective classification and filtering out low-quality predictions. We also
investigate the exchangeability assumption required by conformal prediction when
applied to out-of-subject questions, which may be a more realistic scenario for many
practical applications. Our work contributes towards more trustworthy and
reliable usage of large language models in safety-critical situations, where
robust guarantees of error rate are required.
"
Test-Time Training on Nearest Neighbors for Large Language Models,Moritz Hardt,http://arxiv.org/pdf/2305.18466v2.pdf,2023-05-29,"['cs.cl', 'cs.lg']",2305.18466v2.pdf,"  Many recent efforts aim to augment language models with relevant information
retrieved from a database at test time. We avoid the need for prompt
engineering by directly fine-tuning the model on data retrieved at test time
using its standard training setup. For this purpose, we build a large-scale
distributed nearest neighbor index based on text embeddings of the Pile
dataset. Given a query to a language model, our system retrieves the neighbors
of the query and fine-tunes the model on the text data corresponding to those
neighbors. Surprisingly, retrieving and training on as few as 20 neighbors,
each for only one gradient iteration, drastically improves performance across
more than twenty language modeling tasks in the Pile benchmark. For example,
test-time training significantly narrows the performance gap between a small
GPT2 model and a GPTNeo model, more than ten times larger, that was
specifically trained to convergence on the Pile. Sufficient index quality and
size, however, are important. Our work establishes a valuable first baseline
for implementing test-time training in the context of large language models,
opening the door to numerous promising research avenues.
"
CONA: A novel CONtext-Aware instruction paradigm for communication using  large language model,Nan Zhou,http://arxiv.org/pdf/2305.18620v1.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.hc']",2305.18620v1.pdf,"  We introduce CONA, a novel context-aware instruction paradigm for effective
knowledge dissemination using generative pre-trained transformer (GPT) models.
CONA is a flexible framework designed to leverage the capabilities of Large
Language Models (LLMs) and incorporate DIKW (Data, Information, Knowledge,
Wisdom) hierarchy to automatically instruct and optimise presentation content,
anticipate potential audience inquiries, and provide context-aware answers that
adapt to the knowledge level of the audience group. The unique aspect of the
CONA paradigm lies in its combination of an independent advisory mechanism and
a recursive feedback loop rooted in the DIKW hierarchy. This synergy
significantly enhances context-aware content, ensuring it is accessible and
easily comprehended by the audience. This paradigm is an early pioneer to
explore new methods for knowledge dissemination and communication in the LLM
era, offering effective support for everyday knowledge sharing scenarios. We
conduct experiments on a range of audience roles, along with materials from
various disciplines using GPT4. Both quantitative and qualitative results
demonstrated that the proposed CONA paradigm achieved remarkable performance
compared to the outputs guided by conventional prompt engineering.
"
GPT4Tools: Teaching Large Language Model to Use Tools via  Self-instruction,Rui Yang,http://arxiv.org/pdf/2305.18752v1.pdf,2023-05-30,"['cs.cv', 'cs.cl']",2305.18752v1.pdf,"  This paper aims to efficiently enable Large Language Models (LLMs) to use
multimodal tools. Advanced proprietary LLMs, such as ChatGPT and GPT-4, have
shown great potential for tool usage through sophisticated prompt engineering.
Nevertheless, these models typically incur prohibitive computational costs and
rely on publicly inaccessible data. To address these challenges, we propose the
GPT4Tools based on self-instruct to enable open-source LLMs, such as LLaMA and
OPT, to use tools. It generates an instruction-following dataset by prompting
an advanced teacher with various multi-modal contexts. By using the Low-Rank
Adaptation (LoRA) optimization, our approach facilitates the open-source LLMs
to solve a range of visual problems, including visual comprehension and image
generation. Moreover, we provide a benchmark to evaluate the ability of LLMs to
use tools, which is performed in both zero-shot and fine-tuning ways. Extensive
experiments demonstrate the effectiveness of our method on various language
models, which not only significantly improves the accuracy of invoking seen
tools, but also enables the zero-shot capacity for unseen tools. The code and
demo are available at https://github.com/StevenGrove/GPT4Tools.
"
Contextualizing Problems to Student Interests at Scale in Intelligent  Tutoring System Using Large Language Models,Gautam Yadav,http://arxiv.org/pdf/2306.00190v1.pdf,2023-05-31,['cs.hc'],2306.00190v1.pdf,"  Contextualizing problems to align with student interests can significantly
improve learning outcomes. However, this task often presents scalability
challenges due to resource and time constraints. Recent advancements in Large
Language Models (LLMs) like GPT-4 offer potential solutions to these issues.
This study explores the ability of GPT-4 in the contextualization of problems
within CTAT, an intelligent tutoring system, aiming to increase student
engagement and enhance learning outcomes. Through iterative prompt engineering,
we achieved meaningful contextualization that preserved the difficulty and
original intent of the problem, thereby not altering values or overcomplicating
the questions. While our research highlights the potential of LLMs in
educational settings, we acknowledge current limitations, particularly with
geometry problems, and emphasize the need for ongoing evaluation and research.
Future work includes systematic studies to measure the impact of this tool on
students' learning outcomes and enhancements to handle a broader range of
problems.
"
Exploring EFL students' prompt engineering in human-AI story writing: an  Activity Theory perspective,David James Woo,http://arxiv.org/pdf/2306.01798v1.pdf,2023-06-01,"['cs.cy', 'cs.ai']",2306.01798v1.pdf,"  This study applies Activity Theory to investigate how English as a foreign
language (EFL) students prompt generative artificial intelligence (AI) tools
during short story writing. Sixty-seven Hong Kong secondary school students
created generative-AI tools using open-source language models and wrote short
stories with them. The study collected and analyzed the students' generative-AI
tools, short stories, and written reflections on their conditions or purposes
for prompting. The research identified three main themes regarding the purposes
for which students prompt generative-AI tools during short story writing: a
lack of awareness of purposes, overcoming writer's block, and developing,
expanding, and improving the story. The study also identified common
characteristics of students' activity systems, including the sophistication of
their generative-AI tools, the quality of their stories, and their school's
overall academic achievement level, for their prompting of generative-AI tools
for the three purposes during short story writing. The study's findings suggest
that teachers should be aware of students' purposes for prompting generative-AI
tools to provide tailored instructions and scaffolded guidance. The findings
may also help designers provide differentiated instructions for users at
various levels of story development when using a generative-AI tool.
"
Prompting Is All You Need: Automated Android Bug Replay with Large  Language Models,Sidong Feng,http://arxiv.org/pdf/2306.01987v2.pdf,2023-06-03,['cs.se'],2306.01987v2.pdf,"  Bug reports are vital for software maintenance, allowing users to inform
developers of the problems encountered while using the software. As such,
researchers have committed considerable resources toward automating bug replay
to expedite the process of software maintenance. Nonetheless, the success of
current automated approaches is largely dictated by the characteristics and
quality of bug reports, as they are constrained by the limitations of
manually-crafted patterns and pre-defined vocabulary lists. Inspired by the
success of Large Language Models (LLMs) in natural language understanding, we
propose AdbGPT, a new lightweight approach to automatically reproduce the bugs
from bug reports through prompt engineering, without any training and
hard-coding effort. AdbGPT leverages few-shot learning and chain-of-thought
reasoning to elicit human knowledge and logical reasoning from LLMs to
accomplish the bug replay in a manner similar to a developer. Our evaluations
demonstrate the effectiveness and efficiency of our AdbGPT to reproduce 81.3%
of bug reports in 253.6 seconds, outperforming the state-of-the-art baselines
and ablation studies. We also conduct a small-scale user study to confirm the
usefulness of AdbGPT in enhancing developers' bug replay capabilities.
"
ChatGPT as a mapping assistant: A novel method to enrich maps with  generative AI and content derived from street-level photographs,Levente Juhász,http://arxiv.org/pdf/2306.03204v1.pdf,2023-06-05,"['cs.cy', 'cs.cv']",2306.03204v1.pdf,"  This paper explores the concept of leveraging generative AI as a mapping
assistant for enhancing the efficiency of collaborative mapping. We present
results of an experiment that combines multiple sources of volunteered
geographic information (VGI) and large language models (LLMs). Three analysts
described the content of crowdsourced Mapillary street-level photographs taken
along roads in a small test area in Miami, Florida. GPT-3.5-turbo was
instructed to suggest the most appropriate tagging for each road in
OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a
state-of-the-art multimodal pre-training method, as an artificial analyst of
street-level photographs in addition to human analysts. Results demonstrate two
ways to effectively increase the accuracy of mapping suggestions without
modifying the underlying AI models: by (1) providing a more detailed
description of source photographs, and (2) combining prompt engineering with
additional context (e.g. location and objects detected along a road). The first
approach increases the suggestion accuracy by up to 29%, and the second one by
up to 20%.
"
An Approach to Solving the Abstraction and Reasoning Corpus (ARC)  Challenge,Tan John Chong Min,http://arxiv.org/pdf/2306.03553v1.pdf,2023-06-06,['cs.ai'],2306.03553v1.pdf,"  We utilise the power of Large Language Models (LLMs), in particular GPT4, to
be prompt engineered into performing an arbitrary task. Here, we give the model
some human priors via text, along with some typical procedures for solving the
ARC tasks, and ask it to generate i) a broad description of the input-output
relation, ii) detailed steps of the input-output mapping, and iii) the test
output derived by applying those detailed steps to the test input. The
current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those
with small grids of 8x8 and below). With tweaks to the prompt to make it more
specific for the use case, it can solve more. We posit that when scaled to a
multi-agent system with usage of past memory and equipped with an image
interpretation tool via Visual Question Answering, we may actually be able to
solve the majority of the ARC challenge.
"
Protect Your Prompts: Protocols for IP Protection in LLM Applications,M. A. van Wyk,http://arxiv.org/pdf/2306.06297v1.pdf,2023-06-09,"['cs.cl', 'cs.ai', '91d10, 68t10, 03d40', 'i.2.6; k.6.5; f.3.2']",2306.06297v1.pdf,"  With the rapid adoption of AI in the form of large language models (LLMs),
the potential value of carefully engineered prompts has become significant.
However, to realize this potential, prompts should be tradable on an open
market. Since prompts are, at present, generally economically non-excludable,
by virtue of their nature as text, no general competitive market has yet been
established. This note discusses two protocols intended to provide protection
of prompts, elevating their status as intellectual property, thus confirming
the intellectual property rights of prompt engineers, and potentially
supporting the flourishing of an open market for LLM prompts.
"
Scalable 3D Captioning with Pretrained Models,Tiange Luo,http://arxiv.org/pdf/2306.07279v2.pdf,2023-06-12,['cs.cv'],2306.07279v2.pdf,"  We introduce Cap3D, an automatic approach for generating descriptive text for
3D objects. This approach utilizes pretrained models from image captioning,
image-text alignment, and LLM to consolidate captions from multiple views of a
3D asset, completely side-stepping the time-consuming and costly process of
manual annotation. We apply Cap3D to the recently introduced large-scale 3D
dataset, Objaverse, resulting in 660k 3D-text pairs. Our evaluation, conducted
using 41k human annotations from the same dataset, demonstrates that Cap3D
surpasses human-authored descriptions in terms of quality, cost, and speed.
Through effective prompt engineering, Cap3D rivals human performance in
generating geometric descriptions on 17k collected annotations from the ABO
dataset. Finally, we finetune Text-to-3D models on Cap3D and human captions,
show that Cap3D outperforms, and benchmark state-of-the-art methods including
Point-E, Shap-E, and DreamFusion.
"
FALL-E: A Foley Sound Synthesis Model and Strategies,Minsung Kang,http://arxiv.org/pdf/2306.09807v2.pdf,2023-06-16,"['eess.as', 'cs.lg', 'cs.sd']",2306.09807v2.pdf,"  This paper introduces FALL-E, a foley synthesis system and its
training/inference strategies. The FALL-E model employs a cascaded approach
comprising low-resolution spectrogram generation, spectrogram super-resolution,
and a vocoder. We trained every sound-related model from scratch using our
extensive datasets, and utilized a pre-trained language model. We conditioned
the model with dataset-specific texts, enabling it to learn sound quality and
recording environment based on text input. Moreover, we leveraged external
language models to improve text descriptions of our datasets and performed
prompt engineering for quality, coherence, and diversity. FALL-E was evaluated
by an objective measure as well as listening tests in the DCASE 2023 challenge
Task 7. The submission achieved second place on average, with the best score
for diversity, second place for audio quality, and third place
for class fitness.
"
The Cultivated Practices of Text-to-Image Generation,Jonas Oppenlaender,http://arxiv.org/pdf/2306.11393v1.pdf,2023-06-20,"['cs.cy', 'cs.ai', 'k.4; j.5; i.2.0; k.5.m']",2306.11393v1.pdf,"  Humankind is entering a novel creative era in which anybody can synthesize
digital information using generative artificial intelligence (AI).
Text-to-image generation, in particular, has become vastly popular and millions
of practitioners produce AI-generated images and AI art online. This chapter
first gives an overview of the key developments that enabled a healthy
co-creative online ecosystem around text-to-image generation to rapidly emerge,
followed by a high-level description of key elements in this ecosystem. A
particular focus is placed on prompt engineering, a creative practice that has
been embraced by the AI art community. It is then argued that the emerging
co-creative ecosystem constitutes an intelligent system on its own - a system
that supports human creativity but also potentially entraps future
generations and limits future development efforts in AI. The chapter discusses
the potential risks and dangers of cultivating this co-creative ecosystem, such
as the bias inherent in today's training data, potential quality degradation in
future image generation systems due to synthetic data becoming commonplace,
and the potential long-term effects of text-to-image generation on people's
imagination, ambitions, and development.
"
Solving and Generating NPR Sunday Puzzles with Large Language Models,Jingmiao Zhao,http://arxiv.org/pdf/2306.12255v1.pdf,2023-06-21,['cs.cl'],2306.12255v1.pdf,"  We explore the ability of large language models to solve and generate puzzles
from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15
years of on-air puzzles. We evaluate four large language models using PUZZLEQA,
in both multiple choice and free response formats, and explore two prompt
engineering techniques to improve free response performance: chain-of-thought
reasoning and prompt summarization. We find that state-of-the-art large
language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5,
achieves 50.2% loose accuracy. However, in our few-shot puzzle generation
experiment, we find no evidence that models can generate puzzles: GPT-3.5
generates puzzles with answers that do not conform to the generated rules.
Puzzle generation remains a challenging task for future work.
"
Federated Large Language Model: A Position Paper,Chaochao Chen,http://arxiv.org/pdf/2307.08925v1.pdf,2023-07-18,"['cs.lg', 'cs.ai', 'cs.cl']",2307.08925v1.pdf,"  Large-scale language models (LLMs) have received significant attention and
found diverse applications across various domains, but their development
encounters challenges in real-world scenarios. These challenges arise from the
limited availability of public-domain data and the need to maintain
privacy with respect to private domain data. To address these issues, federated
learning (FL) has emerged as a promising technology that enables collaborative
training of shared models while preserving decentralized data. We propose the
concept of federated LLM, which comprises three key components, i.e., federated
LLM pre-training, federated LLM fine-tuning, and federated LLM prompt
engineering. For each component, we discuss its advantage over traditional LLM
training methods and propose specific engineering strategies for
implementation. Furthermore, we explore the novel challenges introduced by the
integration of FL and LLM. We analyze existing solutions and identify potential
obstacles faced by these solutions within the context of federated LLM.
"
Chit-Chat or Deep Talk: Prompt Engineering for Process Mining,Urszula Jessen,http://arxiv.org/pdf/2307.09909v1.pdf,2023-07-19,['cs.ai'],2307.09909v1.pdf,"  This research investigates the application of Large Language Models (LLMs) to
augment conversational agents in process mining, aiming to tackle its inherent
complexity and diverse skill requirements. While LLM advancements present novel
opportunities for conversational process mining, generating efficient outputs
is still a hurdle. We propose an innovative approach that amends many issues in
existing solutions, informed by prior research on Natural Language Processing
(NLP) for conversational agents. Leveraging LLMs, our framework improves both
accessibility and agent performance, as demonstrated by experiments on public
question and data sets. Our research sets the stage for future explorations
into LLMs' role in process mining and concludes with propositions for enhancing
LLM memory, implementing real-time user testing, and examining diverse data
sets.
"
Large Language Models can accomplish Business Process Management Tasks,Michael Grohs,http://arxiv.org/pdf/2307.09923v1.pdf,2023-07-19,['cs.cl'],2307.09923v1.pdf,"  Business Process Management (BPM) aims to improve organizational activities
and their outcomes by managing the underlying processes. To achieve this, it is
often necessary to consider information from various sources, including
unstructured textual documents. Therefore, researchers have developed several
BPM-specific solutions that extract information from textual documents using
Natural Language Processing techniques. These solutions are specific to their
respective tasks and cannot address multiple process-related problems as a
general-purpose instrument. However, in light of the recent emergence of Large
Language Models (LLMs) with remarkable reasoning capabilities, such a
general-purpose instrument with multiple applications now appears attainable.
In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by
applying a specific LLM to three exemplary tasks: mining imperative process
models from textual descriptions, mining declarative process models from
textual descriptions, and assessing the suitability of process tasks from
textual descriptions for robotic process automation. We show that, without
extensive configuration or prompt engineering, LLMs perform comparably to or
better than existing solutions and discuss implications for future BPM research
as well as practical usage.
"
SentimentGPT: Exploiting GPT for Advanced Sentiment Analysis and its  Departure from Current Machine Learning,Kiana Kheiri,http://arxiv.org/pdf/2307.10234v2.pdf,2023-07-16,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.si']",2307.10234v2.pdf,"  This study presents a thorough examination of various Generative Pretrained
Transformer (GPT) methodologies in sentiment analysis, specifically in the
context of Task 4 on the SemEval 2017 dataset. Three primary strategies are
employed: 1) prompt engineering using the advanced GPT-3.5 Turbo, 2)
fine-tuning GPT models, and 3) an inventive approach to embedding
classification. The research yields detailed comparative insights among these
strategies and individual GPT models, revealing their unique strengths and
potential limitations. Additionally, the study compares these GPT-based
methodologies with other current, high-performing models previously used with
the same dataset. The results illustrate the significant superiority of the GPT
approaches in terms of predictive performance, exceeding the state-of-the-art
by more than 22% in F1-score. Further, the paper sheds light on common
challenges in sentiment analysis tasks, such as understanding context and
detecting sarcasm. It underscores the enhanced capabilities of the GPT models
to effectively handle these complexities. Taken together, these findings
highlight the promising potential of GPT models in sentiment analysis, setting
the stage for future research in this field. The code can be found at
https://github.com/DSAatUSU/SentimentGPT
"
Domain Knowledge Distillation from Large Language Model: An Empirical  Study in the Autonomous Driving Domain,Yun Tang,http://arxiv.org/pdf/2307.11769v1.pdf,2023-07-17,['cs.cl'],2307.11769v1.pdf,"  Engineering knowledge-based (or expert) systems require extensive manual
effort and domain knowledge. As Large Language Models (LLMs) are trained using
an enormous amount of cross-domain knowledge, it becomes possible to automate
such engineering processes. This paper presents an empirical automation and
semi-automation framework for domain knowledge distillation using prompt
engineering and the LLM ChatGPT. We assess the framework empirically in the
autonomous driving domain and present our key observations. In our
implementation, we construct the domain knowledge ontology by ""chatting"" with
ChatGPT. The key finding is that while fully automated domain ontology
construction is possible, human supervision and early intervention typically
improve efficiency and output quality as they lessen the effects of response
randomness and the butterfly effect. We, therefore, also develop a web-based
distillation assistant enabling supervision and flexible intervention at
runtime. We hope our findings and tools could inspire future research toward
revolutionizing the engineering of knowledge-based systems across application
domains.
"
Copilot for Xcode: Exploring AI-Assisted Programming by Prompting  Cloud-based Large Language Models,Chee Wei Tan,http://arxiv.org/pdf/2307.14349v1.pdf,2023-07-08,"['cs.se', 'cs.ai']",2307.14349v1.pdf,"  This paper presents an AI-assisted programming tool called Copilot for Xcode
for program composition and design to support human software developers. By
seamlessly integrating cloud-based Large Language Models (LLM) with Apple's
local development environment, Xcode, this tool enhances productivity and
unleashes creativity for software development in the Apple software ecosystem
(e.g., iOS apps, macOS). Leveraging advanced natural language processing (NLP)
techniques, Copilot for Xcode effectively processes source code tokens and
patterns within code repositories, enabling features such as code generation,
autocompletion, documentation, and error detection. Software developers can
also query and make ""small"" decisions for program composition, some of which
can be made simultaneously; this is facilitated through prompt engineering
in a chat interface of Copilot for Xcode. Finally, we present simple case
studies as evidence of the effectiveness of utilizing NLP in Xcode to prompt
popular LLM services like OpenAI ChatGPT for program composition and design.
"
Backdoor Attacks for In-Context Learning with Language Models,Nikhil Kandpal,http://arxiv.org/pdf/2307.14692v1.pdf,2023-07-27,['cs.cr'],2307.14692v1.pdf,"  Because state-of-the-art language models are expensive to train, most
practitioners must make use of one of the few publicly available language
models or language model APIs. This consolidation of trust increases the
potency of backdoor attacks, where an adversary tampers with a machine learning
model in order to make it perform some malicious behavior on inputs that
contain a predefined backdoor trigger. We show that the in-context learning
ability of large language models significantly complicates the question of
developing backdoor attacks, as a successful backdoor must work against various
prompting strategies and should not affect the model's general purpose
capabilities. We design a new attack for eliciting targeted misclassification
when language models are prompted to perform a particular target task and
demonstrate the feasibility of this attack by backdooring multiple large
language models ranging in size from 1.3 billion to 6 billion parameters.
Finally, we study defenses to mitigate the potential harms of our attack: for
example, while in the white-box setting we show that fine-tuning models for as
few as 500 steps suffices to remove the backdoor behavior, in the black-box
setting we are unable to develop a successful defense that relies on prompt
engineering alone.
"
Do LLMs Possess a Personality? Making the MBTI Test an Amazing  Evaluation for Large Language Models,Keyu Pan,http://arxiv.org/pdf/2307.16180v1.pdf,2023-07-30,['cs.cl'],2307.16180v1.pdf,"  The field of large language models (LLMs) has made significant progress, and
their knowledge storage capacity is approaching that of human beings.
Furthermore, advanced techniques, such as prompt learning and reinforcement
learning, are being employed to address ethical concerns and hallucination
problems associated with LLMs, bringing them closer to aligning with human
values. This situation naturally raises the question of whether LLMs with
human-like abilities possess a human-like personality. In this paper, we aim to
investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a
widespread human personality assessment tool, as an evaluation metric for LLMs.
Specifically, we conduct extensive experiments to explore: 1) the
personality types of different LLMs, 2) the possibility of changing the
personality types by prompt engineering, and 3) how the training dataset
affects the model's personality. Although the MBTI is not a rigorous assessment,
it can still reflect the similarity between LLMs and human personality. In
practice, the MBTI has the potential to serve as a rough indicator. Our codes
are available at
https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti.
"
Alpha-GPT: Human-AI Interactive Alpha Mining for Quantitative Investment,Saizhuo Wang,http://arxiv.org/pdf/2308.00016v1.pdf,2023-07-31,"['q-fin.cp', 'cs.ai', 'cs.cl']",2308.00016v1.pdf,"  One of the most important tasks in quantitative investment research is mining
new alphas (effective trading signals or factors). Traditional alpha mining
methods, either hand-crafted factor synthesizing or algorithmic factor mining
(e.g., search with genetic programming), have inherent limitations, especially
in implementing the ideas of quants. In this work, we propose a new alpha
mining paradigm by introducing human-AI interaction, and a novel prompt
engineering algorithmic framework to implement this paradigm by leveraging the
power of large language models. Moreover, we develop Alpha-GPT, a new
interactive alpha mining system framework that provides a heuristic way to
``understand'' the ideas of quant researchers and outputs creative, insightful,
and effective alphas. We demonstrate the effectiveness and advantage of
Alpha-GPT via a number of alpha mining experiments.
"
Optimizing Machine Translation through Prompt Engineering: An  Investigation into ChatGPT's Customizability,Masaru Yamada,http://arxiv.org/pdf/2308.01391v1.pdf,2023-08-02,['cs.cl'],2308.01391v1.pdf,"  This paper explores the influence of integrating the purpose of the
translation and the target audience into prompts on the quality of translations
produced by ChatGPT. Drawing on previous translation studies, industry
practices, and ISO standards, the research underscores the significance of the
pre-production phase in the translation process. The study reveals that the
inclusion of suitable prompts in large-scale language models like ChatGPT can
yield flexible translations, a feat yet to be realized by conventional Machine
Translation (MT). The research scrutinizes the changes in translation quality
when prompts are used to generate translations that meet specific conditions.
The evaluation is conducted from a practicing translator's viewpoint, both
subjectively and qualitatively, supplemented by the use of OpenAI's word
embedding API for cosine similarity calculations. The findings suggest that the
integration of the purpose and target audience into prompts can indeed modify
the generated translations, generally enhancing the translation quality by
industry standards. The study also demonstrates the practical application of
the ""good translation"" concept, particularly in the context of marketing
documents and culturally dependent idioms.
"
InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent,Po-Lin Chen,http://arxiv.org/pdf/2308.01552v1.pdf,2023-08-03,"['cs.ai', 'cs.cl', 'cs.lg']",2308.01552v1.pdf,"  This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on an interactive
decision-making benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it numerous roles, such as a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning.
"
RTLLM: An Open-Source Benchmark for Design RTL Generation with Large  Language Model,Yao Lu,http://arxiv.org/pdf/2308.05345v2.pdf,2023-08-10,"['cs.lg', 'cs.ar']",2308.05345v2.pdf,"  Inspired by the recent success of large language models (LLMs) like ChatGPT,
researchers have started to explore the adoption of LLMs for agile hardware design,
such as generating design RTL based on natural-language instructions. However,
in existing works, the target designs are all relatively simple, small in
scale, and proposed by the authors themselves, making a fair comparison
among different LLM solutions challenging. In addition, many prior works only
focus on the design correctness, without evaluating the design qualities of
generated design RTL. In this work, we propose an open-source benchmark named
RTLLM, for generating design RTL with natural language instructions. To
systematically evaluate the auto-generated design RTL, we summarize three
progressive goals, named syntax goal, functionality goal, and design quality
goal. This benchmark can automatically provide a quantitative evaluation of any
given LLM-based solution. Furthermore, we propose an easy-to-use yet
surprisingly effective prompt engineering technique named self-planning, which
proves to significantly boost the performance of GPT-3.5 in our proposed
benchmark.
"
"LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked",Mansi Phute,http://arxiv.org/pdf/2308.07308v3.pdf,2023-08-14,"['cs.cl', 'cs.ai']",2308.07308v3.pdf,"  Large language models (LLMs) are popular for high-quality text generation but
can produce harmful content, even when aligned with human values through
reinforcement learning. Adversarial prompts can bypass their safety measures.
We propose LLM Self Defense, a simple approach to defend against these attacks
by having an LLM screen the induced responses. Our method does not require any
fine-tuning, input preprocessing, or iterative output generation. Instead, we
incorporate the generated content into a pre-defined prompt and employ another
instance of an LLM to analyze the text and predict whether it is harmful. We
test LLM Self Defense on GPT 3.5 and Llama 2, two of the current most prominent
LLMs against various types of attacks, such as forcefully inducing affirmative
responses to prompts and prompt engineering attacks. Notably, LLM Self Defense
succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5
and Llama 2.
"
Data Race Detection Using Large Language Models,Le Chen,http://arxiv.org/pdf/2308.07505v2.pdf,2023-08-15,"['cs.lg', 'cs.cl']",2308.07505v2.pdf,"  Large language models (LLMs) are demonstrating significant promise as an
alternate strategy to facilitate analyses and optimizations of high-performance
computing programs, circumventing the need for resource-intensive manual tool
creation. In this paper, we explore a novel LLM-based data race detection
approach combining prompt engineering and fine-tuning techniques. We create
a dedicated dataset named DRB-ML, which is derived from DataRaceBench, with
fine-grained labels showing the presence of data race pairs and their associated
variables, line numbers, and read/write information. DRB-ML is then used to
evaluate representative LLMs and fine-tune open-source ones. Our experiment
shows that LLMs can be a viable approach to data race detection. However, they
still cannot compete with traditional data race detection tools when we need
detailed information about variable pairs causing data races.
"
Accelerated materials language processing enabled by GPT,Jaewoong Choi,http://arxiv.org/pdf/2308.09354v1.pdf,2023-08-18,"['cs.cl', 'cond-mat.mtrl-sci']",2308.09354v1.pdf,"  Materials language processing (MLP) is one of the key facilitators of
materials science research, as it enables the extraction of structured
information from massive materials science literature. Prior works suggested
high-performance MLP models for text classification, named entity recognition
(NER), and extractive question answering (QA), which require complex model
architecture, exhaustive fine-tuning and a large number of human-labelled
datasets. In this study, we develop generative pretrained transformer
(GPT)-enabled pipelines where the complex architectures of prior MLP models are
replaced with strategic designs of prompt engineering. First, we develop a
GPT-enabled document classification method for screening relevant documents,
achieving accuracy and reliability comparable to prior models with only a
small dataset. Secondly, for the NER task, we design entity-centric prompts,
and few-shot learning with them improved performance on most entities
in three open datasets. Finally, we develop a GPT-enabled extractive
QA model, which provides improved performance and shows the possibility of
automatically correcting annotations. While our findings confirm the potential
of GPT-enabled MLP models as well as their value in terms of reliability and
practicability, our scientific methods and systematic approach are applicable
to any materials science domain to accelerate the information extraction of
scientific literature.
"
Data-to-text Generation for Severely Under-Resourced Languages with  GPT-3.5: A Bit of Help Needed from Google Translate,Michela Lorandi,http://arxiv.org/pdf/2308.09957v1.pdf,2023-08-19,"['cs.cl', 'cs.ai']",2308.09957v1.pdf,"  LLMs like GPT are great at tasks involving English which dominates in their
training data. In this paper, we look at how they cope with tasks involving
languages that are severely under-represented in their training data, in the
context of data-to-text generation for Irish, Maltese, Welsh and Breton. During
the prompt-engineering phase we tested a range of prompt types and formats on
GPT-3.5 and 4 with a small sample of example input/output pairs. We then fully
evaluated the two most promising prompts in two scenarios: (i) direct
generation into the under-resourced language, and (ii) generation into English
followed by translation into the under-resourced language. We find that
few-shot prompting works better for direct generation into under-resourced
languages, but that the difference disappears when pivoting via English. The
few-shot + translation system variants were submitted to the WebNLG 2023 shared
task where they outperformed competitor systems by substantial margins in all
languages on all metrics. We conclude that good performance on under-resourced
languages can be achieved out of the box with state-of-the-art LLMs. However,
our best results (for Welsh) remain well below the lowest ranked English system
at WebNLG'20.
"
Activation Addition: Steering Language Models Without Optimization,Alexander Matt Turner,http://arxiv.org/pdf/2308.10248v2.pdf,2023-08-20,"['cs.cl', 'cs.lg']",2308.10248v2.pdf,"  Reliably controlling the behavior of large language models is a pressing open
problem. Existing methods include supervised finetuning, reinforcement learning
from human feedback, prompt engineering, and guided decoding. We instead
investigate activation engineering: modifying activations at inference time to
predictably alter model behavior. In particular, we bias the forward pass with
an added 'steering vector' implicitly specified through natural language.
  Unlike past work which learned these steering vectors, our Activation
Addition (ActAdd) method computes them by taking the activation differences
that result from pairs of prompts. We demonstrate ActAdd on GPT-2 on
OpenWebText and ConceptNet. Our inference-time approach yields control over
high-level properties of output and preserves off-target model performance. It
involves far less compute and implementation effort than finetuning, allows
users to provide natural language specifications, and its overhead scales
naturally with model size.
"
Situated Natural Language Explanations,Zining Zhu,http://arxiv.org/pdf/2308.14115v1.pdf,2023-08-27,['cs.cl'],2308.14115v1.pdf,"  Natural language is among the most accessible tools for explaining decisions
to humans, and large pretrained language models (PLMs) have demonstrated
impressive abilities to generate coherent natural language explanations (NLE).
The existing NLE research perspectives do not take the audience into account.
An NLE can have high textual quality, but it might not accommodate audiences'
needs and preferences. To address this limitation, we propose an alternative
perspective, situated NLE, including a situated generation framework and a
situated evaluation framework. On the generation side, we propose simple prompt
engineering methods that adapt the NLEs to situations. In human studies, the
annotators preferred the situated NLEs. On the evaluation side, we set up
automated evaluation scores in lexical, semantic, and pragmatic categories. The
scores can be used to select the most suitable prompts to generate NLEs.
Situated NLE provides a perspective to conduct further research on automatic
NLE generations.
"
"FurChat: An Embodied Conversational Agent using LLMs, Combining Open and  Closed-Domain Dialogue with Facial Expressions",Neeraj Cherakara,http://arxiv.org/pdf/2308.15214v2.pdf,2023-08-29,"['cs.cl', 'cs.ai', 'cs.hc', 'cs.ro']",2308.15214v2.pdf,"  We demonstrate an embodied conversational agent that can function as a
receptionist and generate a mixture of open and closed-domain dialogue along
with facial expressions, by using a large language model (LLM) to develop an
engaging conversation. We deployed the system onto a Furhat robot, which is
highly expressive and capable of using both verbal and nonverbal cues during
interaction. The system was designed specifically for the National Robotarium
to interact with visitors through natural conversations, providing them with
information about the facilities, research, news, upcoming events, etc. The
system utilises the state-of-the-art GPT-3.5 model to generate such information
along with domain-general conversations and facial expressions based on prompt
engineering.
"
Can Prompt Learning Benefit Radiology Report Generation?,Jun Wang,http://arxiv.org/pdf/2308.16269v1.pdf,2023-08-30,['cs.cv'],2308.16269v1.pdf,"  Radiology report generation aims to automatically provide clinically
meaningful descriptions of radiology images such as MRI and X-ray. Although
great success has been achieved in natural scene image captioning tasks,
radiology report generation remains challenging and requires prior medical
knowledge. In this paper, we propose PromptRRG, a method that utilizes prompt
learning to activate a pretrained model and incorporate prior knowledge. Since
prompt learning for radiology report generation has not been explored before,
we begin with investigating prompt designs and categorise them based on varying
levels of knowledge: common, domain-specific and disease-enriched prompts.
Additionally, we propose an automatic prompt learning mechanism to alleviate
the burden of manual prompt engineering. This is the first work to
systematically examine the effectiveness of prompt learning for radiology
report generation. Experimental results on the largest radiology report
generation benchmark, MIMIC-CXR, demonstrate that our proposed method achieves
state-of-the-art performance. Code will be available upon acceptance.
"
Large Language Models as Data Preprocessors,Haochen Zhang,http://arxiv.org/pdf/2308.16361v1.pdf,2023-08-30,"['cs.ai', 'cs.db']",2308.16361v1.pdf,"  Large Language Models (LLMs), typified by OpenAI's GPT series and Meta's
LLaMA variants, have marked a significant advancement in artificial
intelligence. Trained on vast amounts of text data, LLMs are capable of
understanding and generating human-like text across a diverse range of topics.
This study expands on the applications of LLMs, exploring their potential in
data preprocessing, a critical stage in data mining and analytics applications.
We delve into the applicability of state-of-the-art LLMs such as GPT-3.5,
GPT-4, and Vicuna-13B for error detection, data imputation, schema matching,
and entity matching tasks. Alongside showcasing the inherent capabilities of
LLMs, we highlight their limitations, particularly in terms of computational
expense and inefficiency. We propose an LLM-based framework for data
preprocessing, which integrates cutting-edge prompt engineering techniques,
coupled with traditional methods like contextualization and feature selection,
to improve the performance and efficiency of these models. The effectiveness of
LLMs in data preprocessing is evaluated through an experimental study spanning
12 datasets. GPT-4 emerged as a standout, achieving 100% accuracy or F1 score
on 4 datasets, suggesting LLMs' immense potential in these tasks. Despite
certain limitations, our study underscores the promise of LLMs in this domain
and anticipates future developments to overcome current hurdles.
"
Developing a Scalable Benchmark for Assessing Large Language Models in  Knowledge Graph Engineering,Lars-Peter Meyer,http://arxiv.org/pdf/2308.16622v1.pdf,2023-08-31,"['cs.ai', 'cs.cl', 'cs.db']",2308.16622v1.pdf,"  As the field of Large Language Models (LLMs) evolves at an accelerated pace,
the critical need to assess and monitor their performance emerges. We introduce
a benchmarking framework focused on knowledge graph engineering (KGE)
accompanied by three challenges addressing syntax and error correction, fact
extraction, and dataset generation. We show that, while being a useful tool, LLMs
are yet unfit to assist in knowledge graph generation with zero-shot prompting.
Consequently, our LLM-KG-Bench framework provides automatic evaluation and
storage of LLM responses as well as statistical data and visualization tools to
support tracking of prompt engineering and model performance.
"
Linking microblogging sentiments to stock price movement: An application  of GPT-4,Rick Steinert,http://arxiv.org/pdf/2308.16771v1.pdf,2023-08-31,"['q-fin.st', 'q-fin.cp']",2308.16771v1.pdf,"  This paper investigates the potential improvement of the GPT-4 Large
Language Model (LLM) in comparison to BERT for modeling same-day daily stock
price movements of Apple and Tesla in 2017, based on sentiment analysis of
microblogging messages. We recorded daily adjusted closing prices and
translated them into up-down movements. Sentiment for each day was extracted
from messages on the Stocktwits platform using both LLMs. We develop a novel
method to engineer a comprehensive prompt for contextual sentiment analysis
which unlocks the true capabilities of modern LLMs. This enables us to carefully
retrieve sentiments, perceived advantages or disadvantages, and the relevance
towards the analyzed company. Logistic regression is used to evaluate whether
the extracted message contents reflect stock price movements. As a result,
GPT-4 exhibited substantial accuracy, outperforming BERT in five out of six
months and substantially exceeding a naive buy-and-hold strategy, reaching a
peak accuracy of 71.47% in May. The study also highlights the importance of
prompt engineering in obtaining desired outputs from GPT-4's contextual
abilities. However, the costs of deploying GPT-4 and the need for fine-tuning
prompts highlight some practical considerations for its use.
"
LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for  Vision-Language Models,Cheng Shi,http://arxiv.org/pdf/2309.01155v2.pdf,2023-09-03,['cs.cv'],2309.01155v2.pdf,"  Prompt engineering is a powerful tool used to enhance the performance of
pre-trained models on downstream tasks. For example, providing the prompt
""Let's think step by step"" improved GPT-3's reasoning accuracy to 63% on
MultiArith, while prompting ""a photo of"" filled with a class name enables CLIP to
achieve 80% zero-shot accuracy on ImageNet. While previous research has
explored prompt learning for the visual modality, analyzing what constitutes a
good visual prompt specifically for image recognition is limited. In addition,
existing visual prompt tuning methods' generalization ability is worse than
text-only prompt tuning. This paper explores our key insight: synthetic text
images are good visual prompts for vision-language models! To achieve that, we
propose our LoGoPrompt, which reformulates the classification objective to the
visual prompt selection and addresses the chicken-and-egg challenge of first
adding synthetic text images as class-wise visual prompts or predicting the
class first. Without any trainable visual prompt parameters, experimental
results on 16 datasets demonstrate that our method consistently outperforms
state-of-the-art methods in few-shot learning, base-to-new generalization, and
domain generalization.
"
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning,Xinyi Wang,http://arxiv.org/pdf/2309.04663v2.pdf,2023-09-09,"['cs.cl', 'cs.ai']",2309.04663v2.pdf,"  Learning paradigms for large language models (LLMs) currently tend to fall
within either in-context learning (ICL) or full fine-tuning. Each of these
comes with their own trade-offs based on available data, model size, compute
cost, ease-of-use, and final quality with neither solution performing well
across-the-board. In this article, we first describe ICL and fine-tuning
paradigms in a way that highlights their natural connections. Based on these
connections, we propose a new learning paradigm called FIAT that fuses the best
of these paradigms together, enabling prompt-engineered instructions and
chain-of-thought reasoning with the very largest models while also using
similar methods to perform parameter updates on a modestly-sized LLM with
parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of
multilingual tasks and observe that FIAT performs better than both ICL and
fine-tuning at scales ranging from 100-10,000 training examples. We hope that
FIAT provides a practical way of harnessing the full potential of LLMs without
needing to make a hard choice between learning paradigms.
"
Toward Reproducing Network Research Results Using Large Language Models,Qiao Xiang,http://arxiv.org/pdf/2309.04716v1.pdf,2023-09-09,"['cs.lg', 'cs.ai', 'cs.cl']",2309.04716v1.pdf,"  Reproducing research results in the networking community is important for
both academia and industry. The current best practice typically resorts to
three approaches: (1) looking for publicly available prototypes; (2) contacting
the authors to get a private prototype; and (3) manually implementing a
prototype following the description of the publication. However, most published
network research does not have public prototypes and private prototypes are
hard to get. As such, most reproducing efforts are spent on manual
implementation based on the publications, which is time-consuming,
labor-intensive, and error-prone. In this paper, we boldly propose reproducing network
research results using the emerging large language models (LLMs). In
particular, we first prove its feasibility with a small-scale experiment, in
which four students with essential networking knowledge each reproduces a
different networking system published in prominent conferences and journals by
prompt engineering ChatGPT. We report the experiment's observations and lessons
and discuss future open research questions of this proposal. This work raises
no ethical issue.
"
Detecting Natural Language Biases with Prompt-based Learning,Md Abdul Aowal,http://arxiv.org/pdf/2309.05227v1.pdf,2023-09-11,"['cs.cl', 'cs.ai']",2309.05227v1.pdf,"  In this project, we want to explore the newly emerging field of prompt
engineering and apply it to the downstream task of detecting LM biases. More
concretely, we explore how to design prompts that can indicate 4 different
types of biases: (1) gender, (2) race, (3) sexual orientation, and (4)
religion-based. Within our project, we experiment with different manually
crafted prompts that can draw out the subtle biases that may be present in the
language model. We apply these prompts to multiple variations of popular and
well-recognized models: BERT, RoBERTa, and T5 to evaluate their biases. We
provide a comparative analysis of these models and assess them using a two-fold
method: use human judgment to decide whether model predictions are biased and
utilize model-level judgment (through further prompts) to understand if a model
can self-diagnose the biases of its own prediction.
"
Two Timin': Repairing Smart Contracts With A Two-Layered Approach,Abhinav Jain,http://arxiv.org/pdf/2309.07841v1.pdf,2023-09-14,"['cs.cr', 'cs.ai']",2309.07841v1.pdf,"  Due to the modern relevance of blockchain technology, smart contracts present
both substantial risks and benefits. Vulnerabilities within them can trigger a
cascade of consequences, resulting in significant losses. Many current papers
primarily focus on classifying smart contracts for malicious intent, often
relying on limited contract characteristics, such as bytecode or opcode. This
paper proposes a novel, two-layered framework: 1) classifying and 2) directly
repairing malicious contracts. Slither's vulnerability report is combined with
source code and passed through a pre-trained RandomForestClassifier (RFC) and
Large Language Models (LLMs), classifying and repairing each suggested
vulnerability. Experiments demonstrate the effectiveness of fine-tuned and
prompt-engineered LLMs. The smart contract repair models, built from
pre-trained GPT-3.5-Turbo and fine-tuned Llama-2-7B models, reduced the overall
vulnerability count by 97.5% and 96.7% respectively. A manual inspection of
repaired contracts shows that all retain functionality, indicating that the
proposed method is appropriate for automatic batch classification and repair of
vulnerabilities in smart contracts.
"
Large Language Models for Failure Mode Classification: An Investigation,Michael Stewart,http://arxiv.org/pdf/2309.08181v1.pdf,2023-09-15,['cs.cl'],2309.08181v1.pdf,"  In this paper we present the first investigation into the effectiveness of
Large Language Models (LLMs) for Failure Mode Classification (FMC). FMC, the
task of automatically labelling an observation with a corresponding failure
mode code, is a critical task in the maintenance domain as it reduces the need
for reliability engineers to spend their time manually analysing work orders.
We detail our approach to prompt engineering to enable an LLM to predict the
failure mode of a given observation using a restricted code list. We
demonstrate that the performance of a GPT-3.5 model (F1=0.80) fine-tuned on
annotated data is a significant improvement over a currently available text
classification model (F1=0.60) trained on the same annotated data set. The
fine-tuned model also outperforms the out-of-the-box GPT-3.5 (F1=0.46). This
investigation reinforces the need for high quality fine-tuning data sets for
domain-specific tasks using LLMs.
"
Safurai 001: New Qualitative Approach for Code LLM Evaluation,Davide Cifarelli,http://arxiv.org/pdf/2309.11385v1.pdf,2023-09-20,['cs.cl'],2309.11385v1.pdf,"  This paper presents Safurai-001, a new Large Language Model (LLM) with
significant potential in the domain of coding assistance. Driven by recent
advancements in coding LLMs, Safurai-001 competes in performance with the
latest models like WizardCoder [Xu et al., 2023], PanguCoder [Shen et al.,
2023] and Phi-1 [Gunasekar et al., 2023] but aims to deliver a more
conversational interaction. By capitalizing on the progress in data engineering
(including latest techniques of data transformation and prompt engineering) and
instruction tuning, this new model promises to stand toe-to-toe with recent
closed and open source developments. Recognizing the need for an efficacious
evaluation metric for coding LLMs, this paper also introduces GPT4-based
MultiParameters, an evaluation benchmark that harnesses varied parameters to
present a comprehensive insight into the models' functioning and performance.
Our assessment shows that Safurai-001 can outperform GPT-3.5 by 1.58% and
WizardCoder by 18.78% in the Code Readability parameter and more.
"
A Practical Survey on Zero-shot Prompt Design for In-context Learning,Yinheng Li,http://arxiv.org/pdf/2309.13205v1.pdf,2023-09-22,"['cs.cl', 'cs.ai', 'cs.et', 'cs.lg']",2309.13205v1.pdf,"  The remarkable advancements in large language models (LLMs) have brought
about significant improvements in Natural Language Processing(NLP) tasks. This
paper presents a comprehensive review of in-context learning techniques,
focusing on different types of prompts, including discrete, continuous,
few-shot, and zero-shot, and their impact on LLM performance. We explore
various approaches to prompt design, such as manual design, optimization
algorithms, and evaluation methods, to optimize LLM performance across diverse
tasks. Our review covers key research studies in prompt engineering, discussing
their methodologies and contributions to the field. We also delve into the
challenges faced in evaluating prompt performance, given the absence of a
single ""best"" prompt and the importance of considering multiple metrics. In
conclusion, the paper highlights the critical role of prompt design in
harnessing the full potential of LLMs and provides insights into the
combination of manual design, optimization techniques, and rigorous evaluation
for more effective and efficient use of LLMs in various NLP tasks.
"
A Chat About Boring Problems: Studying GPT-based text normalization,Yang Zhang,http://arxiv.org/pdf/2309.13426v1.pdf,2023-09-23,"['cs.cl', 'cs.ai']",2309.13426v1.pdf,"  Text normalization - the conversion of text from written to spoken form - is
traditionally assumed to be an ill-formed task for language models. In this
work, we argue otherwise. We empirically show the capacity of Large-Language
Models (LLM) for text normalization in few-shot scenarios. Combining
self-consistency reasoning with linguistic-informed prompt engineering, we find
LLM-based text normalization to achieve error rates around 40% lower than top
normalization systems. Further, upon error analysis, we note key limitations in
the conventional design of text normalization tasks. We create a new taxonomy
of text normalization errors and apply it to results from GPT-3.5-Turbo and
GPT-4.0. Through this new framework, we can identify strengths and weaknesses
of GPT-based TN, opening opportunities for future work.
"
DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs,Gyeongmin Kim,http://arxiv.org/pdf/2309.16031v1.pdf,2023-09-27,['cs.ro'],2309.16031v1.pdf,"  Mobile robots often rely on pre-existing maps for effective path planning and
navigation. However, when these maps are unavailable, particularly in
unfamiliar environments, a different approach becomes essential. This paper
introduces DynaCon, a novel system designed to provide mobile robots with
contextual awareness and dynamic adaptability during navigation, eliminating
the reliance on traditional maps. DynaCon integrates real-time feedback with an
object server, prompt engineering, and navigation modules. By harnessing the
capabilities of Large Language Models (LLMs), DynaCon not only understands
patterns within given numeric series but also excels at categorizing objects
into matched spaces. This facilitates a dynamic path planner imbued with
contextual awareness. We validated the effectiveness of DynaCon through an
experiment where a robot successfully navigated to its goal using reasoning.
Source code and experiment videos for this work can be found at:
https://sites.google.com/view/dynacon.
"
Cyber Sentinel: Exploring Conversational Agents in Streamlining Security  Tasks with GPT-4,Mehrdad Kaheh,http://arxiv.org/pdf/2309.16422v1.pdf,2023-09-28,['cs.cr'],2309.16422v1.pdf,"  In an era where cyberspace is both a battleground and a backbone of modern
society, the urgency of safeguarding digital assets against ever-evolving
threats is paramount. This paper introduces Cyber Sentinel, an innovative
task-oriented cybersecurity dialogue system that is effectively capable of
managing two core functions: explaining potential cyber threats within an
organization to the user, and taking proactive/reactive security actions when
instructed by the user. Cyber Sentinel embodies the fusion of artificial
intelligence, cybersecurity domain expertise, and real-time data analysis to
combat the multifaceted challenges posed by cyber adversaries. This article
delves into the process of creating such a system and how it can interact with
other components typically found in cybersecurity organizations. Our work is a
novel approach to task-oriented dialogue systems, leveraging the power of
chaining GPT-4 models combined with prompt engineering across all sub-tasks. We
also highlight its pivotal role in enhancing cybersecurity communication and
interaction, concluding that this framework not only enhances the system's
transparency (Explainable AI) but also streamlines the decision-making process
and the response to threats (Actionable AI), thereby marking a significant
advancement in the realm of cybersecurity communication.
"
"A Sign Language Recognition System with Pepper, Lightweight-Transformer,  and LLM",JongYoon Lim,http://arxiv.org/pdf/2309.16898v1.pdf,2023-09-28,"['cs.ro', 'cs.cl', 'cs.cv', 'cs.hc']",2309.16898v1.pdf,"  This research explores using lightweight deep neural network architectures to
enable the humanoid robot Pepper to understand American Sign Language (ASL) and
facilitate non-verbal human-robot interaction. First, we introduce a
lightweight and efficient model for ASL understanding optimized for embedded
systems, ensuring rapid sign recognition while conserving computational
resources. Building upon this, we employ large language models (LLMs) for
intelligent robot interactions. Through intricate prompt engineering, we tailor
interactions to allow the Pepper Robot to generate natural Co-Speech Gesture
responses, laying the foundation for more organic and intuitive humanoid-robot
dialogues. Finally, we present an integrated software pipeline, embodying
advancements in a socially aware AI interaction model. Leveraging the Pepper
Robot's capabilities, we demonstrate the practicality and effectiveness of our
approach in real-world scenarios. The results highlight a profound potential
for enhancing human-robot interaction through non-verbal interactions, bridging
communication gaps, and making technology more accessible and understandable.
"
SPELL: Semantic Prompt Evolution based on a LLM,Yujian Betterest Li,http://arxiv.org/pdf/2310.01260v1.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01260v1.pdf,"  Prompt engineering is a new paradigm for enhancing the performance of trained
neural network models. For optimizing text-style prompts, existing methods
usually individually operate small portions of a text step by step, which
either breaks the fluency or cannot globally adjust a prompt. Since large
language models (LLMs) have a powerful ability to generate coherent text token
by token, can we utilize LLMs for improving prompts? Based on this motivation,
in this paper, considering a trained LLM as a text generator, we attempt to
design a black-box evolution algorithm for automatically optimizing texts,
namely SPELL (Semantic Prompt Evolution based on a LLM). The proposed method is
evaluated with different LLMs and evolution parameters in different text tasks.
Experimental results show that SPELL can indeed rapidly improve prompts.
We further explore the evolution process and discuss the limitations,
potential directions, and future work.
"
Co-audit: tools to help humans double-check AI-generated content,Andrew D. Gordon,http://arxiv.org/pdf/2310.01297v1.pdf,2023-10-02,"['cs.hc', 'cs.ai', 'cs.cl', 'cs.pl']",2310.01297v1.pdf,"  Users are increasingly being warned to check AI-generated content for
correctness. Still, as LLMs (and other generative models) generate more complex
output, such as summaries, tables, or code, it becomes harder for the user to
audit or evaluate the output for quality or correctness. Hence, we are seeing
the emergence of tool-assisted experiences to help the user double-check a
piece of AI-generated content. We refer to these as co-audit tools. Co-audit
tools complement prompt engineering techniques: one helps the user construct
the input prompt, while the other helps them check the output response. As a
specific example, this paper describes recent research on co-audit tools for
spreadsheet computations powered by generative models. We explain why co-audit
experiences are essential for any application of generative AI where quality is
important and errors are consequential (as is common in spreadsheet
computations). We propose a preliminary list of principles for co-audit, and
outline research challenges.
"
Chain of Natural Language Inference for Reducing Large Language Model  Ungrounded Hallucinations,Deren Lei,http://arxiv.org/pdf/2310.03951v2.pdf,2023-10-06,"['cs.cl', 'cs.ai']",2310.03951v2.pdf,"  Large language models (LLMs) can generate fluent natural language texts when
given relevant documents as background context. This ability has attracted
considerable interest in developing industry applications of LLMs. However,
LLMs are prone to generate hallucinations that are not supported by the
provided sources. In this paper, we propose a hierarchical framework to detect
and mitigate such ungrounded hallucination. Our framework uses Chain of Natural
Language Inference (CoNLI) for hallucination detection and hallucination
reduction via post-editing. Our approach achieves state-of-the-art performance
on hallucination detection and enhances text quality through rewriting, using
LLMs without any fine-tuning or domain-specific prompt engineering. We show
that this simple plug-and-play framework can serve as an effective choice for
hallucination detection and reduction, achieving competitive performance across
various contexts.
"
LLM4VV: Developing LLM-Driven Testsuite for Compiler Validation,Christian Munley,http://arxiv.org/pdf/2310.04963v2.pdf,2023-10-08,['cs.ai'],2310.04963v2.pdf,"  Large language models (LLMs) are a new and powerful tool for a wide span of
applications involving natural language and demonstrate impressive code
generation abilities. In this paper, we explore the capability of
state-of-the-art LLMs, including closed-source options like OpenAI GPT-4 and
open-source alternatives like Meta AI Codellama, to automatically generate
tests and use these tests to validate and verify compiler implementations of a
directive-based programming paradigm, OpenACC. Our approach entails exploring
various prompt engineering techniques including a code template,
retrieval-augmented generation (RAG) with code template, expressive prompt
using RAG with code template, one-shot example, and RAG with one-shot example.
This paper focuses on (a) exploring the capabilities of the latest LLMs for
code generation, (b) investigating prompt and fine-tuning methods, and (c)
analyzing the outcomes of LLM-generated tests.
"
Large Language Models for Propaganda Detection,Kilian Sprenkamp,http://arxiv.org/pdf/2310.06422v1.pdf,2023-10-10,"['cs.cl', 'cs.ai']",2310.06422v1.pdf,"  The prevalence of propaganda in our digital society poses a challenge to
societal harmony and the dissemination of truth. Detecting propaganda through
NLP in text is challenging due to subtle manipulation techniques and contextual
dependencies. To address this issue, we investigate the effectiveness of modern
Large Language Models (LLMs) such as GPT-3 and GPT-4 for propaganda detection.
We conduct experiments using the SemEval-2020 task 11 dataset, which features
news articles labeled with 14 propaganda techniques as a multi-label
classification problem. Five variations of GPT-3 and GPT-4 are employed,
incorporating various prompt engineering and fine-tuning strategies across the
different models. We evaluate the models' performance by assessing metrics such
as F1 score, precision, and recall, comparing the results with the
current state-of-the-art approach using RoBERTa. Our findings demonstrate that
GPT-4 achieves comparable results to the current state-of-the-art. Further,
this study analyzes the potential and challenges of LLMs in complex tasks like
propaganda detection.
"
Forgetful Large Language Models: Lessons Learned from Using LLMs in  Robot Programming,Juo-Tung Chen,http://arxiv.org/pdf/2310.06646v1.pdf,2023-10-10,['cs.ro'],2310.06646v1.pdf,"  Large language models offer new ways of empowering people to program robot
applications, namely code generation via prompting. However, the code generated
by LLMs is susceptible to errors. This work reports a preliminary exploration
that empirically characterizes common errors produced by LLMs in robot
programming. We categorize these errors into two phases: interpretation and
execution. In this work, we focus on errors in execution and observe that they
are caused by LLMs being ""forgetful"" of key information provided in user
prompts. Based on this observation, we propose prompt engineering tactics
designed to reduce errors in execution. We then demonstrate the effectiveness
of these tactics with three language models: ChatGPT, Bard, and LLaMA-2.
Finally, we discuss lessons learned from using LLMs in robot programming and
call for the benchmarking of LLM-powered end-user development of robot
applications.
"
LLMs Killed the Script Kiddie: How Agents Supported by Large Language  Models Change the Landscape of Network Threat Testing,Stephen Moskal,http://arxiv.org/pdf/2310.06936v1.pdf,2023-10-10,"['cs.cr', 'cs.lg']",2310.06936v1.pdf,"  In this paper, we explore the potential of Large Language Models (LLMs) to
reason about threats, generate information about tools, and automate cyber
campaigns. We begin with a manual exploration of LLMs in supporting specific
threat-related actions and decisions. We proceed by automating the decision
process in a cyber campaign. We present prompt engineering approaches for a
plan-act-report loop for one action of a threat campaign and a prompt
chaining design that directs the sequential decision process of a multi-action
campaign. We assess the extent of the LLM's cyber-specific knowledge with respect to the
short campaign we demonstrate and provide insights into prompt design for
eliciting actionable responses. We discuss the potential impact of LLMs on the
threat landscape and the ethical considerations of using LLMs for accelerating
threat actor capabilities. We report a promising, yet concerning, application
of generative AI to cyber threats. However, the LLM's capabilities to deal with
more complex networks, sophisticated vulnerabilities, and the sensitivity of
prompts are open questions. This research should spur deliberations over the
inevitable advancements in the LLM-supported cyber adversarial landscape.
"
Beyond Factuality: A Comprehensive Evaluation of Large Language Models  as Knowledge Generators,Liang Chen,http://arxiv.org/pdf/2310.07289v1.pdf,2023-10-11,['cs.cl'],2310.07289v1.pdf,"  Large language models (LLMs) outperform information retrieval techniques for
downstream knowledge-intensive tasks when being prompted to generate world
knowledge. However, community concerns abound regarding the factuality and
potential implications of using this uncensored knowledge. In light of this, we
introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to
systematically and automatically evaluate generated knowledge from six
important perspectives -- Factuality, Relevance, Coherence, Informativeness,
Helpfulness and Validity. We conduct an extensive empirical analysis of the
generated knowledge from three different types of LLMs on two widely studied
knowledge-intensive tasks, i.e., open-domain question answering and
knowledge-grounded dialogue. Surprisingly, our study reveals that the
factuality of generated knowledge, even if lower, does not significantly hinder
downstream tasks. Instead, the relevance and coherence of the outputs are more
important than small factual mistakes. Further, we show how to use CONNER to
improve knowledge-intensive tasks by designing two strategies: Prompt
Engineering and Knowledge Selection. Our evaluation code and LLM-generated
knowledge with human annotations will be released to facilitate future
research.
"
Multimodal Large Language Model for Visual Navigation,Yao-Hung Hubert Tsai,http://arxiv.org/pdf/2310.08669v2.pdf,2023-10-12,"['cs.cv', 'cs.ro']",2310.08669v2.pdf,"  Recent efforts to enable visual navigation using large language models have
mainly focused on developing complex prompt systems. These systems incorporate
instructions, observations, and history into massive text prompts, which are
then combined with pre-trained large language models to facilitate visual
navigation. In contrast, our approach aims to fine-tune large language models
for visual navigation without extensive prompt engineering. Our design involves
a simple text prompt, current observations, and a history collector model that
gathers information from previous observations as input. For output, our design
provides a probability distribution of possible actions that the agent can take
during navigation. We train our model using human demonstrations and collision
signals from the Habitat-Matterport 3D Dataset (HM3D). Experimental results
demonstrate that our method outperforms state-of-the-art behavior cloning
methods and effectively reduces collision rates.
"
GPTutor: an open-source AI pair programming tool alternative to Copilot,Eason Chen,http://arxiv.org/pdf/2310.13896v3.pdf,2023-10-21,['cs.hc'],2310.13896v3.pdf,"  This paper presents the latest progress of GPTutor: a ChatGPT-powered
programming tool extension in Visual Studio Code. The emergence of Large
Language Models (LLMs) has improved software development efficiency, but their
performance can be hindered by training data limitations and prompt design
issues. Existing LLM development tools often operate as black boxes, with users
unable to view the prompts used and unable to improve performance by correcting
prompts when errors occur. To address the aforementioned issues, GPTutor was
introduced as an open-source AI pair programming tool, offering an alternative
to Copilot. GPTutor empowers users to customize prompts for various programming
languages and scenarios, with support for 120+ human languages and 50+
programming languages. Users can fine-tune prompts to correct errors from the
LLM for precise and efficient code generation. At the end of the paper, we
underscore GPTutor's potential through examples, including demonstrating its
proficiency in interpreting and generating Sui-Move, a newly introduced smart
contract language, using prompt engineering.
"
Open-Ended Instructable Embodied Agents with Memory-Augmented Large  Language Models,Gabriel Sarch,http://arxiv.org/pdf/2310.15127v1.pdf,2023-10-23,"['cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",2310.15127v1.pdf,"  Pre-trained and frozen LLMs can effectively map simple scene re-arrangement
instructions to programs over a robot's visuomotor functions through
appropriate few-shot example prompting. However, fixed prompts fall short when
parsing open-domain natural language and adapting to a user's idiosyncratic
procedures that are not known at prompt-engineering time. In this paper, we
introduce HELPER,
an embodied agent equipped with an external memory of language-program pairs
that parses free-form human-robot dialogue into action programs through
retrieval-augmented LLM prompting: relevant memories are retrieved based on the
current dialogue, instruction, correction or VLM description, and used as
in-context prompt examples for LLM querying. The memory is expanded during
deployment to include pairs of user's language and action plans, to assist
future inferences and personalize them to the user's language and routines.
HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution
from Dialog History (EDH) and Trajectory from Dialogue (TfD), with 1.7x
improvement over the previous SOTA for TfD. Our models, code and video results
can be found in our project's website: https://helper-agent-llm.github.io.
"
TaskDiff: A Similarity Metric for Task-Oriented Conversations,Ankita Bhaumik,http://arxiv.org/pdf/2310.15298v2.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.15298v2.pdf,"  The popularity of conversational digital assistants has resulted in the
availability of large amounts of conversational data which can be utilized for
improved user experience and personalized response generation. Building these
assistants using popular large language models like ChatGPT also requires
additional emphasis on prompt engineering and evaluation methods. Textual
similarity metrics are a key ingredient for such analysis and evaluations.
While many similarity metrics have been proposed in the literature, they have
not proven effective for task-oriented conversations as they do not take
advantage of unique conversational features. To address this gap, we present
TaskDiff, a novel conversational similarity metric that utilizes different
dialogue components (utterances, intents, and slots) and their distributions to
compute similarity. Extensive experimental evaluation of TaskDiff on a
benchmark dataset demonstrates its superior performance and improved robustness
over other related approaches.
"
Large language models for aspect-based sentiment analysis,Paul F. Simmering,http://arxiv.org/pdf/2310.18025v1.pdf,2023-10-27,"['cs.cl', 'cs.ai']",2310.18025v1.pdf,"  Large language models (LLMs) offer unprecedented text completion
capabilities. As general models, they can fulfill a wide range of roles,
including those of more specialized models. We assess the performance of GPT-4
and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based
sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art
F1 score of 83.8 on the joint aspect term extraction and polarity
classification task of the SemEval-2014 Task 4, improving upon InstructABSA
[@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000
times more model parameters and thus increased inference cost. We discuss the
cost-performance trade-offs of different models and analyze the typical
errors that they make. Our results also indicate that detailed prompts improve
performance in zero-shot and few-shot settings but are not necessary for
fine-tuned models. This evidence is relevant for practitioners who are faced
with the choice of prompt engineering versus fine-tuning when using LLMs for
ABSA.
"
Can Large Language Models Capture Public Opinion about Global Warming?  An Empirical Assessment of Algorithmic Fidelity and Bias,S. Lee,http://arxiv.org/pdf/2311.00217v1.pdf,2023-11-01,"['cs.ai', 'cs.cy']",2311.00217v1.pdf,"  Large language models (LLMs) have demonstrated their potential in social
science research by emulating human perceptions and behaviors, a concept
referred to as algorithmic fidelity. This study assesses the algorithmic
fidelity and bias of LLMs by utilizing two nationally representative climate
change surveys. The LLMs were conditioned on demographics and/or psychological
covariates to simulate survey responses. The findings indicate that LLMs can
effectively capture presidential voting behaviors but encounter challenges in
accurately representing global warming perspectives when relevant covariates
are not included. GPT-4 exhibits improved performance when conditioned on both
demographics and covariates. However, disparities emerge in LLM estimations of
the views of certain groups, with LLMs tending to underestimate worry about
global warming among Black Americans. While highlighting the potential of LLMs
to aid social science research, these results underscore the importance of
meticulous conditioning, model selection, survey question format, and bias
assessment when employing LLMs for survey simulation. Further investigation
into prompt engineering and algorithm auditing is essential to harness the
power of LLMs while addressing their inherent limitations.
"
Noisy Exemplars Make Large Language Models More Robust: A  Domain-Agnostic Behavioral Analysis,Hongyi Zheng,http://arxiv.org/pdf/2311.00258v1.pdf,2023-11-01,"['cs.cl', 'cs.lg']",2311.00258v1.pdf,"  Recent advances in prompt engineering enable large language models (LLMs) to
solve multi-hop logical reasoning problems with impressive accuracy. However,
there is little existing work investigating the robustness of LLMs with
few-shot prompting techniques. Therefore, we introduce a systematic approach to
test the robustness of LLMs in multi-hop reasoning tasks via domain-agnostic
perturbations. We include perturbations at multiple levels of abstraction
(e.g. lexical perturbations such as typos, and semantic perturbations such as
the inclusion of intermediate reasoning steps in the questions) to conduct
behavioral analysis on the LLMs. Throughout our experiments, we find that
models are more sensitive to certain perturbations such as replacing words with
their synonyms. We also demonstrate that increasing the proportion of perturbed
exemplars in the prompts improves the robustness of few-shot prompting methods.
"
Instruction Distillation Makes Large Language Models Efficient Zero-shot  Rankers,Weiwei Sun,http://arxiv.org/pdf/2311.01555v1.pdf,2023-11-02,"['cs.ir', 'cs.cl']",2311.01555v1.pdf,"  Recent studies have demonstrated the great potential of Large Language Models
(LLMs) serving as zero-shot relevance rankers. The typical approach involves
making comparisons between pairs or lists of documents. Although effective,
these listwise and pairwise methods are not efficient and also heavily rely on
intricate prompt engineering. To tackle this problem, we introduce a novel
instruction distillation method. The key idea is to distill the pairwise
ranking ability of open-sourced LLMs to a simpler but more efficient pointwise
ranking. Specifically, given the same LLM, we first rank documents using the
effective pairwise approach with complex instructions, and then distill the
teacher predictions to the pointwise approach with simpler instructions.
Evaluation results on the BEIR, TREC, and ReDial datasets demonstrate that
instruction distillation can improve efficiency by 10 to 100x and also enhance
the ranking performance of LLMs. Furthermore, our approach surpasses the
performance of existing supervised methods like monoT5 and is on par with the
state-of-the-art zero-shot methods. The code to reproduce our results is
available at www.github.com/sunnweiwei/RankGPT.
"
Indicative Summarization of Long Discussions,Shahbaz Syed,http://arxiv.org/pdf/2311.01882v1.pdf,2023-11-03,['cs.cl'],2311.01882v1.pdf,"  Online forums encourage the exchange and discussion of different stances on
many topics. Not only do they provide an opportunity to present one's own
arguments, but may also gather a broad cross-section of others' arguments.
However, the resulting long discussions are difficult to overview. This paper
presents a novel unsupervised approach using large language models (LLMs) to
generate indicative summaries for long discussions that basically serve as
tables of contents. Our approach first clusters argument sentences, generates
cluster labels as abstractive summaries, and classifies the generated cluster
labels into argumentation frames resulting in a two-level summary. Based on an
extensively optimized prompt engineering approach, we evaluate 19 LLMs for
generative cluster labeling and frame classification. To evaluate the
usefulness of our indicative summaries, we conduct a purpose-driven user study
via a new visual interface called Discussion Explorer: It shows that our
proposed indicative summaries serve as a convenient navigation tool to explore
long discussions.
"
Automating Governing Knowledge Commons and Contextual Integrity (GKC-CI)  Privacy Policy Annotations with Large Language Models,Jake Chanenson,http://arxiv.org/pdf/2311.02192v1.pdf,2023-11-03,"['cs.cy', 'cs.cl', 'cs.lg']",2311.02192v1.pdf,"  Identifying contextual integrity (CI) and governing knowledge commons (GKC)
parameters in privacy policy texts can facilitate normative privacy analysis.
However, GKC-CI annotation has heretofore required manual or crowdsourced
effort. This paper demonstrates that high-accuracy GKC-CI parameter annotation
of privacy policies can be performed automatically using large language models.
We fine-tune 18 open-source and proprietary models on 21,588 GKC-CI annotations
from 16 ground truth privacy policies. Our best-performing model (fine-tuned
GPT-3.5 Turbo with prompt engineering) has an accuracy of 86%, exceeding the
performance of prior crowdsourcing approaches despite the complexity of privacy
policy texts and the nuance of the GKC-CI annotation task. We apply our
best-performing model to privacy policies from 164 popular online services,
demonstrating the effectiveness of scaling GKC-CI annotation for data
exploration. We make all annotated policies as well as the training data and
scripts needed to fine-tune our best-performing model publicly available for
future research.
"
Requirements Engineering using Generative AI: Prompts and Prompting  Patterns,Krishna Ronanki,http://arxiv.org/pdf/2311.03832v1.pdf,2023-11-07,['cs.se'],2311.03832v1.pdf,"  [Context]: Companies are increasingly recognizing the importance of
automating Requirements Engineering (RE) tasks due to their resource-intensive
nature. The advent of GenAI has made these tasks more amenable to automation,
thanks to its ability to understand and interpret context effectively.
[Problem]: However, in the context of GenAI, prompt engineering is a critical
factor for success. Despite this, we currently lack tools and methods to
systematically assess and determine the most effective prompt patterns to
employ for a particular RE task. [Method]: Two tasks related to requirements,
specifically requirement classification and tracing, were automated using the
GPT-3.5 turbo API. The performance evaluation involved assessing various
prompts created using 5 prompt patterns and implemented programmatically to
perform the selected RE tasks, focusing on metrics such as precision, recall,
accuracy, and F-Score. [Results]: This paper evaluates the ability of the 5
prompt patterns to make GPT-3.5 turbo perform the selected RE tasks
and offers recommendations on which prompt pattern to use for a specific RE
task. Additionally, it also provides an evaluation framework as a reference for
researchers and practitioners who want to evaluate different prompt patterns
for different RE tasks.
"
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot  Learners,Ningyu Zhang,http://arxiv.org/pdf/2108.13161v7.pdf,2021-08-30,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.ir', 'cs.lg']",2108.13161v7.pdf,"  Large-scale pre-trained language models have contributed significantly to
natural language processing by demonstrating remarkable abilities as few-shot
learners. However, their effectiveness depends mainly on scaling the model
parameters and prompt design, hindering their implementation in most real-world
applications. This study proposes a novel pluggable, extensible, and efficient
approach named DifferentiAble pRompT (DART), which can convert small language
models into better few-shot learners without any prompt engineering. The main
principle behind this approach involves reformulating potential natural
language processing tasks into the task of a pre-trained language model and
differentially optimizing the prompt template as well as the target label with
backpropagation. Furthermore, the proposed approach can be: (i) plugged into
any pre-trained language model; (ii) extended to widespread classification tasks.
A comprehensive evaluation of standard NLP tasks demonstrates that the proposed
approach achieves a better few-shot performance. Code is available in
https://github.com/zjunlp/DART.
"
ActionCLIP: A New Paradigm for Video Action Recognition,Mengmeng Wang,http://arxiv.org/pdf/2109.08472v1.pdf,2021-09-17,['cs.cv'],2109.08472v1.pdf,"  The canonical approach to video action recognition dictates a neural model to
do a classic and standard 1-of-N majority vote task. They are trained to
predict a fixed set of predefined categories, limiting their transferable
ability on new datasets with unseen concepts. In this paper, we provide a new
perspective on action recognition by attaching importance to the semantic
information of label texts rather than simply mapping them into numbers.
Specifically, we model this task as a video-text matching problem within a
multimodal learning framework, which strengthens the video representation with
more semantic language supervision and enables our model to do zero-shot action
recognition without any further labeled data or parameter requirements.
Moreover, to handle the deficiency of label texts and make use of tremendous
web data, we propose a new paradigm based on this multimodal learning framework
for action recognition, which we dub ""pre-train, prompt and fine-tune"". This
paradigm first learns powerful representations from pre-training on a large
amount of web image-text or video-text data. Then it makes the action
recognition task act more like a pre-training problem via prompt engineering.
Finally, it end-to-end fine-tunes on target datasets to obtain strong
performance. We give an instantiation of the new paradigm, ActionCLIP, which
not only has superior and flexible zero-shot/few-shot transfer ability but also
reaches a top performance on general action recognition task, achieving 83.8%
top-1 accuracy on Kinetics-400 with a ViT-B/16 as the backbone. Code is
available at https://github.com/sallymmx/ActionCLIP.git
"
CLIP-Adapter: Better Vision-Language Models with Feature Adapters,Peng Gao,http://arxiv.org/pdf/2110.04544v1.pdf,2021-10-09,"['cs.cv', 'cs.cl']",2110.04544v1.pdf,"  Large-scale contrastive vision-language pre-training has shown significant
progress in visual representation learning. Unlike traditional visual systems
trained by a fixed set of discrete labels, a new paradigm was introduced in
CLIP (Radford et al., 2021) to directly learn to align images with raw texts in
an open-vocabulary setting. On downstream tasks, a carefully chosen text prompt
is employed to make zero-shot predictions. To avoid non-trivial prompt
engineering, context optimization (CoOp; Zhou et al., 2021) has been proposed to
learn continuous vectors as task-specific prompts with few-shot training
examples. In this paper, we show that there is an alternative path to achieve
better vision-language models other than prompt tuning. While prompt tuning is
for the textual inputs, we propose CLIP-Adapter to conduct fine-tuning with
feature adapters on either visual or language branch. Specifically,
CLIP-Adapter adopts an additional bottleneck layer to learn new features and
performs residual-style feature blending with the original pre-trained
features. As a consequence, CLIP-Adapter is able to outperform context
optimization while maintaining a simple design. Experiments and extensive
ablation studies on various visual classification tasks demonstrate the
effectiveness of our approach.
"
Symbolic Knowledge Distillation: from General Language Models to  Commonsense Models,Peter West,http://arxiv.org/pdf/2110.07178v2.pdf,2021-10-14,['cs.cl'],2110.07178v2.pdf,"  The common practice for training commonsense models has gone
from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in
order to train commonsense models. In this work, we investigate an alternative,
from-machine-to-corpus-to-machine: general language models author these
commonsense knowledge graphs to train commonsense models. Our study leads to a
new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge
Distillation (Hinton et al., 2015), our approach uses larger models to teach
smaller models. A key difference is that we distill knowledge symbolically, as
text, in addition to the neural model. We also distill only one aspect, the
commonsense of a general language model teacher, allowing the student to be a
different type, a commonsense model. Altogether, we show that careful prompt
engineering and a separately trained critic model allow us to selectively
distill high-quality causal commonsense from GPT-3, a general language model.
Empirical results demonstrate that, for the first time, a human-authored
commonsense knowledge graph is surpassed by our automatically distilled variant
in all three criteria: quantity, quality, and diversity. In addition, it
results in a neural commonsense model that surpasses the teacher model's
commonsense capabilities despite its 100x smaller size. We apply this to the
ATOMIC resource, and share our new symbolic knowledge graph and commonsense
models.
"
Red Teaming Language Models with Language Models,Ethan Perez,http://arxiv.org/pdf/2202.03286v1.pdf,2022-02-07,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2202.03286v1.pdf,"  Language Models (LMs) often cannot be deployed because of their potential to
harm users in hard-to-predict ways. Prior work identifies harmful behaviors
before deployment by using human annotators to hand-write test cases. However,
human annotation is expensive, limiting the number and diversity of test cases.
In this work, we automatically find cases where a target LM behaves in a
harmful way, by generating test cases (""red teaming"") using another LM. We
evaluate the target LM's replies to generated test questions using a classifier
trained to detect offensive content, uncovering tens of thousands of offensive
replies in a 280B parameter LM chatbot. We explore several methods, from
zero-shot generation to reinforcement learning, for generating test cases with
varying levels of diversity and difficulty. Furthermore, we use prompt
engineering to control LM-generated test cases to uncover a variety of other
harms, automatically finding groups of people that the chatbot discusses in
offensive ways, personal and hospital phone numbers generated as the chatbot's
own contact info, leakage of private training data in generated text, and harms
that occur over the course of a conversation. Overall, LM-based red teaming is
one promising tool (among many needed) for finding and fixing diverse,
undesirable LM behaviors before impacting users.
"
Learning to Prompt for Open-Vocabulary Object Detection with  Vision-Language Model,Yu Du,http://arxiv.org/pdf/2203.14940v1.pdf,2022-03-28,['cs.cv'],2203.14940v1.pdf,"  Recently, vision-language pre-training shows great potential in
open-vocabulary object detection, where detectors trained on base classes are
devised for detecting new classes. The class text embedding is first generated
by feeding prompts to the text encoder of a pre-trained
vision-language model. It is then used as the region classifier to supervise
the training of a detector. The key element that leads to the success of this
model is the proper prompt, which requires careful words tuning and ingenious
design. To avoid laborious prompt engineering, some prompt representation
learning methods have been proposed for the image classification task; however,
they yield only sub-optimal solutions when applied to the detection task. In
this paper, we introduce a novel method, detection prompt
(DetPro), to learn continuous prompt representations for open-vocabulary object
detection based on the pre-trained vision-language model. Different from the
previous classification-oriented methods, DetPro has two highlights: 1) a
background interpretation scheme to include the proposals in image background
into the prompt training; 2) a context grading scheme to separate proposals in
image foreground for tailored prompt training. We assemble DetPro with ViLD, a
recent state-of-the-art open-world object detector, and conduct experiments on
the LVIS as well as transfer learning on the Pascal VOC, COCO, Objects365
datasets. Experimental results show that our DetPro outperforms the baseline
ViLD in all settings, e.g., +3.4 APbox and +3.0 APmask improvements on the
novel classes of LVIS. Code and models are available at
https://github.com/dyabel/detpro.
"
No Token Left Behind: Explainability-Aided Image Classification and  Generation,Roni Paiss,http://arxiv.org/pdf/2204.04908v2.pdf,2022-04-11,['cs.cv'],2204.04908v2.pdf,"  The application of zero-shot learning in computer vision has been
revolutionized by the use of image-text matching models. The most notable
example, CLIP, has been widely used for both zero-shot classification and
guiding generative models with a text prompt. However, the zero-shot use of
CLIP is unstable with respect to the phrasing of the input text, making it
necessary to carefully engineer the prompts used. We find that this instability
stems from a selective similarity score, which is based only on a subset of the
semantically meaningful input tokens. To mitigate it, we present a novel
explainability-based approach, which adds a loss term to ensure that CLIP
focuses on all relevant semantic parts of the input, in addition to employing
the CLIP similarity loss used in previous works. When applied to one-shot
classification through prompt engineering, our method yields an improvement in
the recognition rate, without additional training or fine-tuning. Additionally,
we show that CLIP guidance of generative models using our method significantly
improves the generated images. Finally, we demonstrate a novel use of CLIP
guidance for text-based image generation with spatial conditioning on object
location, by requiring the image explainability heatmap for each object to be
confined to a pre-determined bounding box.
"
On Measuring Social Biases in Prompt-Based Multi-Task Learning,Afra Feyza AkyĂĽrek,http://arxiv.org/pdf/2205.11605v1.pdf,2022-05-23,"['cs.cl', 'cs.cy']",2205.11605v1.pdf,"  Large language models trained on a mixture of NLP tasks that are converted
into a text-to-text format using prompts can generalize to novel forms of
language and handle novel tasks. A large body of work within prompt engineering
attempts to understand the effects of input forms and prompts in achieving
superior performance. We consider an alternative measure and inquire whether
the way in which an input is encoded affects social biases promoted in outputs.
In this paper, we study T0, a large-scale multi-task text-to-text language
model trained using prompt-based learning. We consider two different forms of
semantically equivalent inputs: question-answer format and premise-hypothesis
format. We use an existing bias benchmark, BBQ, for the former and create the
first bias benchmark in natural language inference, BBNLI, with hand-written
hypotheses while also converting each benchmark into the other form. The
results on two benchmarks suggest that given two different formulations of
essentially the same input, T0 conspicuously acts more biased in question
answering form, which is seen during training, compared to premise-hypothesis
form which is unlike its training examples. Code and data are released under
https://github.com/feyzaakyurek/bbnli.
"
OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal  Regression,Wanhua Li,http://arxiv.org/pdf/2206.02338v2.pdf,2022-06-06,['cs.cv'],2206.02338v2.pdf,"  This paper presents a language-powered paradigm for ordinal regression.
Existing methods usually treat each rank as a category and employ a set of
weights to learn these concepts. These methods are easy to overfit and usually
attain unsatisfactory performance as the learned concepts are mainly derived
from the training set. Recent large pre-trained vision-language models like
CLIP have shown impressive performance on various visual tasks. In this paper,
we propose to learn the rank concepts from the rich semantic CLIP latent space.
Specifically, we reformulate this task as an image-language matching problem
with a contrastive objective, which regards labels as text and obtains a
language prototype from a text encoder for each rank. Since prompt engineering
for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable
prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists
of learnable context tokens and learnable rank embeddings; The learnable rank
embeddings are constructed by explicitly modeling numerical continuity,
resulting in well-ordered, compact language prototypes in the CLIP space. Once
learned, we can save only the language prototypes and discard the huge language
model, resulting in zero additional computational overhead compared with the
linear head counterpart. Experimental results show that our paradigm achieves
competitive performance in general ordinal regression tasks, and gains
improvements in few-shot and distribution shift settings for age estimation.
The code is available at https://github.com/xk-huang/OrdinalCLIP.
"
P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with  Point-to-Pixel Prompting,Ziyi Wang,http://arxiv.org/pdf/2208.02812v2.pdf,2022-08-04,"['cs.cv', 'cs.ai', 'cs.lg']",2208.02812v2.pdf,"  Nowadays, pre-training big models on large-scale datasets has become a
crucial topic in deep learning. Pre-trained models with high representation
ability and transferability achieve great success and dominate many downstream
tasks in natural language processing and 2D vision. However, it is non-trivial
to promote such a pretraining-tuning paradigm to 3D vision,
given the limited training data that are relatively inconvenient to collect. In
this paper, we provide a new perspective of leveraging pre-trained 2D knowledge
in 3D domain to tackle this problem, tuning pre-trained image models with the
novel Point-to-Pixel prompting for point cloud analysis at a minor parameter
cost. Following the principle of prompt engineering, we transform point
clouds into colorful images with geometry-preserved projection and
geometry-aware coloring to adapt to pre-trained image models, whose weights are
kept frozen during the end-to-end optimization of point cloud analysis tasks.
We conduct extensive experiments to demonstrate that, in cooperation with our
proposed Point-to-Pixel Prompting, a better pre-trained image model leads to
consistently better performance in 3D vision. Benefiting from the prosperous
development of the image pre-training field, our method attains 89.3% accuracy
on the hardest
setting of ScanObjectNN, surpassing conventional point cloud models with much
fewer trainable parameters. Our framework also exhibits very competitive
performance on ModelNet classification and ShapeNet Part Segmentation. Code is
available at https://github.com/wangzy22/P2P.
"
Unsupervised Hashing with Semantic Concept Mining,Rong-Cheng Tu,http://arxiv.org/pdf/2209.11475v1.pdf,2022-09-23,"['cs.cv', 'cs.ir']",2209.11475v1.pdf,"  Recently, to improve the unsupervised image retrieval performance, plenty of
unsupervised hashing methods have been proposed by designing a semantic
similarity matrix, which is based on the similarities between image features
extracted by a pre-trained CNN model. However, most of these methods tend to
ignore high-level abstract semantic concepts contained in images. Intuitively,
concepts play an important role in calculating the similarity among images. In
real-world scenarios, each image is associated with some concepts, and the
similarity between two images will be larger if they share more identical
concepts. Inspired by the above intuition, in this work, we propose a novel
Unsupervised Hashing with Semantic Concept Mining, called UHSCM, which
leverages a VLP model to construct a high-quality similarity matrix.
Specifically, a set of randomly chosen concepts is first collected. Then, by
employing a vision-language pre-training (VLP) model with prompt engineering,
which has shown strong power in visual representation learning, the set of
concepts is denoised according to the training images. Next, the proposed
method UHSCM applies the VLP model with prompting again to mine the concept
distribution of each image and construct a high-quality semantic similarity
matrix based on the mined concept distributions. Finally, with the semantic
similarity matrix as guiding information, a novel hashing loss with a modified
contrastive-loss-based regularization term is proposed to optimize the hashing
network. Extensive experiments on three benchmark datasets show that the
proposed method outperforms the state-of-the-art baselines in the image
retrieval task.
"
Robust Preference Learning for Storytelling via Contrastive  Reinforcement Learning,Louis Castricato,http://arxiv.org/pdf/2210.07792v2.pdf,2022-10-14,['cs.cl'],2210.07792v2.pdf,"  Controlled automated story generation seeks to generate natural language
stories satisfying constraints from natural language critiques or preferences.
Existing methods to control for story preference utilize prompt engineering,
which is labor-intensive and often inconsistent. They may also use
logit-manipulation methods which require annotated datasets to exist for the
desired attributes. To address these issues, we first train a contrastive
bi-encoder model to align stories with corresponding human critiques, named
CARP, building a general purpose preference model. This is subsequently used as
a reward function to fine-tune a generative language model via reinforcement
learning. However, simply fine-tuning a generative language model with a
contrastive reward model does not always reliably result in a story generation
system capable of generating stories that meet user preferences. To increase
story generation robustness we further fine-tune the contrastive reward model
using a prompt-learning technique. A human participant study is then conducted
comparing generations from our full system, ablations, and two baselines. We
show that the full fine-tuning pipeline results in a story generator preferred
over an LLM 20x as large, as well as over logit-based methods. This motivates the use
of contrastive learning for general purpose human preference modeling.
"
Towards Equitable Representation in Text-to-Image Synthesis Models with  the Cross-Cultural Understanding Benchmark (CCUB) Dataset,Zhixuan Liu,http://arxiv.org/pdf/2301.12073v2.pdf,2023-01-28,['cs.cv'],2301.12073v2.pdf,"  It has been shown that accurate representation in media improves the
well-being of the people who consume it. By contrast, inaccurate
representations can negatively affect viewers and lead to harmful perceptions
of other cultures. To achieve inclusive representation in generated images, we
propose a culturally-aware priming approach for text-to-image synthesis using a
small but culturally curated dataset that we collected, known here as
Cross-Cultural Understanding Benchmark (CCUB) Dataset, to fight the bias
prevalent in giant datasets. Our proposed approach comprises two fine-tuning
techniques: (1) Adding visual context via fine-tuning a pre-trained
text-to-image synthesis model, Stable Diffusion, on the CCUB text-image pairs,
and (2) Adding semantic context via automated prompt engineering using the
fine-tuned large language model, GPT-3, trained on our CCUB culturally-aware
text data. The CCUB dataset is curated, and our approach is evaluated, by people
who have a personal relationship with that particular culture. Our experiments
indicate that priming using both text and image is effective in improving the
cultural relevance and decreasing the offensiveness of generated images while
maintaining quality.
"
Trash to Treasure: Using text-to-image models to inform the design of  physical artefacts,Amy Smith,http://arxiv.org/pdf/2302.00561v1.pdf,2023-02-01,['cs.ai'],2302.00561v1.pdf,"  Text-to-image generative models have recently exploded in popularity and
accessibility. Yet so far, use of these models in creative tasks that bridge
the 2D digital world and the creation of physical artefacts has been
understudied. We conduct a pilot study to investigate if and how text-to-image
models can be used to assist in upstream tasks within the creative process,
such as ideation and visualization, prior to a sculpture-making activity.
Thirty participants selected sculpture-making materials and generated three
images using the Stable Diffusion text-to-image generator, each with text
prompts of their choice, with the aim of informing and then creating a physical
sculpture. The majority of participants (23/30) reported that the generated
images informed their sculptures, and 28/30 reported interest in using
text-to-image models to help them in a creative task in the future. We identify
several prompt engineering strategies and find that a participant's prompting
strategy relates to their stage in the creative process. We discuss how our
findings can inform support for users at different stages of the design process
and for using text-to-image models for physical artefact design.
"
"Chat2VIS: Generating Data Visualisations via Natural Language using  ChatGPT, Codex and GPT-3 Large Language Models",Paula Maddigan,http://arxiv.org/pdf/2302.02094v2.pdf,2023-02-04,['cs.hc'],2302.02094v2.pdf,"  The field of data visualisation has long aimed to devise solutions for
generating visualisations directly from natural language text. Research in
Natural Language Interfaces (NLIs) has contributed towards the development of
such techniques. However, the implementation of workable NLIs has always been
challenging due to the inherent ambiguity of natural language, as well as
unclear and poorly written user queries, which pose problems for
existing language models in discerning user intent. Instead of pursuing the
usual path of developing new iterations of language models, this study uniquely
proposes leveraging the advancements in pre-trained large language models
(LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly
into code for appropriate visualisations. This paper presents a novel system,
Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates
how, with effective prompt engineering, the complex problem of language
understanding can be solved more efficiently, resulting in simpler and more
accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs
together with the proposed prompts offer a reliable approach to rendering
visualisations from natural language queries, even when queries are highly
misspecified and underspecified. This solution also presents a significant
reduction in costs for the development of NLI systems, while attaining greater
visualisation inference abilities compared to traditional NLP approaches that
use hand-crafted grammar rules and tailored models. This study also presents
how LLM prompts can be constructed in a way that preserves data security and
privacy while being generalisable to different datasets. This work compares the
performance of GPT-3, Codex and ChatGPT across a number of case studies and
contrasts the performances with prior studies.
"
CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets,Zachary Novack,http://arxiv.org/pdf/2302.02551v3.pdf,2023-02-06,"['cs.cv', 'cs.lg']",2302.02551v3.pdf,"  Open vocabulary models (e.g. CLIP) have shown strong performance on zero-shot
classification through their ability to generate embeddings for each class based
on their (natural language) names. Prior work has focused on improving the
accuracy of these models through prompt engineering or by incorporating a small
amount of labeled downstream data (via finetuning). However, there has been
little focus on improving the richness of the class names themselves, which can
pose issues when class labels are coarsely defined and uninformative. We
propose Classification with Hierarchical Label Sets (or CHiLS), an alternative
strategy for zero-shot classification specifically designed for datasets with
implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each
class, produce a set of subclasses, using either existing label hierarchies or
by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though
these subclasses were the labels of interest; (iii) map the predicted subclass
back to its parent to produce the final prediction. Across numerous datasets
with underlying hierarchical structure, CHiLS leads to improved accuracy in
situations both with and without ground-truth hierarchical information. CHiLS
is simple to implement within existing zero-shot pipelines and requires no
additional training cost. Code is available at:
https://github.com/acmi-lab/CHILS.
"
"A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on  Reasoning, Hallucination, and Interactivity",Yejin Bang,http://arxiv.org/pdf/2302.04023v2.pdf,2023-02-08,"['cs.cl', 'cs.ai']",2302.04023v2.pdf,"  This paper proposes a framework for quantitatively evaluating interactive
LLMs such as ChatGPT using publicly available data sets. We carry out an
extensive technical evaluation of ChatGPT using 23 data sets covering 8
different common NLP application tasks. We evaluate the multitask, multilingual
and multi-modal aspects of ChatGPT based on these data sets and a newly
designed multimodal dataset. We find that ChatGPT outperforms LLMs with
zero-shot learning on most tasks and even outperforms fine-tuned models on some
tasks. We find that it is better at understanding non-Latin script languages
than generating them. It is able to generate multimodal content from textual
prompts, via an intermediate code generation step. Moreover, we find that
ChatGPT is 63.41% accurate on average in 10 different reasoning categories
under logical reasoning, non-textual reasoning, and commonsense reasoning,
hence making it an unreliable reasoner. It is, for example, better at deductive
than inductive reasoning. ChatGPT suffers from hallucination problems like
other LLMs and it generates more extrinsic hallucinations from its parametric
memory as it does not have access to an external knowledge base. Finally, the
interactive feature of ChatGPT enables human collaboration with the underlying
LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++
on machine translation, in a multi-turn ""prompt engineering"" fashion. We also
release codebase for evaluation set extraction.
"
Prompt Stealing Attacks Against Text-to-Image Generation Models,Xinyue Shen,http://arxiv.org/pdf/2302.09923v1.pdf,2023-02-20,"['cs.cr', 'cs.lg']",2302.09923v1.pdf,"  Text-to-Image generation models have revolutionized the artwork design
process and enabled anyone to create high-quality images by entering text
descriptions called prompts. Creating a high-quality prompt that consists of a
subject and several modifiers can be time-consuming and costly. In consequence,
a trend of trading high-quality prompts on specialized marketplaces has
emerged. In this paper, we propose a novel attack, namely prompt stealing
attack, which aims to steal prompts from generated images by text-to-image
generation models. Successful prompt stealing attacks directly violate the
intellectual property and privacy of prompt engineers and also jeopardize the
business model of prompt trading marketplaces. We first perform a large-scale
analysis on a dataset collected by ourselves and show that a successful prompt
stealing attack should consider a prompt's subject as well as its modifiers. We
then propose the first learning-based prompt stealing attack, PromptStealer,
and demonstrate its superiority over two baseline methods quantitatively and
qualitatively. We also make some initial attempts to defend PromptStealer. In
general, our study uncovers a new attack surface in the ecosystem created by
the popular text-to-image generation models. We hope our results can help to
mitigate the threat. To facilitate research in this field, we will share our
dataset and code with the community.
"
Controlled and Conditional Text to Image Generation with Diffusion Prior,Pranav Aggarwal,http://arxiv.org/pdf/2302.11710v2.pdf,2023-02-23,['cs.cv'],2302.11710v2.pdf,"  Denoising Diffusion models have shown remarkable performance in generating
diverse, high quality images from text. Numerous techniques have been proposed
on top of or in alignment with models like Stable Diffusion and Imagen that
generate images directly from text. A lesser-explored approach is DALLE-2's
two-step process comprising a Diffusion Prior that generates a CLIP image embedding
from text and a Diffusion Decoder that generates an image from a CLIP image
embedding. We explore the capabilities of the Diffusion Prior and the
advantages of an intermediate CLIP representation. We observe that Diffusion
Prior can be used in a memory and compute efficient way to constrain the
generation to a specific domain without altering the larger Diffusion Decoder.
Moreover, we show that the Diffusion Prior can be trained with additional
conditional information such as color histogram to further control the
generation. We show quantitatively and qualitatively that the proposed
approaches perform better than prompt engineering for domain specific
generation and existing baselines for color conditioned generation. We believe
that our observations and results will instigate further research into the
diffusion prior and uncover more of its capabilities.
"
EvoPrompting: Language Models for Code-Level Neural Architecture Search,Angelica Chen,http://arxiv.org/pdf/2302.14838v2.pdf,2023-02-28,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.lg']",2302.14838v2.pdf,"  Given the recent impressive accomplishments of language models (LMs) for code
generation, we explore the use of LMs as adaptive mutation and crossover
operators for an evolutionary neural architecture search (NAS) algorithm. While
NAS still proves too difficult a task for LMs to succeed at solely through
prompting, we find that the combination of evolutionary prompt engineering with
soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse
and high performing models. We first demonstrate that EvoPrompting is effective
on the computationally efficient MNIST-1D dataset, where EvoPrompting produces
convolutional architecture variants that outperform both those designed by
human experts and naive few-shot prompting in terms of accuracy and model size.
We then apply our method to searching for graph neural networks on the CLRS
Algorithmic Reasoning Benchmark, where EvoPrompting is able to design novel
architectures that outperform current state-of-the-art models on 21 out of 30
algorithmic reasoning tasks while maintaining similar model size. EvoPrompting
is successful at designing accurate and efficient neural network architectures
across a variety of machine learning tasks, while also being general enough for
easy adaptation to other tasks beyond neural network design.
"
Extracting Accurate Materials Data from Research Papers with  Conversational Language Models and Prompt Engineering,Maciej P. Polak,http://arxiv.org/pdf/2303.05352v2.pdf,2023-03-07,"['cs.cl', 'cond-mat.mtrl-sci']",2303.05352v2.pdf,"  There has been a growing effort to replace hand extraction of data from
research papers with automated data extraction based on natural language
processing, language models, and recently, large language models (LLMs).
Although these methods enable efficient extraction of data from large sets of
research papers, they require a significant amount of up-front effort,
expertise, and coding. In this work we propose the ChatExtract method that can
fully automate very accurate data extraction with minimal initial effort and
background, using an advanced conversational LLM. ChatExtract consists of a set
of engineered prompts applied to a conversational LLM that identify sentences
with data, extract that data, and assure the data's correctness
through a series of follow-up questions. These follow-up questions largely
overcome known issues with LLMs providing factually inaccurate responses.
ChatExtract can be applied with any conversational LLM and yields very
high-quality data extraction. In tests on materials data, we find precision and
recall both close to 90% from the best conversational LLMs, like ChatGPT-4. We
demonstrate that the exceptional performance is enabled by the information
retention in a conversational model combined with purposeful redundancy and
introducing uncertainty through follow-up prompts. These results suggest that
approaches similar to ChatExtract, due to their simplicity, transferability,
and accuracy, are likely to become powerful tools for data extraction in the
near future. Finally, databases for critical cooling rates of metallic glasses
and yield strengths of high entropy alloys are developed using ChatExtract.
"
On Codex Prompt Engineering for OCL Generation: An Empirical Study,Seif Abukhalaf,http://arxiv.org/pdf/2303.16244v1.pdf,2023-03-28,"['cs.se', 'cs.ai']",2303.16244v1.pdf,"  The Object Constraint Language (OCL) is a declarative language that adds
constraints and object query expressions to MOF models. Despite its potential
to provide precision and conciseness to UML models, the unfamiliar syntax of
OCL has hindered its adoption. Recent advancements in LLMs, such as GPT-3, have
shown their capability in many NLP tasks, including semantic parsing and text
generation. Codex, a GPT-3 descendant, has been fine-tuned on publicly
available code from GitHub and can generate code in many programming languages.
We investigate the reliability of OCL constraints generated by Codex from
natural language specifications. To achieve this, we compiled a dataset of 15
UML models and 168 specifications and crafted a prompt template with slots to
populate with UML information and the target task, using both zero- and
few-shot learning methods. By measuring the syntactic validity and execution
accuracy metrics of the generated OCL constraints, we found that enriching the
prompts with UML information and enabling few-shot learning increases the
reliability of the generated OCL constraints. Furthermore, the results reveal a
close similarity based on sentence embedding between the generated OCL
constraints and the human-written ones in the ground truth, implying a level of
clarity and understandability in the generated OCL constraints by Codex.
"
Ten Quick Tips for Harnessing the Power of ChatGPT/GPT-4 in  Computational Biology,Tiago Lubiana,http://arxiv.org/pdf/2303.16429v1.pdf,2023-03-29,"['q-bio.ot', '92-04']",2303.16429v1.pdf,"  The rise of advanced chatbots, such as ChatGPT, has sparked curiosity in the
scientific community. ChatGPT is a general-purpose chatbot powered by large
language models (LLMs) GPT-3.5 and GPT-4, with the potential to impact numerous
fields, including computational biology. In this article, we offer ten tips
based on our experience with ChatGPT to assist computational biologists in
optimizing their workflows. We have collected relevant prompts and reviewed the
nascent literature in the field, compiling tips we project to remain pertinent
for future ChatGPT and LLM iterations, ranging from code refactoring to
scientific writing to prompt engineering. We hope our work will help
bioinformaticians to complement their workflows while staying aware of the
various implications of using this technology. Additionally, to track new and
creative applications for bioinformatics tools such as ChatGPT, we have
established a GitHub repository at
https://github.com/csbl-br/awesome-compbio-chatgpt. We believe that the ethical
use of ChatGPT and other LLMs will increase the efficiency of
computational biologists, ultimately advancing the pace of scientific discovery
in the life sciences.
"
Humans in Humans Out: On GPT Converging Toward Common Sense in both  Success and Failure,Philipp Koralus,http://arxiv.org/pdf/2303.17276v1.pdf,2023-03-30,"['cs.ai', 'cs.cl', 'cs.hc', 'cs.lg', '00, 68', 'i.2.0; i.2.6']",2303.17276v1.pdf,"  Increase in computational scale and fine-tuning has seen a dramatic
improvement in the quality of outputs of large language models (LLMs) like GPT.
Given that both GPT-3 and GPT-4 were trained on large quantities of
human-generated text, we might ask to what extent their outputs reflect
patterns of human thinking, both for correct and incorrect cases. The Erotetic
Theory of Reason (ETR) provides a symbolic generative model of both human
success and failure in thinking, across propositional, quantified, and
probabilistic reasoning, as well as decision-making. We presented GPT-3,
GPT-3.5, and GPT-4 with 61 central inference and judgment problems from a
recent book-length presentation of ETR, consisting of experimentally verified
data-points on human judgment and extrapolated data-points predicted by ETR,
with correct inference patterns as well as fallacies and framing effects (the
ETR61 benchmark). ETR61 includes classics like Wason's card task, illusory
inferences, the decoy effect, and opportunity-cost neglect, among others. GPT-3
showed evidence of ETR-predicted outputs for 59% of these examples, rising to
77% in GPT-3.5 and 75% in GPT-4. Remarkably, the production of human-like
fallacious judgments increased from 18% in GPT-3 to 33% in GPT-3.5 and 34% in
GPT-4. This suggests that larger and more advanced LLMs may develop a tendency
toward more human-like mistakes, as relevant thought patterns are inherent in
human-produced training data. According to ETR, the same fundamental patterns
are involved both in successful and unsuccessful ordinary reasoning, so that
the ""bad"" cases could paradoxically be learned from the ""good"" cases. We
further present preliminary evidence that ETR-inspired prompt engineering could
reduce instances of these mistakes.
"
Pair Programming with Large Language Models for Sampling and Estimation  of Copulas,Jan GĂłrecki,http://arxiv.org/pdf/2303.18116v1.pdf,2023-03-31,"['cs.cl', 'stat.co', '65c60, 68n19, 68t50']",2303.18116v1.pdf,"  Without writing a single line of code by a human, an example Monte Carlo
simulation based application for stochastic dependence modeling with copulas is
developed using a state-of-the-art large language model (LLM) fine-tuned for
conversations. This includes interaction with ChatGPT in natural language and
using mathematical formalism, which, under careful supervision by a
human-expert, led to producing working code in MATLAB, Python and R for
sampling from a given copula model, evaluation of the model's density,
performing maximum likelihood estimation, optimizing the code for parallel
computing for CPUs as well as for GPUs, and visualization of the computed
results. In contrast to other emerging studies that assess the accuracy of LLMs
like ChatGPT on tasks from a selected area, this work instead investigates how
to achieve a successful solution of a standard statistical task in a
collaboration of a human-expert and artificial intelligence (AI). Particularly,
through careful prompt engineering, we separate successful solutions generated
by ChatGPT from unsuccessful ones, resulting in a comprehensive list of related
pros and cons. It is demonstrated that if the typical pitfalls are avoided, we
can substantially benefit from collaborating with an AI partner. For example,
we show that if ChatGPT is not able to provide a correct solution due to a lack
of or incorrect knowledge, the human-expert can feed it with the correct
knowledge, e.g., in the form of mathematical theorems and formulas, and have it
apply the gained knowledge to provide a correct solution.
Such ability presents an attractive opportunity to achieve a programmed
solution even for users with rather limited knowledge of programming
techniques.
"
"Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its  Applications, Advantages, Limitations, and Future Directions in Natural  Language Processing",Walid Hariri,http://arxiv.org/pdf/2304.02017v5.pdf,2023-03-27,['cs.cl'],2304.02017v5.pdf,"  Large language models have revolutionized the field of artificial
intelligence and have been used in various applications. Among these models,
ChatGPT (Chat Generative Pre-trained Transformer), developed by OpenAI,
stands out as a powerful tool that has been widely adopted. ChatGPT has been
successfully applied in numerous areas, including chatbots, content generation,
language translation, personalized recommendations, and even medical diagnosis
and treatment. Its success in these applications can be attributed to its
ability to generate human-like responses, understand natural language, and
adapt to different contexts. Its versatility and accuracy make it a powerful
tool for natural language processing (NLP). However, there are also limitations
to ChatGPT, such as its tendency to produce biased responses and its potential
to perpetuate harmful language patterns. This article provides a comprehensive
overview of ChatGPT, its applications, advantages, and limitations.
Additionally, the paper emphasizes the importance of ethical considerations
when using this robust tool in real-world scenarios. Finally, this paper
contributes to ongoing discussions surrounding artificial intelligence and its
impact on vision and NLP domains by providing insights into prompt engineering
techniques.
"
TagGPT: Large Language Models are Zero-shot Multimodal Taggers,Chen Li,http://arxiv.org/pdf/2304.03022v1.pdf,2023-04-06,['cs.ir'],2304.03022v1.pdf,"  Tags are pivotal in facilitating the effective distribution of multimedia
content in various applications in the contemporary Internet era, such as
search engines and recommendation systems. Recently, large language models
(LLMs) have demonstrated impressive capabilities across a wide range of tasks.
In this work, we propose TagGPT, a fully automated system capable of tag
extraction and multimodal tagging in a completely zero-shot fashion. Our core
insight is that, through elaborate prompt engineering, LLMs are able to extract
and reason about proper tags given textual clues of multimodal data, e.g., OCR,
ASR, title, etc. Specifically, to automatically build a high-quality tag set
that reflects user intent and interests for a specific application, TagGPT
predicts large-scale candidate tags from a series of raw data via prompting
LLMs, filtered with frequency and semantics. Given a new entity that needs
tagging for distribution, TagGPT introduces two alternative options for
zero-shot tagging, i.e., a generative method with late semantic matching with
the tag set, and a selective method with early matching in prompts. Notably,
TagGPT provides a system-level solution based on a modular framework equipped
with a pre-trained LLM (GPT-3.5 used here) and a sentence embedding model
(SimCSE used here), both of which can be seamlessly replaced with more advanced
models. TagGPT is applicable to various modalities of data
in modern social media and showcases strong generalization ability to a wide
range of applications. We evaluate TagGPT on publicly available datasets, i.e.,
Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to
existing hashtags and off-the-shelf taggers. Project page:
https://github.com/TencentARC/TagGPT.
"
Towards Interpretable Mental Health Analysis with Large Language Models,Kailai Yang,http://arxiv.org/pdf/2304.03347v4.pdf,2023-04-06,['cs.cl'],2304.03347v4.pdf,"  The latest large language models (LLMs) such as ChatGPT, exhibit strong
capabilities in automated mental health analysis. However, existing relevant
studies bear several limitations, including inadequate evaluations, a lack of
prompting strategies, and limited exploration of LLMs for explainability. To
bridge these gaps, we comprehensively evaluate the mental health analysis and
emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore
the effects of different prompting strategies with unsupervised and distantly
supervised emotional information. Based on these prompts, we explore LLMs for
interpretable mental health analysis by instructing them to generate
explanations for each of their decisions. We conduct strict human evaluations to
assess the quality of the generated explanations, leading to a novel dataset
with 163 human-assessed explanations. We benchmark existing automatic
evaluation metrics on this dataset to guide future related works. According to
the results, ChatGPT shows strong in-context learning ability but still has a
significant gap with advanced task-specific methods. Careful prompt engineering
with emotional cues and expert-written few-shot examples can also effectively
improve performance on mental health analysis. In addition, ChatGPT generates
explanations that approach human performance, showing its great potential in
explainable mental health analysis.
"
Low-code LLM: Visual Programming over LLMs,Yuzhe Cai,http://arxiv.org/pdf/2304.08103v2.pdf,2023-04-17,"['cs.cl', 'cs.hc']",2304.08103v2.pdf,"  Effectively utilizing LLMs for complex tasks is challenging, often involving
a time-consuming and uncontrollable prompt engineering process. This paper
introduces a novel human-LLM interaction framework, Low-code LLM. It
incorporates six types of simple low-code visual programming interactions, all
supported by clicking, dragging, or text editing, to achieve more controllable
and stable responses. Through visual interaction with a graphical user
interface, users can incorporate their ideas into the workflow without writing
trivial prompts. The proposed Low-code LLM framework consists of a Planning LLM
that designs a structured planning workflow for complex tasks, which can be
correspondingly edited and confirmed by users through low-code visual
programming operations, and an Executing LLM that generates responses following
the user-confirmed workflow. We highlight three advantages of the low-code LLM:
controllable generation results, user-friendly human-LLM interaction, and
broadly applicable scenarios. We demonstrate its benefits using four typical
applications. By introducing this approach, we aim to bridge the gap between
humans and LLMs, enabling more effective and efficient utilization of LLMs for
complex tasks. Our system will soon be publicly available at LowCodeLLM.
"
Inducing anxiety in large language models increases exploration and bias,Julian Coda-Forno,http://arxiv.org/pdf/2304.11111v1.pdf,2023-04-21,"['cs.cl', 'cs.ai', 'cs.lg']",2304.11111v1.pdf,"  Large language models are transforming research on machine learning while
galvanizing public debates. Understanding not only when these models work well
and succeed but also why they fail and misbehave is of great societal
relevance. We propose to turn the lens of computational psychiatry, a framework
used to computationally describe and modify aberrant behavior, to the outputs
produced by these models. We focus on the Generative Pre-Trained Transformer
3.5 and subject it to tasks commonly studied in psychiatry. Our results show
that GPT-3.5 responds robustly to a common anxiety questionnaire, producing
higher anxiety scores than human subjects. Moreover, GPT-3.5's responses can be
predictably changed by using emotion-inducing prompts. Emotion-induction not
only influences GPT-3.5's behavior in a cognitive task measuring exploratory
decision-making but also influences its behavior in a previously-established
task measuring biases such as racism and ableism. Crucially, GPT-3.5 shows a
strong increase in biases when prompted with anxiety-inducing text. Thus, it is
likely that how prompts are communicated to large language models has a strong
influence on their behavior in applied settings. These results progress our
understanding of prompt engineering and demonstrate the usefulness of methods
taken from computational psychiatry for studying the capable algorithms to
which we increasingly delegate authority and autonomy.
"
Is ChatGPT the Ultimate Programming Assistant -- How far is it?,Haoye Tian,http://arxiv.org/pdf/2304.11938v2.pdf,2023-04-24,"['cs.se', 'cs.ai']",2304.11938v2.pdf,"  Recently, the ChatGPT LLM has received great attention: it can be used as a
bot for discussing source code, prompting it to suggest changes, provide
descriptions or even generate code. Typical demonstrations generally focus on
existing benchmarks, which may have been used in model training (i.e., data
leakage). To assess the feasibility of using an LLM as a useful assistant bot
for programmers, we must assess its realistic capabilities on unseen problems
as well as its capabilities on various tasks. In this paper, we present an
empirical study of ChatGPT's potential as a fully automated programming
assistant, focusing on the tasks of code generation, program repair, and code
summarization. The study investigates ChatGPT's performance on common
programming problems and compares it with state-of-the-art approaches on two
benchmarks. Among several findings, our study shows that ChatGPT is effective
in dealing with common programming problems. However, our experiments also
reveal limitations in terms of its attention span: detailed descriptions will
constrain the focus of ChatGPT and prevent it from leveraging its vast
knowledge to solve the actual problem. Surprisingly, we have identified the
ability of ChatGPT to reason about the original intention of the code. We expect
future work to build on this insight for dealing with the open question of the
oracle problem. Our findings contribute interesting insights to the development
of LLMs for programming assistance, notably by demonstrating the importance of
prompt engineering, and providing a better understanding of ChatGPT's practical
applications for software engineering.
"
Framing the News: From Human Perception to Large Language Model  Inferences,David Alonso del Barrio,http://arxiv.org/pdf/2304.14456v1.pdf,2023-04-27,"['cs.cl', 'cs.hc']",2304.14456v1.pdf,"  Identifying the frames of news is important to understand the articles'
vision, intention, message to be conveyed, and which aspects of the news are
emphasized. Framing is a widely studied concept in journalism, and has emerged
as a new topic in computing, with the potential to automate processes and
facilitate the work of journalism professionals. In this paper, we study this
issue with articles related to the Covid-19 anti-vaccine movement. First, to
understand the perspectives used to treat this theme, we developed a protocol
for human labeling of frames for 1786 headlines of No-Vax movement articles of
European newspapers from 5 countries. Headlines are key units in the written
press, and worthy of analysis, as many people only read headlines (or use them
to guide their decision on whether to read further). Second, considering advances in
Natural Language Processing (NLP) with large language models, we investigated
two approaches for frame inference of news headlines: first with a GPT-3.5
fine-tuning approach, and second with GPT-3.5 prompt-engineering. Our work
contributes to the study and analysis of the performance that these models have
to facilitate journalistic tasks like classification of frames, while
understanding whether the models are able to replicate human perception in the
identification of these frames.
"
"ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal,  Causal, and Discourse Relations",Chunkit Chan,http://arxiv.org/pdf/2304.14827v2.pdf,2023-04-28,['cs.cl'],2304.14827v2.pdf,"  This paper aims to quantitatively evaluate the performance of ChatGPT, an
interactive large language model, on inter-sentential relations such as
temporal relations, causal relations, and discourse relations. Given ChatGPT's
promising performance across various tasks, we conduct extensive evaluations on
the whole test sets of 13 datasets, including temporal and causal relations,
PDTB2.0-based and dialogue-based discourse relations, and downstream
applications on discourse understanding. To achieve reliable results, we adopt
three tailored prompt templates for each task, including the zero-shot prompt
template, zero-shot prompt engineering (PE) template, and in-context learning
(ICL) prompt template, to establish the initial baseline scores for all popular
sentence-pair relation classification tasks for the first time. We find that
ChatGPT exhibits strong performance in detecting and reasoning about causal
relations, while it may not be proficient in identifying the temporal order
between two events. It can recognize most discourse relations with existing
explicit discourse connectives, but implicit discourse relations still
remain a challenging task. Meanwhile, ChatGPT performs poorly in the dialogue
discourse parsing task that requires structural understanding in a dialogue
before being aware of the discourse relation.
"
Large Language Models Can Be Used To Effectively Scale Spear Phishing  Campaigns,Julian Hazell,http://arxiv.org/pdf/2305.06972v2.pdf,2023-05-11,"['cs.cy', 'cs.ai', 'cs.cr']",2305.06972v2.pdf,"  Recent progress in artificial intelligence (AI), particularly in the domain
of large language models (LLMs), has resulted in powerful and versatile
dual-use systems. Indeed, cognition can be put towards a wide variety of tasks,
some of which can result in harm. This study investigates how LLMs can be used
for spear phishing, a form of cybercrime that involves manipulating targets
into divulging sensitive information. I first explore LLMs' ability to assist
with the reconnaissance and message generation stages of a successful spear
phishing attack, where I find that advanced LLMs are capable of improving
cybercriminals' efficiency during these stages. To explore how LLMs can be used
to scale spear phishing campaigns, I then create unique spear phishing messages
for over 600 British Members of Parliament using OpenAI's GPT-3.5 and GPT-4
models. My findings reveal that these messages are not only realistic but also
cost-effective, with each email costing only a fraction of a cent to generate.
Next, I demonstrate how basic prompt engineering can circumvent safeguards
installed in LLMs by the reinforcement learning from human feedback fine-tuning
process, highlighting the need for more robust governance interventions aimed
at preventing misuse. To address these evolving risks, I propose two potential
solutions: structured access schemes, such as application programming
interfaces, and LLM-based defensive systems.
"
Text2Cohort: Democratizing the NCI Imaging Data Commons with Natural  Language Cohort Discovery,Pranav Kulkarni,http://arxiv.org/pdf/2305.07637v2.pdf,2023-05-12,"['cs.lg', 'cs.cl', 'cs.hc', 'cs.ir']",2305.07637v2.pdf,"  The Imaging Data Commons (IDC) is a cloud-based database that provides
researchers with open access to cancer imaging data, with the goal of
facilitating collaboration in medical imaging research. However, querying the
IDC database for cohort discovery and access to imaging data has a significant
learning curve for researchers due to its complex nature. We developed
Text2Cohort, a large language model (LLM) based toolkit to facilitate
user-friendly and intuitive natural language cohort discovery in the IDC.
Text2Cohort translates user input into IDC database queries using prompt
engineering and autocorrection and returns the query's response to the user.
Autocorrection resolves errors in queries by passing the errors back to the
model for interpretation and correction. We evaluate Text2Cohort on 50 natural
language user inputs ranging from information extraction to cohort discovery.
The resulting queries and outputs were verified by two computer scientists to
measure Text2Cohort's accuracy and F1 score. Text2Cohort successfully generated
queries and their responses with an 88% accuracy and F1 score of 0.94. However,
it failed to generate queries for 6/50 (12%) user inputs due to syntax and
semantic errors. Our results indicate that Text2Cohort succeeded at generating
queries with correct responses, but occasionally failed due to a lack of
understanding of the data schema. Despite these shortcomings, Text2Cohort
demonstrates the utility of LLMs to enable researchers to discover and curate
cohorts using data hosted on IDC with high levels of accuracy using natural
language in a more intuitive and user-friendly way.
"
Sensitivity and Robustness of Large Language Models to Prompt Template  in Japanese Text Classification Tasks,Chengguang Gan,http://arxiv.org/pdf/2305.08714v2.pdf,2023-05-15,"['cs.cl', 'cs.ai']",2305.08714v2.pdf,"  Prompt engineering relevance research has seen a notable surge in recent
years, primarily driven by advancements in pre-trained language models and
large language models. However, a critical issue has been identified within
this domain: the inadequate sensitivity and robustness of these models
to prompt templates, particularly in lesser-studied languages such as
Japanese. This paper explores this issue through a comprehensive evaluation of
several representative Large Language Models (LLMs) and a widely-utilized
pre-trained model (PLM). These models are scrutinized using a benchmark dataset
in Japanese, with the aim to assess and analyze the performance of the current
multilingual models in this context. Our experimental results reveal startling
discrepancies. A simple modification in the sentence structure of the Prompt
Template led to a drastic drop in the accuracy of GPT-4 from 49.21 to 25.44.
This observation underscores the fact that even the high-performing GPT-4
model encounters significant stability issues when dealing with diverse
Japanese prompt templates, rendering the consistency of the model's output
results questionable. In light of these findings, we conclude by proposing
potential research trajectories to further enhance the development and
performance of Large Language Models in their current stage.
"
Knowledge Graph Completion Models are Few-shot Learners: An Empirical  Study of Relation Labeling in E-commerce with LLMs,Jiao Chen,http://arxiv.org/pdf/2305.09858v1.pdf,2023-05-17,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg']",2305.09858v1.pdf,"  Knowledge Graphs (KGs) play a crucial role in enhancing e-commerce system
performance by providing structured information about entities and their
relationships, such as complementary or substitutable relations between
products or product types, which can be utilized in recommender systems.
However, relation labeling in KGs remains a challenging task due to the dynamic
nature of e-commerce domains and the associated cost of human labor. Recently,
breakthroughs in Large Language Models (LLMs) have shown surprising results in
numerous natural language processing tasks. In this paper, we conduct an
empirical study of LLMs for relation labeling in e-commerce KGs, investigating
their powerful learning capabilities in natural language and effectiveness in
predicting relations between product types with limited labeled data. We
evaluate various LLMs, including PaLM and GPT-3.5, on benchmark datasets,
demonstrating their ability to achieve competitive performance compared to
humans on relation labeling tasks using just 1 to 5 labeled examples per
relation. Additionally, we experiment with different prompt engineering
techniques to examine their impact on model performance. Our results show that
LLMs significantly outperform existing KG completion models in relation
labeling for e-commerce KGs and exhibit performance strong enough to replace
human labeling.
"
VisorGPT: Learning Visual Prior via Generative Pre-Training,Jinheng Xie,http://arxiv.org/pdf/2305.13777v4.pdf,2023-05-23,['cs.cv'],2305.13777v4.pdf,"  Various stuff and things in visual data possess specific traits, which can be
learned by deep neural networks and are implicitly represented as the visual
prior, e.g., object location and shape, in the model. Such prior potentially
impacts many vision tasks. For example, in conditional image synthesis, spatial
conditions failing to adhere to the prior can result in visually inaccurate
synthetic results. This work aims to explicitly learn the visual prior and
enable the customization of sampling. Inspired by advances in language
modeling, we propose to learn Visual prior via Generative Pre-Training, dubbed
VisorGPT. By discretizing visual locations of objects, e.g., bounding boxes,
human pose, and instance masks, into sequences, VisorGPT can model visual prior
through likelihood maximization. Besides, prompt engineering is investigated to
unify various visual locations and enable customized sampling of sequential
outputs from the learned prior. Experimental results demonstrate that VisorGPT
can effectively model the visual prior, which can be employed for many vision
tasks, such as customizing accurate human pose for conditional image synthesis
models like ControlNet. Code will be released at
https://github.com/Sierkinhane/VisorGPT.
"
Game of Tones: Faculty detection of GPT-4 generated content in  university assessments,Mike Perkins,http://arxiv.org/pdf/2305.18081v1.pdf,2023-05-29,"['cs.cy', 'cs.ai', 'k.4']",2305.18081v1.pdf,"  This study explores the robustness of university assessments against the use
of Open AI's Generative Pre-Trained Transformer 4 (GPT-4) generated content and
evaluates the ability of academic staff to detect its use when supported by the
Turnitin Artificial Intelligence (AI) detection tool. The research involved
twenty-two GPT-4 generated submissions being created and included in the
assessment process to be marked by fifteen different faculty members. The study
reveals that although the detection tool identified 91% of the experimental
submissions as containing some AI-generated content, the total detected content
was only 54.8%. This suggests that adversarial prompt-engineering techniques
are an effective method of evading AI detection tools and
highlights that improvements to AI detection software are needed. Using the
Turnitin AI detection tool, faculty reported 54.5% of the experimental submissions
to the academic misconduct process, suggesting the need for increased awareness
of and training in these tools. Genuine submissions received a mean score of
54.4, whereas AI-generated content scored 52.3, indicating the comparable
performance of GPT-4 in real-life situations. Recommendations include adjusting
assessment strategies to make them more resistant to the use of AI tools, using
AI-inclusive assessment where possible, and providing comprehensive training
programs for faculty and students. This research contributes to understanding
the relationship between AI-generated content and academic assessment, urging
further investigation to preserve academic integrity.
"
Responsible Task Automation: Empowering Large Language Models as  Responsible Task Automators,Zhizheng Zhang,http://arxiv.org/pdf/2306.01242v1.pdf,2023-06-02,"['cs.ai', 'cs.cl']",2306.01242v1.pdf,"  The recent success of Large Language Models (LLMs) signifies an impressive
stride towards artificial general intelligence. They have shown a promising
prospect in automatically completing tasks upon user instructions, functioning
as brain-like coordinators. The associated risks will be revealed as we
delegate an increasing number of tasks to machines for automated completion. A
big question emerges: how can we make machines behave responsibly when helping
humans automate tasks as personal copilots? In this paper, we explore this
question in depth from the perspectives of feasibility, completeness and
security. In specific, we present Responsible Task Automation (ResponsibleTA)
as a fundamental framework to facilitate responsible collaboration between
LLM-based coordinators and executors for task automation with three empowered
capabilities: 1) predicting the feasibility of the commands for executors; 2)
verifying the completeness of executors; 3) enhancing the security (e.g., the
protection of users' privacy). We further propose and compare two paradigms for
implementing the first two capabilities. One is to leverage the generic
knowledge of LLMs themselves via prompt engineering while the other is to adopt
domain-specific learnable models. Moreover, we introduce a local memory
mechanism for achieving the third capability. We evaluate our proposed
ResponsibleTA on UI task automation and hope it will draw more attention to
making LLMs more responsible in diverse scenarios. The research project
homepage is at
https://task-automation-research.github.io/responsible_task_automation.
"
A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets  Prompt Engineering,Chaoning Zhang,http://arxiv.org/pdf/2306.06211v3.pdf,2023-05-12,['cs.cv'],2306.06211v3.pdf,"  Segment anything model (SAM) developed by Meta AI Research has recently
attracted significant attention. Trained on a large segmentation dataset of
over 1 billion masks, SAM is capable of segmenting any object on a certain
image. In the original SAM work, the authors turned to zero-shot transfer
tasks (like edge detection) for evaluating the performance of SAM. Recently,
numerous works have attempted to investigate the performance of SAM in various
scenarios to recognize and segment objects. Moreover, numerous projects have
emerged to show the versatility of SAM as a foundation model by combining it
with other models, like Grounding DINO, Stable Diffusion, ChatGPT, etc. With
the relevant papers and projects increasing exponentially, it is challenging
for the readers to catch up with the development of SAM. To this end, this work
conducts the first yet comprehensive survey on SAM. This is an ongoing project
and we intend to update the manuscript on a regular basis. Therefore, readers
are welcome to contact us if they complete new works related to SAM so that we
can include them in our next version.
"
The economic trade-offs of large language models: A case study,Kristen Howell,http://arxiv.org/pdf/2306.07402v1.pdf,2023-06-08,"['cs.cl', 'cs.ai']",2306.07402v1.pdf,"  Contacting customer service via chat is a common practice. Because employing
customer service agents is expensive, many companies are turning to NLP that
assists human agents by auto-generating responses that can be used directly or
with modifications. Large Language Models (LLMs) are a natural fit for this use
case; however, their efficacy must be balanced with the cost of training and
serving them. This paper assesses the practical cost and impact of LLMs for the
enterprise as a function of the usefulness of the responses that they generate.
We present a cost framework for evaluating an NLP model's utility for this use
case and apply it to a single brand as a case study in the context of an
existing agent assistance product. We compare three strategies for specializing
an LLM - prompt engineering, fine-tuning, and knowledge distillation - using
feedback from the brand's customer service agents. We find that the usability
of a model's responses can make up for a large difference in inference cost for
our case study brand, and we extrapolate our findings to the broader enterprise
space.
"
TART: A plug-and-play Transformer module for task-agnostic reasoning,Kush Bhatia,http://arxiv.org/pdf/2306.07536v1.pdf,2023-06-13,"['cs.lg', 'cs.ai', 'cs.cl']",2306.07536v1.pdf,"  Large language models (LLMs) exhibit in-context learning abilities which
enable the same model to perform several tasks without any task-specific
training. In contrast, traditional adaptation approaches, such as fine-tuning,
modify the underlying models for each specific task. In-context learning,
however, consistently underperforms task-specific tuning approaches even when
presented with the same examples. While most existing approaches (e.g., prompt
engineering) focus on the LLM's learned representations to patch this
performance gap, our analysis actually reveals that LLM representations contain
sufficient information to make good predictions. As such, we focus on the LLM's
reasoning abilities and demonstrate that this performance gap exists due to
their inability to perform simple probabilistic reasoning tasks. This raises an
intriguing question: Are LLMs actually capable of learning how to reason in a
task-agnostic manner? We answer this in the affirmative and propose TART which
generically improves an LLM's reasoning abilities using a synthetically trained
Transformer-based reasoning module. TART trains this reasoning module in a
task-agnostic manner using only synthetic logistic regression tasks and
composes it with an arbitrary real-world pre-trained model without any
additional training. With a single inference module, TART improves performance
across different model families (GPT-Neo, Pythia, BLOOM), model sizes (100M -
6B), tasks (14 NLP binary classification tasks), and even across different
modalities (audio and vision). Additionally, on the RAFT Benchmark, TART
improves GPT-Neo (125M)'s performance such that it outperforms BLOOM (176B),
and is within 4% of GPT-3 (175B). Our code and models are available at
https://github.com/HazyResearch/TART .
"
Exploring the Effectiveness of Dataset Synthesis: An application of  Apple Detection in Orchards,Alexander van Meekeren,http://arxiv.org/pdf/2306.11763v1.pdf,2023-06-20,['cs.cv'],2306.11763v1.pdf,"  Deep object detection models have achieved notable successes in recent years,
but one major obstacle remains: the requirement for a large amount of training
data. Obtaining such data is a tedious process and is mainly time consuming,
leading to the exploration of new research avenues like synthetic data
generation techniques. In this study, we explore the usability of Stable
Diffusion 2.1-base for generating synthetic datasets of apple trees for object
detection and compare it to a baseline model trained on real-world data. After
creating a dataset of realistic apple trees with prompt engineering and
utilizing a previously trained Stable Diffusion model, the custom dataset was
annotated and evaluated by training a YOLOv5m object detection model to predict
apples in a real-world apple detection dataset. YOLOv5m was chosen for its
rapid inference time and minimal hardware demands. Results demonstrate that the
model trained on generated data is slightly underperforming compared to a
baseline model trained on real-world images when evaluated on a set of
real-world images. However, these findings remain highly promising, as the
average precision difference is only 0.09 and 0.06, respectively. Qualitative
results indicate that the model can accurately predict the location of apples,
except in cases of heavy shading. These findings illustrate the potential of
synthetic data generation techniques as a viable alternative to the collection
of extensive training data for object detection models.
"
Do you still need a manual smart contract audit?,Isaac David,http://arxiv.org/pdf/2306.12338v2.pdf,2023-06-21,['cs.cr'],2306.12338v2.pdf,"  We investigate the feasibility of employing large language models (LLMs) for
conducting the security audit of smart contracts, a traditionally
time-consuming and costly process. Our research focuses on the optimization of
prompt engineering for enhanced security analysis, and we evaluate the
performance and accuracy of LLMs using a benchmark dataset comprising 52
Decentralized Finance (DeFi) smart contracts that have previously been
compromised.
  Our findings reveal that, when applied to vulnerable contracts, both GPT-4
and Claude models correctly identify the vulnerability type in 40% of the
cases. However, these models also demonstrate a high false positive rate,
necessitating continued involvement from manual auditors. The LLMs tested
outperform a random model by 20% in terms of F1-score.
  To ensure the integrity of our study, we conduct mutation testing on five
newly developed and ostensibly secure smart contracts, into which we manually
insert two and 15 vulnerabilities each. This testing yielded a remarkable
best-case 78.7% true positive rate for the GPT-4-32k model. We tested both a
binary-classification prompt, asking the models whether a contract is
vulnerable, and a non-binary prompt. We also examined the influence of model
temperature variations and context length on the LLM's performance.
  Despite the potential for many further enhancements, this work lays the
groundwork for a more efficient and economical approach to smart contract
security audits.
"
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language  Models,Chaoyou Fu,http://arxiv.org/pdf/2306.13394v2.pdf,2023-06-23,['cs.cv'],2306.13394v2.pdf,"  Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform
multimodal tasks, showing amazing emergent abilities in recent studies, such as
writing poems based on an image. However, it is difficult for these case
studies to fully reflect the performance of MLLM, lacking a comprehensive
evaluation. In this paper, we fill in this blank, presenting the first MLLM
Evaluation benchmark MME. It measures both perception and cognition abilities
on a total of 14 subtasks. In order to avoid data leakage that may arise from
direct use of public datasets for evaluation, the annotations of
instruction-answer pairs are all manually designed. The concise instruction
design allows us to fairly compare MLLMs, instead of struggling in prompt
engineering. Besides, with such an instruction, we can also easily carry out
quantitative statistics. A total of 12 advanced MLLMs are comprehensively
evaluated on our MME, which not only suggests that existing MLLMs still have a
large room for improvement, but also reveals the potential directions for the
subsequent model optimization.
"
Zero-shot Nuclei Detection via Visual-Language Pre-trained Models,Yongjian Wu,http://arxiv.org/pdf/2306.17659v1.pdf,2023-06-30,['cs.cv'],2306.17659v1.pdf,"  Large-scale visual-language pre-trained models (VLPM) have proven their
excellent performance in downstream object detection for natural scenes.
However, zero-shot nuclei detection on H&E images via VLPMs remains
underexplored. The large gap between medical images and the web-originated
text-image pairs used for pre-training makes it a challenging task. In this
paper, we attempt to explore the potential of the object-level VLPM, Grounded
Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection.
Concretely, an automatic prompt design pipeline is devised based on the
association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding
empirical manual prompt engineering. We further establish a self-training
framework, using the automatically designed prompts to generate the preliminary
results as pseudo labels from GLIP and refine the predicted boxes in an
iterative manner. Our method achieves a remarkable performance for label-free
nuclei detection, surpassing other comparison methods. Foremost, our work
demonstrates that the VLPM pre-trained on natural image-text pairs exhibits
astonishing potential for downstream tasks in the medical field as well. Code
will be released at https://github.com/wuyongjianCODE/VLPMNuD.
"
Comparative Analysis of GPT-4 and Human Graders in Evaluating Praise  Given to Students in Synthetic Dialogues,Dollaya Hirunyasiri,http://arxiv.org/pdf/2307.02018v1.pdf,2023-07-05,"['cs.cl', 'cs.ai', 'cs.hc']",2307.02018v1.pdf,"  Research suggests that providing specific and timely feedback to human tutors
enhances their performance. However, it presents challenges due to the
time-consuming nature of assessing tutor performance by human evaluators. Large
language models, such as the AI-chatbot ChatGPT, hold potential for offering
constructive feedback to tutors in practical settings. Nevertheless, the
accuracy of AI-generated feedback remains uncertain, with scant research
investigating the ability of models like ChatGPT to deliver effective feedback.
In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a
tutor-student setting. We use two different prompting approaches, the zero-shot
chain of thought and the few-shot chain of thought, to identify specific
components of effective praise based on five criteria. These approaches are
then compared to the results of human graders for accuracy. Our goal is to
assess the extent to which GPT-4 can accurately identify each praise criterion.
We found that both zero-shot and few-shot chain of thought approaches yield
comparable results. GPT-4 performs moderately well in identifying instances
when the tutor offers specific and immediate praise. However, GPT-4
underperforms in identifying the tutor's ability to deliver sincere praise,
particularly in the zero-shot prompting scenario where examples of sincere
tutor praise statements were not provided. Future work will focus on enhancing
prompt engineering, developing a more general tutoring rubric, and evaluating
our method using real-life tutoring dialogues.
"
"Right to be Forgotten in the Era of Large Language Models: Implications,  Challenges, and Solutions",Dawen Zhang,http://arxiv.org/pdf/2307.03941v3.pdf,2023-07-08,"['cs.cy', 'cs.ai', 'cs.cl']",2307.03941v3.pdf,"  The Right to be Forgotten (RTBF) was first established as the result of the
ruling of Google Spain SL, Google Inc. v AEPD, Mario Costeja González, and
was later included as the Right to Erasure under the General Data Protection
Regulation (GDPR) of European Union to allow individuals the right to request
personal data be deleted by organizations. Specifically for search engines,
individuals can send requests to organizations to exclude their information
from the query results. It was a significant emergent right as the result of
the evolution of technology. With the recent development of Large Language
Models (LLMs) and their use in chatbots, LLM-enabled software systems have
become popular. But they are not excluded from the RTBF. Compared with the
indexing approach used by search engines, LLMs store and process information
in a completely different way. This poses new challenges for compliance with
the RTBF. In this paper, we explore these challenges and provide our insights
on how to implement technical solutions for the RTBF, including the use of
differential privacy, machine unlearning, model editing, and prompt
engineering. With the rapid advancement of AI and the increasing need of
regulating this powerful technology, learning from the case of RTBF can provide
valuable lessons for technical practitioners, legal experts, organizations, and
authorities.
"
"Software Testing with Large Language Model: Survey, Landscape, and  Vision",Junjie Wang,http://arxiv.org/pdf/2307.07221v1.pdf,2023-07-14,['cs.se'],2307.07221v1.pdf,"  Pre-trained large language models (LLMs) have recently emerged as a
breakthrough technology in natural language processing and artificial
intelligence, with the ability to handle large-scale datasets and exhibit
remarkable performance across a wide range of tasks. Meanwhile, software
testing is a crucial undertaking that serves as a cornerstone for ensuring the
quality and reliability of software products. As the scope and complexity of
software systems continue to grow, the need for more effective software testing
techniques becomes increasingly urgent, making it an area ripe for
innovative approaches such as the use of LLMs. This paper provides a
comprehensive review of the utilization of LLMs in software testing. It
analyzes 52 relevant studies that have used LLMs for software testing, from
both the software testing and LLMs perspectives. The paper presents a detailed
discussion of the software testing tasks for which LLMs are commonly used,
among which test case preparation and program repair are the most
representative ones. It also analyzes the commonly used LLMs, the types of
prompt engineering that are employed, as well as the accompanied techniques
with these LLMs. It also summarizes the key challenges and potential
opportunities in this direction. This work can serve as a roadmap for future
research in this area, highlighting potential avenues for exploration, and
identifying gaps in our current understanding of the use of LLMs in software
testing.
"
The Potential and Pitfalls of using a Large Language Model such as  ChatGPT or GPT-4 as a Clinical Assistant,Jingqing Zhang,http://arxiv.org/pdf/2307.08152v1.pdf,2023-07-16,['cs.cl'],2307.08152v1.pdf,"  Recent studies have demonstrated promising performance of ChatGPT and GPT-4
on several medical domain tasks. However, none have assessed its performance
using a large-scale real-world electronic health record database, nor
evaluated its utility in providing clinical diagnostic assistance to patients
across a full range of disease presentations. We performed two analyses using
ChatGPT and GPT-4, one to identify patients with specific medical diagnoses
using a real-world large electronic health record database and the other, in
providing diagnostic assistance to healthcare workers in the prospective
evaluation of hypothetical patients. Our results show that, across disease
classification tasks, GPT-4 with chain-of-thought and few-shot prompting can
achieve F1 scores as high as 96%. For patient assessment, GPT-4 can
accurately diagnose three out of four times. However, it also produced
factually incorrect statements, overlooked crucial medical findings, and
recommended unnecessary investigations and overtreatment. These issues
coupled with privacy concerns, make these models currently inadequate for real
world clinical use. However, the limited data and time needed for prompt
engineering, compared with configuring conventional machine learning
workflows, highlight their potential for scalability across healthcare
applications.
"
A Lightweight Framework for High-Quality Code Generation,Mohammed Latif Siddiq,http://arxiv.org/pdf/2307.08220v1.pdf,2023-07-17,"['cs.se', 'cs.lg']",2307.08220v1.pdf,"  In recent years, the use of automated source code generation utilizing
transformer-based generative models has expanded, and these models can generate
functional code according to the requirements of the developers. However,
recent research revealed that these automatically generated source codes can
contain vulnerabilities and other quality issues. Despite researchers' and
practitioners' attempts to enhance code generation models, retraining and
fine-tuning large language models is time-consuming and resource-intensive.
Thus, we describe FRANC, a lightweight framework for recommending more secure
and high-quality source code derived from transformer-based code generation
models. FRANC includes a static filter to make the generated code compilable
with heuristics and a quality-aware ranker to sort the code snippets based on a
quality score. Moreover, the framework uses prompt engineering to fix
persistent quality issues. We evaluated the framework with five Python and Java
code generation models and six prompt datasets, including a newly created one
in this work (SOEval). The static filter improves the compilability of 9% to
46% of Java suggestions and 10% to 43% of Python suggestions. The average
improvement in the NDCG@10 score for the ranking system is 0.0763, and the
repair techniques fix up to 80% of prompts. FRANC takes, on
average, 1.98 seconds for Java; for Python, it takes 0.08 seconds.
"
"Multi-Method Self-Training: Improving Code Generation With Text, And  Vice Versa",Shriyash K. Upadhyay,http://arxiv.org/pdf/2307.10633v1.pdf,2023-07-20,"['cs.cl', 'cs.lg']",2307.10633v1.pdf,"  Large Language Models have many methods for solving the same problem. This
introduces novel strengths (different methods may work well for different
problems) and weaknesses (it may be difficult for users to know which method to
use). In this paper, we introduce Multi-Method Self-Training (MMST), where one
method is trained on the filtered outputs of another, allowing us to augment
the strengths and ameliorate the weaknesses of each method. Using a 176B
parameter model trained on both language and code, we show that MMST can 1)
improve the less performant method (up to 30%) making the model easier to use,
2) improve the more performant method (up to 32.2%) making the model more
performant, and 3) improve the performance of related but distinct tasks (up to
10.3%) by improving the ability of the model to generate rationales. We then
conduct ablation analyses to explore why MMST works. We show that MMST
generates more data than traditional self-training, but the improvement in
performance is driven by the use of multiple methods. We also analyze
prompt-engineering and anti-correlated performance between methods as means of
making MMST more effective. We hope the evidence from our paper motivates
machine learning researchers to explore ways in which advances in language
models allow for new forms of training.
"
Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts,Mayug Maniparambil,http://arxiv.org/pdf/2307.11661v2.pdf,2023-07-21,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2307.11661v2.pdf,"  Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have
revolutionized visual representation learning by providing good performance on
downstream datasets. VLMs are 0-shot adapted to a downstream dataset by
designing prompts that are relevant to the dataset. Such prompt engineering
makes use of domain expertise and a validation dataset. Meanwhile, recent
developments in generative pretrained models like GPT-4 mean they can be used
as advanced internet search tools. They can also be manipulated to provide
visual information in any structure. In this work, we show that GPT-4 can be
used to generate text that is visually descriptive and how this can be used to
adapt CLIP to downstream tasks. We show considerable improvements in 0-shot
transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD
(~7%), SUN397 (~4.6%), and CUB (~3.3%) when compared to CLIP's default prompt.
We also design a simple few-shot adapter that learns to choose the best
possible sentences to construct generalizable classifiers that outperform the
recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized
fine-grained datasets. The code, prompts, and auxiliary text dataset are
available at https://github.com/mayug/VDT-Adapter.
"
GPT-3 Models are Few-Shot Financial Reasoners,Raul Salles de Padua,http://arxiv.org/pdf/2307.13617v2.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13617v2.pdf,"  Financial analysis is an important tool for evaluating company performance.
Practitioners work to answer financial questions to make profitable investment
decisions, and use advanced quantitative analyses to do so. As a result,
Financial Question Answering (QA) is a question answering task that requires
deep reasoning about numbers. Furthermore, it is unknown how well pre-trained
language models can reason in the financial domain. The current
state-of-the-art requires a retriever to collect relevant facts about the
financial question from the text and a generator to produce a valid financial
program and a final answer. However, recently large language models like GPT-3
have achieved state-of-the-art performance on a wide variety of tasks with just a
few shot examples. We run several experiments with GPT-3 and find that a
separate retrieval model and logic engine continue to be essential components
to achieving SOTA performance in this task, particularly due to the precise
nature of financial questions and the complex information stored in financial
documents. With this understanding, our refined prompt-engineering approach on
GPT-3 achieves near SOTA accuracy without any fine-tuning.
"
S3: Social-network Simulation System with Large Language Model-Empowered  Agents,Chen Gao,http://arxiv.org/pdf/2307.14984v2.pdf,2023-07-27,['cs.si'],2307.14984v2.pdf,"  Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science.
"
Flows: Building Blocks of Reasoning and Collaborating AI,Martin Josifoski,http://arxiv.org/pdf/2308.01285v1.pdf,2023-08-02,"['cs.ai', 'cs.hc']",2308.01285v1.pdf,"  Recent advances in artificial intelligence (AI) have produced highly capable
and controllable systems. This creates unprecedented opportunities for
structured reasoning as well as collaboration among multiple AI systems and
humans. To fully realize this potential, it is essential to develop a
principled way of designing and studying such structured interactions. For this
purpose, we introduce the conceptual framework of Flows: a systematic approach
to modeling complex interactions. Flows are self-contained building blocks of
computation, with an isolated state, communicating through a standardized
message-based interface. This modular design allows Flows to be recursively
composed into arbitrarily nested interactions, with a substantial reduction of
complexity. Crucially, any interaction can be implemented using this framework,
including prior work on AI--AI and human--AI interactions, prompt engineering
schemes, and tool augmentation. We demonstrate the potential of Flows on the
task of competitive coding, a challenging task on which even GPT-4 struggles.
Our results suggest that structured reasoning and collaboration substantially
improve generalization, with AI-only Flows adding +$21$ and human--AI Flows
adding +$54$ absolute points in terms of solve rate. To support rapid and
rigorous research, we introduce the aiFlows library. The library comes with a
repository of Flows that can be easily used, extended, and composed into novel,
more complex Flows.
  The aiFlows library is available at https://github.com/epfl-dlab/aiflows.
Data and Flows for reproducing our experiments are available at
https://github.com/epfl-dlab/cc_flows.
"
Evaluating ChatGPT text-mining of clinical records for obesity  monitoring,Ivo S. Fins,http://arxiv.org/pdf/2308.01666v1.pdf,2023-08-03,"['cs.ir', 'cs.cl']",2308.01666v1.pdf,"  Background: Veterinary clinical narratives remain a largely untapped resource
for addressing complex diseases. Here we compare the ability of a large
language model (ChatGPT) and a previously developed regular expression (RegexT)
to identify overweight body condition scores (BCS) in veterinary narratives.
Methods: BCS values were extracted from 4,415 anonymised clinical narratives
using either RegexT or by appending the narrative to a prompt sent to ChatGPT
coercing the model to return the BCS information. Data were manually reviewed
for comparison. Results: The precision of RegexT was higher (100%, 95% CI
94.81-100%) than that of ChatGPT (89.3%, 95% CI 82.75-93.64%). However, the recall
of ChatGPT (100%, 95% CI 96.18-100%) was considerably higher than that of
RegexT (72.6%, 95% CI 63.92-79.94%). Limitations: Subtle prompt engineering is
needed to improve ChatGPT output. Conclusions: Large language models create
diverse opportunities and, whilst complex, present an intuitive interface to
information but require careful implementation to avoid unpredictable errors.
"
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned  Samples in NLP,Lu Yan,http://arxiv.org/pdf/2308.02122v2.pdf,2023-08-04,"['cs.cr', 'cs.cl']",2308.02122v2.pdf,"  Backdoor attacks have emerged as a prominent threat to natural language
processing (NLP) models, where the presence of specific triggers in the input
can lead poisoned models to misclassify these inputs to predetermined target
classes. Current detection mechanisms are limited by their inability to address
more covert backdoor strategies, such as style-based attacks. In this work, we
propose an innovative test-time poisoned sample detection framework that hinges
on the interpretability of model predictions, grounded in the semantic meaning
of inputs. We contend that triggers (e.g., infrequent words) are not supposed
to fundamentally alter the underlying semantic meanings of poisoned samples as
they want to stay stealthy. Based on this observation, we hypothesize that
while the model's predictions for paraphrased clean samples should remain
stable, predictions for poisoned samples should revert to their true labels
upon the mutations applied to triggers during the paraphrasing process. We
employ ChatGPT, a state-of-the-art large language model, as our paraphraser and
formulate the trigger-removal task as a prompt engineering problem. We adopt
fuzzing, a technique commonly used for unearthing software vulnerabilities, to
discover optimal paraphrase prompts that can effectively eliminate triggers
while concurrently maintaining input semantics. Experiments on 4 types of
backdoor attacks, including the subtle style backdoors, and 4 distinct datasets
demonstrate that our approach surpasses baseline methods, including STRIP, RAP,
and ONION, in precision and recall.
"
IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image  Diffusion Models,Hu Ye,http://arxiv.org/pdf/2308.06721v1.pdf,2023-08-13,"['cs.cv', 'cs.ai']",2308.06721v1.pdf,"  Recent years have witnessed the strong power of large text-to-image diffusion
models for the impressive generative capability to create high-fidelity images.
However, it is very tricky to generate desired images using only text prompt as
it often involves complex prompt engineering. An alternative to text prompt is
image prompt, as the saying goes: ""an image is worth a thousand words"".
Although existing methods of direct fine-tuning from pretrained models are
effective, they require large computing resources and are not compatible with
other base models, text prompt, and structural controls. In this paper, we
present IP-Adapter, an effective and lightweight adapter to achieve image
prompt capability for the pretrained text-to-image diffusion models. The key
design of our IP-Adapter is a decoupled cross-attention mechanism that separates
cross-attention layers for text features and image features. Despite the
simplicity of our method, an IP-Adapter with only 22M parameters can achieve
comparable or even better performance to a fully fine-tuned image prompt model.
As we freeze the pretrained diffusion model, the proposed IP-Adapter can be
generalized not only to other custom models fine-tuned from the same base
model, but also to controllable generation using existing controllable tools.
With the benefit of the decoupled cross-attention strategy, the image prompt
can also work well with the text prompt to achieve multimodal image generation.
The project page is available at \url{https://ip-adapter.github.io}.
"
LogPrompt: Prompt Engineering Towards Zero-Shot and Interpretable Log  Analysis,Yilun Liu,http://arxiv.org/pdf/2308.07610v1.pdf,2023-08-15,"['cs.se', 'cs.cl']",2308.07610v1.pdf,"  Automated log analysis is crucial in modern software-intensive systems for
ensuring reliability and resilience throughout software maintenance and
engineering life cycles. Existing methods perform tasks such as log parsing and
log anomaly detection by providing a single prediction value without
interpretation. However, given the increasing volume of system events, the
limited interpretability of analysis results hinders analysts' trust and their
ability to take appropriate actions. Moreover, these methods require
substantial in-domain training data, and their performance declines sharply (by
up to 62.5%) in online scenarios involving unseen logs from new domains, a
common occurrence due to rapid software updates. In this paper, we propose
LogPrompt, a novel zero-shot and interpretable log analysis approach. LogPrompt
employs large language models (LLMs) to perform zero-shot log analysis tasks
via a suite of advanced prompt strategies tailored for log tasks, which
enhances LLMs' performance by up to 107.5% compared with simple prompts.
Experiments on nine publicly available evaluation datasets across two tasks
demonstrate that LogPrompt, despite using no training data, outperforms
existing approaches trained on thousands of logs by up to around 50%. We also
conduct a human evaluation of LogPrompt's interpretability, with six
practitioners possessing over 10 years of experience, who highly rated the
generated content in terms of usefulness and readability (an average of 4.42/5).
LogPrompt also exhibits remarkable compatibility with open-source and
smaller-scale LLMs, making it flexible for practical deployment.
"
Transforming Sentiment Analysis in the Financial Domain with ChatGPT,Georgios Fatouros,http://arxiv.org/pdf/2308.07935v1.pdf,2023-08-13,"['cs.cl', 'cs.ai', 'cs.ce', 'cs.ir', '68t01, 68t50, 91b28, 91b30']",2308.07935v1.pdf,"  Financial sentiment analysis plays a crucial role in decoding market trends
and guiding strategic trading decisions. Despite the deployment of advanced
deep learning techniques and language models to refine sentiment analysis in
finance, this study breaks new ground by investigating the potential of large
language models, particularly ChatGPT 3.5, in financial sentiment analysis,
with a strong emphasis on the foreign exchange market (forex). Employing a
zero-shot prompting approach, we examine multiple ChatGPT prompts on a
meticulously curated dataset of forex-related news headlines, measuring
performance using metrics such as precision, recall, f1-score, and Mean
Absolute Error (MAE) of the sentiment class. Additionally, we probe the
correlation between predicted sentiment and market returns as an additional
evaluation approach. ChatGPT, compared to FinBERT, a well-established sentiment
analysis model for financial texts, exhibited approximately 35\% enhanced
performance in sentiment classification and a 36\% higher correlation with
market returns. By underlining the significance of prompt engineering,
particularly in zero-shot contexts, this study spotlights ChatGPT's potential
to substantially boost sentiment analysis in financial applications. By sharing
the utilized dataset, our intention is to stimulate further research and
advancements in the field of financial services.
"
ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based  Healthcare Decision Support using ChatGPT,Fatemeh Nazary,http://arxiv.org/pdf/2308.09731v1.pdf,2023-08-17,"['cs.ai', 'cs.cl', 'cs.lg']",2308.09731v1.pdf,"  This study presents an innovative approach to the application of large
language models (LLMs) in clinical decision-making, focusing on OpenAI's
ChatGPT. Our approach introduces the use of contextual prompts-strategically
designed to include task description, feature description, and crucially,
integration of domain knowledge-for high-quality binary classification tasks
even in data-scarce scenarios. The novelty of our work lies in the utilization
of domain knowledge, obtained from high-performing interpretable ML models, and
its seamless incorporation into prompt design. By viewing these ML models as
medical experts, we extract key insights on feature importance to aid in
decision-making processes. This interplay of domain knowledge and AI holds
significant promise in creating a more insightful diagnostic tool.
  Additionally, our research explores the dynamics of zero-shot and few-shot
prompt learning based on LLMs. By comparing the performance of OpenAI's ChatGPT
with traditional supervised ML models in different data conditions, we aim to
provide insights into the effectiveness of prompt engineering strategies under
varied data availability. In essence, this paper bridges the gap between AI and
healthcare, proposing a novel methodology for LLMs application in clinical
decision support systems. It highlights the transformative potential of
effective prompt design, domain knowledge integration, and flexible learning
approaches in enhancing automated decision-making.
"
Synergistic Integration of Large Language Models and Cognitive  Architectures for Robust AI: An Exploratory Analysis,Oscar J. Romero,http://arxiv.org/pdf/2308.09830v3.pdf,2023-08-18,['cs.ai'],2308.09830v3.pdf,"  This paper explores the integration of two AI subdisciplines employed in the
development of artificial agents that exhibit intelligent behavior: Large
Language Models (LLMs) and Cognitive Architectures (CAs). We present three
integration approaches, each grounded in theoretical models and supported by
preliminary empirical evidence. The modular approach, which introduces four
models with varying degrees of integration, makes use of chain-of-thought
prompting, and draws inspiration from augmented LLMs, the Common Model of
Cognition, and the simulation theory of cognition. The agency approach,
motivated by the Society of Mind theory and the LIDA cognitive architecture,
proposes the formation of agent collections that interact at micro and macro
cognitive levels, driven by either LLMs or symbolic components. The
neuro-symbolic approach, which takes inspiration from the CLARION cognitive
architecture, proposes a model where bottom-up learning extracts symbolic
representations from an LLM layer and top-down guidance utilizes symbolic
representations to direct prompt engineering in the LLM layer. These approaches
aim to harness the strengths of both LLMs and CAs, while mitigating their
weaknesses, thereby advancing the development of more robust AI systems. We
discuss the tradeoffs and challenges associated with each approach.
"
Manipulating Embeddings of Stable Diffusion Prompts,Niklas Deckers,http://arxiv.org/pdf/2308.12059v1.pdf,2023-08-23,"['cs.cv', 'cs.lg']",2308.12059v1.pdf,"  Generative text-to-image models such as Stable Diffusion allow users to
generate images based on a textual description, the prompt. Changing the prompt
is still the primary means for the user to change a generated image as desired.
However, changing the image by reformulating the prompt remains a difficult
process of trial and error, which has led to the emergence of prompt
engineering as a new field of research. We propose and analyze methods to
change the embedding of a prompt directly instead of the prompt text. It allows
for more fine-grained and targeted control that takes into account user
intentions. Our approach treats the generative text-to-image model as a
continuous function and passes gradients between the image space and the prompt
embedding space. By addressing different user interaction problems, we can
apply this idea in three scenarios: (1) Optimization of a metric defined in
image space that could measure, for example, image style. (2) Assistance of
users in creative tasks by enabling them to navigate the image space along a
selection of directions of ""near"" prompt embeddings. (3) Changing the embedding
of the prompt to include information that the user has seen in a particular
seed but finds difficult to describe in the prompt. Our experiments demonstrate
the feasibility of the described methods.
"
Large Language Models in Fault Localisation,Yonghao Wu,http://arxiv.org/pdf/2308.15276v3.pdf,2023-08-29,['cs.se'],2308.15276v3.pdf,"  Large Language Models (LLMs) have shown promise in multiple software
engineering tasks including code generation, program repair, code
summarisation, and test generation. Fault localisation is instrumental in
enabling automated debugging and repair of programs and was prominently
featured as a highlight during the launch event of ChatGPT-4. Nevertheless, the
performance of LLMs compared to state-of-the-art methods, as well as the impact
of prompt design and context length on their efficacy, remains unclear. To fill
this gap, this paper presents an in-depth investigation into the capability of
ChatGPT-3.5 and ChatGPT-4, the two state-of-the-art LLMs, on fault
localisation. Using the widely-adopted large-scale Defects4J dataset, we
compare the two LLMs with the existing fault localisation techniques. We also
investigate the consistency of LLMs in fault localisation, as well as how
prompt engineering and the length of code context affect the fault localisation
effectiveness.
  Our findings demonstrate that within function-level context, ChatGPT-4
outperforms all the existing fault localisation methods. Additional error logs
can further improve ChatGPT models' localisation accuracy and consistency, with
an average 46.9% higher accuracy over the state-of-the-art baseline SmartFL on
the Defects4J dataset in terms of TOP-1 metric. However, when the code context
of the Defects4J dataset expands to the class-level, ChatGPT-4's performance
suffers a significant drop, with 49.9% lower accuracy than SmartFL under TOP-1
metric. These observations indicate that although ChatGPT can effectively
localise faults under specific conditions, limitations are evident. Further
research is needed to fully harness the potential of LLMs like ChatGPT for
practical fault localisation applications.
"
Leveraging Large Language Models for Exploiting ASR Uncertainty,Pranay Dighe,http://arxiv.org/pdf/2309.04842v2.pdf,2023-09-09,"['cs.cl', 'cs.hc', 'cs.sd', 'eess.as']",2309.04842v2.pdf,"  While large language models excel in a variety of natural language processing
(NLP) tasks, to perform well on spoken language understanding (SLU) tasks, they
must either rely on off-the-shelf automatic speech recognition (ASR) systems
for transcription, or be equipped with an in-built speech modality. This work
focuses on the former scenario, where LLM's accuracy on SLU tasks is
constrained by the accuracy of a fixed ASR system on the spoken input.
Specifically, we tackle speech-intent classification task, where a high
word-error-rate can limit the LLM's ability to understand the spoken intent.
Instead of chasing a high accuracy by designing complex or specialized
architectures regardless of deployment costs, we seek to answer how far we can
go without substantially changing the underlying ASR and LLM, which can
potentially be shared by multiple unrelated tasks. To this end, we propose
prompting the LLM with an n-best list of ASR hypotheses instead of only the
error-prone 1-best hypothesis. We explore prompt-engineering to explain the
concept of n-best lists to the LLM; followed by the finetuning of Low-Rank
Adapters on the downstream tasks. Our approach using n-best lists proves to be
effective on a device-directed speech detection task as well as on a keyword
spotting task, where systems using n-best list prompts outperform those using
1-best ASR hypothesis; thus paving the way for an efficient method to exploit
ASR uncertainty via LLMs for speech-based applications.
"
Unveiling the potential of large language models in generating semantic  and cross-language clones,Palash R. Roy,http://arxiv.org/pdf/2309.06424v1.pdf,2023-09-12,"['cs.se', 'cs.ai', 'cs.lg']",2309.06424v1.pdf,"  Semantic and Cross-language code clone generation may be useful for code
reuse, code comprehension, refactoring and benchmarking. OpenAI's GPT model has
potential in such clone generation as GPT is used for text generation. When
developers copy/paste codes from Stack Overflow (SO) or within a system, there
might be inconsistent changes leading to unexpected behaviours. Similarly, if
someone possesses a code snippet in a particular programming language but seeks
equivalent functionality in a different language, a semantic cross-language
code clone generation approach could provide valuable assistance. In this
study, using SemanticCloneBench as a vehicle, we evaluated how well the GPT-3
model could help generate semantic and cross-language clone variants for a
given fragment. We have compiled a diverse set of code fragments and assessed
GPT-3's performance in generating code variants. Through extensive
experimentation and analysis, where 9 judges spent 158 hours to validate, we
investigate the model's ability to produce accurate and semantically correct
variants. Our findings shed light on GPT-3's strengths in code generation,
offering insights into the potential applications and challenges of using
advanced language models in software development. Our quantitative analysis
yields compelling results. In the realm of semantic clones, GPT-3 attains an
impressive accuracy of 62.14% and 0.55 BLEU score, achieved through few-shot
prompt engineering. Furthermore, the model shines in transcending linguistic
confines, boasting an exceptional 91.25% accuracy in generating cross-language
clones.
"
Is GPT4 a Good Trader?,Bingzhe Wu,http://arxiv.org/pdf/2309.10982v1.pdf,2023-09-20,['cs.ai'],2309.10982v1.pdf,"  Recently, large language models (LLMs), particularly GPT-4, have demonstrated
significant capabilities in various planning and reasoning tasks
\cite{cheng2023gpt4,bubeck2023sparks}. Motivated by these advancements, there
has been a surge of interest among researchers to harness the capabilities of
GPT-4 for the automated design of quantitative factors that do not overlap with
existing factor libraries, with an aspiration to achieve alpha returns
\cite{webpagequant}. In contrast to these work, this study aims to examine the
fidelity of GPT-4's comprehension of classic trading theories and its
proficiency in applying its code interpreter abilities to real-world trading
data analysis. Such an exploration is instrumental in discerning whether the
underlying logic GPT-4 employs for trading is intrinsically reliable.
Furthermore, given the acknowledged interpretative latitude inherent in most
trading theories, we seek to distill more precise methodologies of deploying
these theories from GPT-4's analytical process, potentially offering invaluable
insights to human traders.
  To achieve this objective, we selected daily candlestick (K-line) data from
specific periods for certain assets, such as the Shanghai Stock Index. Through
meticulous prompt engineering, we guided GPT-4 to analyze the technical
structures embedded within this data, based on specific theories like the
Elliott Wave Theory. We then subjected its analytical output to manual
evaluation, assessing its interpretative depth and accuracy vis-\`a-vis these
trading theories from multiple dimensions. The results and findings from this
study could pave the way for a synergistic amalgamation of human expertise and
AI-driven insights in the realm of trading.
"
AI-Copilot for Business Optimisation: A Framework and A Case Study in  Production Scheduling,Pivithuru Thejan Amarasinghe,http://arxiv.org/pdf/2309.13218v3.pdf,2023-09-22,['cs.ai'],2309.13218v3.pdf,"  Business optimisation refers to the process of finding and implementing
efficient and cost-effective means of operation to bring a competitive
advantage for businesses. Synthesizing problem formulations is an integral part
of business optimisation, which relies on human expertise to construct problem
formulations using optimisation languages. Interestingly, with advancements in
Large Language Models (LLMs), the human expertise needed in problem formulation
can be minimized. However, developing an LLM for problem formulation is
challenging, due to training data, token limitations, and lack of appropriate
performance metrics. For the requirement of training data, recent attention has
been directed towards fine-tuning pre-trained LLMs for downstream tasks rather
than training an LLM from scratch for a specific task. In this paper, we adopt
an LLM fine-tuning approach and propose an AI-Copilot for business optimisation
problem formulation. For token limitations, we introduce modularization and
prompt engineering techniques to synthesize complex problem formulations as
modules that fit into the token limits of LLMs. Additionally, we design
performance evaluation metrics that are better suited for assessing the
accuracy and quality of problem formulations. The experiment results
demonstrate that with this approach we can synthesize complex and large problem
formulations for a typical business optimisation problem in production
scheduling.
"
An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of  Service-oriented Systems,Andreas Metzger,http://arxiv.org/pdf/2309.14391v1.pdf,2023-09-25,"['cs.lg', 'cs.ai', 'cs.cl']",2309.14391v1.pdf,"  Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the
open-world assumption in service-oriented systems. Deep RL was successfully
applied to problems such as dynamic service composition, job scheduling, and
offloading, as well as service adaptation. While Deep RL offers many benefits,
understanding the decision-making of Deep RL is challenging because its learned
decision-making policy essentially appears as a black box. Yet, understanding
the decision-making of Deep RL is key to help service developers perform
debugging, support service providers to comply with relevant legal frameworks,
and facilitate service users to build trust. We introduce Chat4XAI to
facilitate the understanding of the decision-making of Deep RL by providing
natural-language explanations. Compared with visual explanations, the reported
benefits of natural-language explanations include better understandability for
non-technical users, increased user acceptance and trust, as well as more
efficient explanations. Chat4XAI leverages modern AI chatbot technology and
dedicated prompt engineering. Compared to earlier work on natural-language
explanations using classical software-based dialogue systems, using an AI
chatbot eliminates the need for eliciting and defining potential questions and
answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API
and evaluate the fidelity and stability of its explanations using an adaptive
service exemplar.
"
Batch Calibration: Rethinking Calibration for In-Context Learning and  Prompt Engineering,Han Zhou,http://arxiv.org/pdf/2309.17249v1.pdf,2023-09-29,"['cs.cl', 'cs.ai', 'cs.lg']",2309.17249v1.pdf,"  Prompting and in-context learning (ICL) have become efficient learning
paradigms for large language models (LLMs). However, LLMs suffer from prompt
brittleness and various bias factors in the prompt, including but not limited
to the formatting, the choice of verbalizers, and the ICL examples. To address
this problem that results in unexpected performance degradation, calibration
methods have been developed to mitigate the effects of these biases while
recovering LLM performance. In this work, we first conduct a systematic
analysis of the existing calibration methods, where we both provide a unified
view and reveal the failure cases. Inspired by these analyses, we propose Batch
Calibration (BC), a simple yet intuitive method that controls the contextual
bias from the batched input, unifies various prior approaches, and effectively
addresses the aforementioned issues. BC is zero-shot, inference-only, and
incurs negligible additional costs. In the few-shot setup, we further extend BC
to allow it to learn the contextual bias from labeled data. We validate the
effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate
state-of-the-art performance over previous calibration baselines across more
than 10 natural language understanding and image classification tasks.
"
Suspicion-Agent: Playing Imperfect Information Games with Theory of Mind  Aware GPT-4,Jiaxian Guo,http://arxiv.org/pdf/2309.17277v2.pdf,2023-09-29,['cs.ai'],2309.17277v2.pdf,"  Unlike perfect information games, where all elements are known to every
player, imperfect information games emulate the real-world complexities of
decision-making under uncertain or incomplete information. GPT-4, the recent
breakthrough in large language models (LLMs) trained on massive passive data,
is notable for its knowledge retrieval and reasoning abilities. This paper
delves into the applicability of GPT-4's learned knowledge for imperfect
information games. To achieve this, we introduce \textbf{Suspicion-Agent}, an
innovative agent that leverages GPT-4's capabilities for performing in
imperfect information games. With proper prompt engineering to achieve
different functions, Suspicion-Agent based on GPT-4 demonstrates remarkable
adaptability across a range of imperfect information card games. Importantly,
GPT-4 displays a strong high-order theory of mind (ToM) capacity, meaning it
can understand others and intentionally impact others' behavior. Leveraging
this, we design a planning strategy that enables GPT-4 to competently play
against different opponents, adapting its gameplay style as needed, while
requiring only the game rules and descriptions of observations as input. In the
experiments, we qualitatively showcase the capabilities of Suspicion-Agent
across three different imperfect information games and then quantitatively
evaluate it in Leduc Hold'em. The results show that Suspicion-Agent can
potentially outperform traditional algorithms designed for imperfect
information games, without any specialized training or examples. In order to
encourage and foster deeper insights within the community, we make our
game-related data publicly available.
"
Investigating the Limitation of CLIP Models: The Worst-Performing  Categories,Jie-Jing Shao,http://arxiv.org/pdf/2310.03324v1.pdf,2023-10-05,"['cs.cv', 'cs.lg']",2310.03324v1.pdf,"  Contrastive Language-Image Pre-training (CLIP) provides a foundation model by
integrating natural language into visual concepts, enabling zero-shot
recognition on downstream tasks. It is usually expected that satisfactory
overall accuracy can be achieved across numerous domains through well-designed
textual prompts. However, we found that their performance in the worst
categories is significantly inferior to the overall performance. For example,
on ImageNet, there are a total of 10 categories with class-wise accuracy as low
as 0\%, even though the overall performance has achieved 64.1\%. This
phenomenon reveals the potential risks associated with using CLIP models,
particularly in risk-sensitive applications where specific categories hold
significant importance. To address this issue, we investigate the alignment
between the two modalities in the CLIP model and propose the Class-wise
Matching Margin (CMM) to measure the inference confusion. CMM can
effectively identify the worst-performing categories and estimate the potential
performance of the candidate prompts. We further query large language models to
enrich descriptions of worst-performing categories and build a weighted
ensemble to highlight the efficient prompts. Experimental results clearly
verify the effectiveness of our proposal, where the accuracy on the worst-10
categories on ImageNet is boosted to 5.2\%, without manual prompt engineering,
laborious optimization, or access to labeled validation data.
"
Thought Propagation: An Analogical Approach to Complex Reasoning with  Large Language Models,Junchi Yu,http://arxiv.org/pdf/2310.03965v2.pdf,2023-10-06,"['cs.ai', 'cs.cl']",2310.03965v2.pdf,"  Large Language Models (LLMs) have achieved remarkable success in reasoning
tasks with the development of prompting methods. However, existing prompting
approaches cannot reuse insights of solving similar problems and suffer from
accumulated errors in multi-step reasoning, since they prompt LLMs to reason
\textit{from scratch}. To address these issues, we propose
\textbf{\textit{Thought Propagation} (TP)}, which explores the analogous
problems and leverages their solutions to enhance the complex reasoning ability
of LLMs. These analogous problems are related to the input one, with reusable
solutions and problem-solving strategies. Thus, it is promising to propagate
insights of solving previous analogous problems to inspire new problem-solving.
To achieve this, TP first prompts LLMs to propose and solve a set of analogous
problems that are related to the input one. Then, TP reuses the results of
analogous problems to directly yield a new solution or derive a
knowledge-intensive plan for execution to amend the initial solution obtained
from scratch. TP is compatible with existing prompting approaches, allowing
plug-and-play generalization and enhancement in a wide range of tasks without
much labor in task-specific prompt engineering. Experiments across three
challenging tasks demonstrate TP enjoys a substantial improvement over the
baselines by an average of 12\% absolute increase in finding the optimal
solutions in Shortest-path Reasoning, 13\% improvement of human preference in
Creative Writing, and 15\% enhancement in the task completion rate of LLM-Agent
Planning.
"
JVNV: A Corpus of Japanese Emotional Speech with Verbal Content and  Nonverbal Expressions,Detai Xin,http://arxiv.org/pdf/2310.06072v1.pdf,2023-10-09,"['cs.sd', 'eess.as']",2310.06072v1.pdf,"  We present the JVNV, a Japanese emotional speech corpus with verbal content
and nonverbal vocalizations whose scripts are generated by a large-scale
language model. Existing emotional speech corpora lack not only proper
emotional scripts but also nonverbal vocalizations (NVs) that are essential
expressions in spoken language to express emotions. We propose an automatic
script generation method to produce emotional scripts by providing seed words
with sentiment polarity and phrases of nonverbal vocalizations to ChatGPT using
prompt engineering. We select 514 scripts with balanced phoneme coverage from
the generated candidate scripts with the assistance of emotion confidence
scores and language fluency scores. We demonstrate the effectiveness of JVNV by
showing that JVNV has better phoneme coverage and emotion recognizability than
previous Japanese emotional speech corpora. We then benchmark JVNV on emotional
text-to-speech synthesis using discrete codes to represent NVs. We show that
there still exists a gap between the performance of synthesizing read-aloud
speech and emotional speech, and adding NVs in the speech makes the task even
harder, which brings new challenges for this task and makes JVNV a valuable
resource for relevant works in the future. To our best knowledge, JVNV is the
first speech corpus that generates scripts automatically using large language
models.
"
Large Language Model-Empowered Agents for Simulating Macroeconomic  Activities,Nian Li,http://arxiv.org/pdf/2310.10436v1.pdf,2023-10-16,['cs.ai'],2310.10436v1.pdf,"  The advent of the Web has brought about a paradigm shift in traditional
economics, particularly in the digital economy era, enabling the precise
recording and analysis of individual economic behavior. This has led to a
growing emphasis on data-driven modeling in macroeconomics. In macroeconomic
research, Agent-based modeling (ABM) emerged as an alternative, evolving
through rule-based agents, machine learning-enhanced decision-making, and, more
recently, advanced AI agents. However, the existing works are suffering from
three main challenges when endowing agents with human-like decision-making,
including agent heterogeneity, the influence of macroeconomic trends, and
multifaceted economic factors. Large language models (LLMs) have recently
gained prominence in offering autonomous human-like characteristics. Therefore,
leveraging LLMs in macroeconomic simulation presents an opportunity to overcome
traditional limitations. In this work, we take an early step in introducing a
novel approach that leverages LLMs in macroeconomic simulation. We design
prompt-engineering-driven LLM agents to exhibit human-like decision-making and
adaptability in the economic environment, with the abilities of perception,
reflection, and decision-making to address the abovementioned challenges.
Simulation experiments on macroeconomic activities show that LLM-empowered
agents can make realistic work and consumption decisions and give rise to more
reasonable macroeconomic phenomena than existing rule-based or AI agents. Our
work demonstrates the promising potential to simulate macroeconomics based on
LLM and its human-like characteristics.
"
Large Language Model for Multi-objective Evolutionary Optimization,Fei Liu,http://arxiv.org/pdf/2310.12541v2.pdf,2023-10-19,"['cs.ne', 'cs.ai', 'cs.cl', 'cs.et']",2310.12541v2.pdf,"  Multiobjective evolutionary algorithms (MOEAs) are major methods for solving
multiobjective optimization problems (MOPs). Many MOEAs have been proposed in
the past decades, of which the search operators need a carefully handcrafted
design with domain knowledge. Recently, some attempts have been made to replace
the manually designed operators in MOEAs with learning-based operators (e.g.,
neural network models). However, much effort is still required for designing
and training such models, and the learned operators might not generalize well
on new problems. To tackle the above challenges, this work investigates a novel
approach that leverages the powerful large language model (LLM) to design MOEA
operators. With proper prompt engineering, we successfully let a general LLM
serve as a black-box search operator for decomposition-based MOEA (MOEA/D) in a
zero-shot manner. In addition, by learning from the LLM behavior, we further
design an explicit white-box operator with randomness and propose a new version
of decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on
different test benchmarks show that our proposed method can achieve competitive
performance with widely used MOEAs. It is also promising to see the operator
only learned from a few instances can have robust generalization performance on
unseen problems with quite different patterns and settings. The results reveal
the potential benefits of using pre-trained LLMs in the design of MOEAs.
"
Vision-Language Models are Zero-Shot Reward Models for Reinforcement  Learning,Juan Rocamonde,http://arxiv.org/pdf/2310.12921v1.pdf,2023-10-19,"['cs.lg', 'cs.ai']",2310.12921v1.pdf,"  Reinforcement learning (RL) requires either manually specifying a reward
function, which is often infeasible, or learning a reward model from a large
amount of human feedback, which is often very expensive. We study a more
sample-efficient alternative: using pretrained vision-language models (VLMs) as
zero-shot reward models (RMs) to specify tasks via natural language. We propose
a natural and general approach to using VLMs as reward models, which we call
VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn
complex tasks without a manually specified reward function, such as kneeling,
doing the splits, and sitting in a lotus position. For each of these tasks, we
only provide a single sentence text prompt describing the desired task with
minimal prompt engineering. We provide videos of the trained agents at:
https://sites.google.com/view/vlm-rm. We can improve performance by providing a
second ``baseline'' prompt and projecting out parts of the CLIP embedding space
irrelevant to distinguish between goal and baseline. Further, we find a strong
scaling effect for VLM-RMs: larger VLMs trained with more compute and data are
better reward models. The failure modes of VLM-RMs we encountered are all
related to known capability limitations of current VLMs, such as limited
spatial reasoning ability or visually unrealistic environments that are far
off-distribution for the VLM. We find that VLM-RMs are remarkably robust as
long as the VLM is large enough. This suggests that future VLMs will become
more and more useful reward models for a wide range of RL applications.
"
Enhancing Zero-Shot Crypto Sentiment with Fine-tuned Language Model and  Prompt Engineering,Rahman S M Wahidur,http://arxiv.org/pdf/2310.13226v1.pdf,2023-10-20,['cs.cl'],2310.13226v1.pdf,"  Blockchain technology has revolutionized the financial landscape, with
cryptocurrencies gaining widespread adoption for their decentralized and
transparent nature. As the sentiment expressed on social media platforms can
significantly influence cryptocurrency discussions and market movements,
sentiment analysis has emerged as a crucial tool for understanding public
opinion and predicting market trends. Motivated by the aim to enhance sentiment
analysis accuracy in the cryptocurrency domain, this paper investigates
fine-tuning techniques on large language models. This paper also investigates
the efficacy of supervised fine-tuning and instruction-based fine-tuning on
large language models for unseen tasks. Experimental results demonstrate a
significant average zero-shot performance gain of 40% after fine-tuning,
highlighting the potential of this technique in optimizing pre-trained language
model efficiency. Additionally, the impact of instruction tuning on models of
varying scales is examined, revealing that larger models benefit from
instruction tuning, achieving the highest average accuracy score of 75.16%. In
contrast, smaller-scale models may experience reduced generalization due to the
complete utilization of model capacity. To gain deeper insight about how
instruction works with these language models, this paper presents an
experimental investigation into the response of an instruction-based model
under different instruction tuning setups. The investigation demonstrates that
the model achieves an average accuracy score of 72.38% for short and simple
instructions. This performance significantly outperforms its accuracy under
long and complex instructions by over 12%, thereby effectively highlighting the
profound significance of instruction characteristics in maximizing model
performance.
"
Can LLMs Grade Short-answer Reading Comprehension Questions :  Foundational Literacy Assessment in LMICs,Owen Henkel,http://arxiv.org/pdf/2310.18373v1.pdf,2023-10-26,"['cs.cl', 'cs.ai']",2310.18373v1.pdf,"  This paper presents emerging evidence of using generative large language
models (i.e., GPT-4) to reliably evaluate short-answer reading comprehension
questions. Specifically, we explore how various configurations of generative
LLMs are able to evaluate student responses from a new dataset, drawn from a
battery of reading assessments conducted with over 150 students in Ghana. As
this dataset is novel and hence not used in training runs of GPT, it offers an
opportunity to test for domain shift and evaluate the generalizability of
generative LLMs, which are predominantly designed and trained on data from
high-income North American countries. We found that GPT-4, with minimal prompt
engineering, performed extremely well on evaluating the novel dataset (Quadratic
Weighted Kappa 0.923, F1 0.88), substantially outperforming transfer-learning
based approaches, and even exceeding expert human raters (Quadratic Weighted
Kappa 0.915, F1 0.87). To the best of our knowledge, our work is the first to
empirically evaluate the performance of generative LLMs on short-answer reading
comprehension questions, using real student data, and suggests that generative
LLMs have the potential to reliably evaluate foundational literacy. Currently
the assessment of formative literacy and numeracy is infrequent in many low and
middle-income countries (LMICs) due to the cost and operational complexities of
conducting them at scale. Automating the grading process for reading assessment
could enable wider usage, and in turn improve decision-making regarding
curricula, school management, and teaching practice at the classroom level.
Importantly, in contrast to transfer-learning-based approaches, generative LLMs
generalize well and the technical barriers to their use are low, making them
more feasible to implement and scale in lower resource educational contexts.
"
Promise:Prompt-driven 3D Medical Image Segmentation Using Pretrained  Image Foundation Models,Hao Li,http://arxiv.org/pdf/2310.19721v2.pdf,2023-10-30,"['eess.iv', 'cs.cv']",2310.19721v2.pdf,"  To address prevalent issues in medical imaging, such as data acquisition
challenges and label availability, transfer learning from natural to medical
image domains serves as a viable strategy to produce reliable segmentation
results. However, several existing barriers between domains need to be broken
down, including addressing contrast discrepancies, managing anatomical
variability, and adapting 2D pretrained models for 3D segmentation tasks. In
this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation
model using only a single point prompt to leverage knowledge from a pretrained
2D image foundation model. In particular, we use the pretrained vision
transformer from the Segment Anything Model (SAM) and integrate lightweight
adapters to extract depth-related (3D) spatial context without updating the
pretrained weights. For robust results, a hybrid network with complementary
encoders is designed, and a boundary-aware loss is proposed to achieve precise
boundaries. We evaluate our model on two public datasets for colon and pancreas
tumor segmentations, respectively. Compared to the state-of-the-art
segmentation methods with and without prompt engineering, our proposed method
achieves superior performance. The code is publicly available at
https://github.com/MedICL-VU/ProMISe.
"
Making Large Language Models Better Data Creators,Dong-Ho Lee,http://arxiv.org/pdf/2310.20111v1.pdf,2023-10-31,['cs.cl'],2310.20111v1.pdf,"  Although large language models (LLMs) have advanced the state-of-the-art in
NLP significantly, deploying them for downstream applications is still
challenging due to cost, responsiveness, control, or concerns around privacy
and security. As such, trainable models are still the preferred option in some
cases. However, these models still require human-labeled data for optimal
performance, which is expensive and time-consuming to obtain. In order to
address this issue, several techniques to reduce human effort involve labeling
or generating data using LLMs. Although these methods are effective for certain
applications, in practice they encounter difficulties in real-world scenarios.
Labeling data requires careful data selection, while generating data
necessitates task-specific prompt engineering. In this paper, we propose a
unified data creation pipeline that requires only a single formatting example,
and which is applicable to a broad range of tasks, including traditionally
problematic ones with semantically devoid label spaces. In our experiments we
demonstrate that instruction-following LLMs are highly cost-effective data
creators, and that models trained with these data exhibit performance better
than those trained with human-labeled data (by up to 17.5%) on
out-of-distribution evaluation, while maintaining comparable performance on
in-distribution tasks. These results have important implications for the
robustness of NLP systems deployed in the real-world.
"
VisPercep: A Vision-Language Approach to Enhance Visual Perception for  People with Blindness and Low Vision,Yu Hao,http://arxiv.org/pdf/2310.20225v1.pdf,2023-10-31,"['cs.cv', 'cs.ai']",2310.20225v1.pdf,"  People with blindness and low vision (pBLV) encounter substantial challenges
when it comes to comprehensive scene recognition and precise object
identification in unfamiliar environments. Additionally, due to the vision
loss, pBLV have difficulty in accessing and identifying potential tripping
hazards on their own. In this paper, we present a pioneering approach that
leverages a large vision-language model to enhance visual perception for pBLV,
offering detailed and comprehensive descriptions of the surrounding
environments and providing warnings about the potential risks. Our method
begins by leveraging a large image tagging model (i.e., Recognize Anything
(RAM)) to identify all common objects present in the captured images. The
recognition results and user query are then integrated into a prompt, tailored
specifically for pBLV using prompt engineering. By combining the prompt and
input image, a large vision-language model (i.e., InstructBLIP) generates
detailed and comprehensive descriptions of the environment and identifies
potential risks in the environment by analyzing the environmental objects and
scenes, relevant to the prompt. We evaluate our approach through experiments
conducted on both indoor and outdoor datasets. Our results demonstrate that our
method is able to recognize objects accurately and provide insightful
descriptions and analysis of the environment for pBLV.
"
BigBIO: A Framework for Data-Centric Biomedical Natural Language  Processing,Jason Alan Fries,http://arxiv.org/pdf/2206.15076v1.pdf,2022-06-30,['cs.cl'],2206.15076v1.pdf,"  Training and evaluating language models increasingly requires the
construction of meta-datasets -- diverse collections of curated data with clear
provenance. Natural language prompting has recently led to improved zero-shot
generalization by transforming existing, supervised datasets into a diversity
of novel pretraining tasks, highlighting the benefits of meta-dataset curation.
While successful in general-domain text, translating these data-centric
approaches to biomedical language modeling remains challenging, as labeled
biomedical datasets are significantly underrepresented in popular data hubs. To
address this challenge, we introduce BigBIO a community library of 126+
biomedical NLP datasets, currently covering 12 task categories and 10+
languages. BigBIO facilitates reproducible meta-dataset curation via
programmatic access to datasets and their metadata, and is compatible with
current platforms for prompt engineering and end-to-end few/zero shot language
model evaluation. We discuss our process for task schema harmonization, data
auditing, contribution guidelines, and outline two illustrative use cases:
zero-shot evaluation of biomedical prompts and large-scale, multi-task
learning. BigBIO is an ongoing community effort and is available at
https://github.com/bigscience-workshop/biomedical
"
GPT Takes the Bar Exam,Michael Bommarito II,http://arxiv.org/pdf/2212.14402v1.pdf,2022-12-29,"['cs.cl', 'cs.ai', 'cs.lg']",2212.14402v1.pdf,"  Nearly all jurisdictions in the United States require a professional license
exam, commonly referred to as ""the Bar Exam,"" as a precondition for law
practice. To even sit for the exam, most jurisdictions require that an
applicant completes at least seven years of post-secondary education, including
three years at an accredited law school. In addition, most test-takers also
undergo weeks to months of further, exam-specific preparation. Despite this
significant investment of time and capital, approximately one in five
test-takers still score under the rate required to pass the exam on their first
try. In the face of a complex task that requires such depth of knowledge, what,
then, should we expect of the state of the art in ""AI?"" In this research, we
document our experimental evaluation of the performance of OpenAI's
`text-davinci-003` model, often referred to as GPT-3.5, on the multistate
multiple choice (MBE) section of the exam. While we find no benefit in
fine-tuning over GPT-3.5's zero-shot performance at the scale of our training
data, we do find that hyperparameter optimization and prompt engineering
positively impacted GPT-3.5's zero-shot performance. For best prompt and
parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete
NCBE MBE practice exam, significantly in excess of the 25% baseline guessing
rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's
ranking of responses is also highly-correlated with correctness; its top two
and top three choices are correct 71% and 88% of the time, respectively,
indicating very strong non-entailment performance. While our ability to
interpret these results is limited by nascent scientific understanding of LLMs
and the proprietary nature of GPT, we believe that these results strongly
suggest that an LLM will pass the MBE component of the Bar Exam in the near
future.
"
Few-shot Multimodal Multitask Multilingual Learning,Aman Chadha,http://arxiv.org/pdf/2303.12489v1.pdf,2023-02-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.mm']",2303.12489v1.pdf,"  While few-shot learning as a transfer learning paradigm has gained
significant traction for scenarios with limited data, it has primarily been
explored in the context of building unimodal and unilingual models.
Furthermore, a significant part of the existing literature in the domain of
few-shot multitask learning perform in-context learning which requires manually
generated prompts as the input, yielding varying outcomes depending on the
level of manual prompt-engineering. In addition, in-context learning suffers
from substantial computational, memory, and storage costs which eventually
leads to high inference latency because it involves running all of the prompt's
examples through the model every time a prediction is made. In contrast,
methods based on the transfer learning via the fine-tuning paradigm avoid the
aforementioned issues at a one-time cost of fine-tuning weights on a per-task
basis. However, such methods lack exposure to few-shot multimodal multitask
learning. In this paper, we propose few-shot learning for a multimodal
multitask multilingual (FM3) setting by adapting pre-trained vision and
language models using task-specific hypernetworks and contrastively fine-tuning
them to enable few-shot learning. FM3's architecture combines the best of both
worlds of in-context and fine-tuning based learning and consists of three major
components: (i) multimodal contrastive fine-tuning to enable few-shot learning,
(ii) hypernetwork task adaptation to perform multitask learning, and (iii)
task-specific output heads to cater to a plethora of diverse tasks. FM3 learns
the most prominent tasks in the vision and language domains along with their
intersections, namely visual entailment (VE), visual question answering (VQA),
and natural language understanding (NLU) tasks such as named entity
recognition (NER) and the GLUE benchmark including QNLI, MNLI, QQP, and SST-2.
"
Improving Few-Shot Prompts with Relevant Static Analysis Products,Toufique Ahmed,http://arxiv.org/pdf/2304.06815v2.pdf,2023-04-13,"['cs.se', 'cs.lg']",2304.06815v2.pdf,"  Large Language Models (LLM) are a new class of computation engines,
""programmed"" via prompt engineering. We are still learning how to best
""program"" these LLMs to help developers. We start with the intuition that
developers tend to consciously and unconsciously have a collection of semantics
facts in mind when working on coding tasks. Mostly these are shallow, simple
facts arising from a quick read. For a function, examples of facts might
include parameter and local variable names, return expressions, simple pre- and
post-conditions, and basic control and data flow, etc.
  One might assume that the powerful multi-layer architecture of
transformer-style LLMs makes them inherently capable of doing this simple level
of ""code analysis"" and extracting such information, implicitly, while
processing code: but are they, really? If they aren't, could explicitly adding
this information help? Our goal here is to investigate this question, using the
code summarization task and evaluate whether automatically augmenting an LLM's
prompt with semantic facts explicitly, actually helps.
  Prior work shows that LLM performance on code summarization benefits from
few-shot samples drawn either from the same-project or from examples found via
information retrieval methods (such as BM25). While summarization performance
has steadily increased since the early days, there is still room for
improvement: LLM performance on code summarization still lags its performance
on natural-language tasks like translation and text summarization.
  We find that adding semantic facts actually does help! This approach improves
performance in several different settings suggested by prior work, including
for two different Large Language Models. In most cases, improvement nears or
exceeds 2 BLEU; for the PHP language in the challenging CodeSearchNet dataset,
this augmentation actually yields performance surpassing 30 BLEU.
"
Evaluation of GPT-3.5 and GPT-4 for supporting real-world information  needs in healthcare delivery,Debadutta Dash,http://arxiv.org/pdf/2304.13714v3.pdf,2023-04-26,"['cs.ai', 'cs.cl', 'cs.ir']",2304.13714v3.pdf,"  Despite growing interest in using large language models (LLMs) in healthcare,
current explorations do not assess the real-world utility and safety of LLMs in
clinical settings. Our objective was to determine whether two LLMs can serve
information needs submitted by physicians as questions to an informatics
consultation service in a safe and concordant manner. Sixty six questions from
an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple
prompts. 12 physicians assessed the LLM responses' possibility of patient harm
and concordance with existing reports from an informatics consultation service.
Physician assessments were summarized based on majority vote. For no questions
did a majority of physicians deem either LLM response as harmful. For GPT-3.5,
responses to 8 questions were concordant with the informatics consult report,
20 discordant, and 9 were unable to be assessed. There were 29 responses with
no majority on ""Agree"", ""Disagree"", and ""Unable to assess"". For GPT-4,
responses to 13 questions were concordant, 15 discordant, and 3 were unable to
be assessed. There were 35 responses with no majority. Responses from both LLMs
were largely devoid of overt harm, but less than 20% of the responses agreed
with an answer from an informatics consultation service, responses contained
hallucinated references, and physicians were divided on what constitutes harm.
These results suggest that while general purpose LLMs are able to provide safe
and credible responses, they often do not meet the specific information need of
a given question. A definitive evaluation of the usefulness of LLMs in
healthcare settings will likely require additional research on prompt
engineering, calibration, and custom-tailoring of general purpose models.
"
Zelda: Video Analytics using Vision-Language Models,Francisco Romero,http://arxiv.org/pdf/2305.03785v2.pdf,2023-05-05,['cs.db'],2305.03785v2.pdf,"  Advances in ML have motivated the design of video analytics systems that
allow for structured queries over video datasets. However, existing systems
limit query expressivity, require users to specify an ML model per predicate,
rely on complex optimizations that trade off accuracy for performance, and
return large amounts of redundant and low-quality results. This paper focuses
on the recently developed Vision-Language Models (VLMs) that allow users to
query images using natural language like ""cars during daytime at traffic
intersections."" Through an in-depth analysis, we show VLMs address three
limitations of current video analytics systems: general expressivity, a single
general purpose model to query many predicates, and are both simple and fast.
However, VLMs still return large numbers of redundant and low-quality results
that can overwhelm and burden users. In addition, VLMs often require manual
prompt engineering to improve result relevance.
  We present Zelda: a video analytics system that uses VLMs to return both
relevant and semantically diverse results for top-K queries on large video
datasets. Zelda prompts the VLM with the user's query in natural language.
Zelda then automatically adds discriminator and synonym terms to boost
accuracy, and terms to identify low-quality frames. To improve result
diversity, Zelda uses semantic-rich VLM embeddings in an algorithm that prunes
similar frames while considering their relevance to the query and the number of
top-K results requested. We evaluate Zelda across five datasets and 19 queries
and quantitatively show it achieves higher mean average precision (up to 1.15x)
and improves average pairwise similarity (up to 1.16x) compared to using VLMs
out-of-the-box. We also compare Zelda to a state-of-the-art video analytics
engine and show that Zelda retrieves results 7.5x (up to 10.4x) faster for the
same accuracy and frame diversity.
"
ConES: Concept Embedding Search for Parameter Efficient Tuning Large  Vision Language Models,Huahui Yi,http://arxiv.org/pdf/2305.18993v1.pdf,2023-05-30,['cs.cv'],2305.18993v1.pdf,"  Large pre-trained vision-language models have shown great prominence in
transferring pre-acquired knowledge to various domains and downstream tasks
with appropriate prompting or tuning. Existing prevalent tuning methods can be
generally categorized into three genres: 1) prompt engineering by creating
suitable prompt texts, which is time-consuming and requires domain expertise;
2) simply fine-tuning the whole model, which is extremely inefficient; and 3)
prompt tuning through parameterized prompt embeddings with the text encoder.
Nevertheless, all methods rely on the text encoder for bridging the modality
gap between vision and language. In this work, we question the necessity of the
cumbersome text encoder for a more lightweight and efficient tuning paradigm as
well as more representative prompt embeddings closer to the image
representations. To achieve this, we propose a Concept Embedding Search (ConES)
approach by optimizing prompt embeddings -- without the need of the text
encoder -- to capture the 'concept' of the image modality through a variety of
task objectives. By dropping the text encoder, we are able to significantly
speed up the learning process, e.g., from about an hour to just ten minutes in
our experiments for personalized text-to-image generation without impairing the
generation quality. Moreover, our proposed approach is orthogonal to current
existing tuning methods since the searched concept embeddings can be further
utilized in the next stage of fine-tuning the pre-trained large models for
boosting performance. Extensive experiments show that our approach can beat the
prompt tuning and textual inversion methods in a variety of downstream tasks
including object detection, instance segmentation, and image generation. Our
approach also shows better generalization capability for unseen concepts in
specialized domains, such as the medical domain.
"
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF  Synthesis,Zhiling Zheng,http://arxiv.org/pdf/2306.11296v2.pdf,2023-06-20,"['cs.ir', 'cond-mat.mtrl-sci', 'cs.cl', 'physics.chem-ph']",2306.11296v2.pdf,"  We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines.
"
Identifying and Extracting Rare Disease Phenotypes with Large Language  Models,Cathy Shyr,http://arxiv.org/pdf/2306.12656v1.pdf,2023-06-22,"['cs.cl', 'cs.ai']",2306.12656v1.pdf,"  Rare diseases (RDs) are collectively common and affect 300 million people
worldwide. Accurate phenotyping is critical for informing diagnosis and
treatment, but RD phenotypes are often embedded in unstructured text and
time-consuming to extract manually. While natural language processing (NLP)
models can perform named entity recognition (NER) to automate extraction, a
major bottleneck is the development of a large, annotated corpus for model
training. Recently, prompt learning emerged as an NLP paradigm that can lead to
more generalizable results without any (zero-shot) or few labeled samples
(few-shot). Despite growing interest in ChatGPT, a revolutionary large language
model capable of following complex human prompts and generating high-quality
responses, none have studied its NER performance for RDs in the zero- and
few-shot settings. To this end, we engineered novel prompts aimed at extracting
RD phenotypes and, to the best of our knowledge, are the first to establish a
benchmark for evaluating ChatGPT's performance in these settings. We compared
its performance to the traditional fine-tuning approach and conducted an
in-depth error analysis. Overall, fine-tuning BioClinicalBERT resulted in
higher performance (F1 of 0.689) than ChatGPT (F1 of 0.472 and 0.591 in the
zero- and few-shot settings, respectively). Despite this, ChatGPT achieved
similar or higher accuracy for certain entities (i.e., rare diseases and signs)
in the one-shot setting (F1 of 0.776 and 0.725). This suggests that with
appropriate prompt engineering, ChatGPT has the potential to match or
outperform fine-tuned language models for certain entity types with just one
labeled sample. While the proliferation of large language models may provide
opportunities for supporting RD diagnosis and treatment, researchers and
clinicians should critically evaluate model outputs and be well-informed of
their limitations.
"
Demonstrations of the Potential of AI-based Political Issue Polling,Nathan E. Sanders,http://arxiv.org/pdf/2307.04781v2.pdf,2023-07-10,['cs.cy'],2307.04781v2.pdf,"  Political polling is a multi-billion dollar industry with outsized influence
on the societal trajectory of the United States and nations around the world.
However, it has been challenged by factors that stress its cost, availability,
and accuracy. At the same time, artificial intelligence (AI) chatbots have
become compelling stand-ins for human behavior, powered by increasingly
sophisticated large language models (LLMs). Could AI chatbots be an effective
tool for anticipating public opinion on controversial issues to the extent that
they could be used by campaigns, interest groups, and polling firms? We have
developed a prompt engineering methodology for eliciting human-like survey
responses from ChatGPT, which simulate the response to a policy question of a
person described by a set of demographic factors, and produce both an ordinal
numeric response score and a textual justification. We execute large scale
experiments, querying for thousands of simulated responses at a cost far lower
than human surveys. We compare simulated data to human issue polling data from
the Cooperative Election Study (CES). We find that ChatGPT is effective at
anticipating both the mean level and distribution of public opinion on a
variety of policy issues such as abortion bans and approval of the US Supreme
Court, particularly in their ideological breakdown (correlation typically
>85%). However, it is less successful at anticipating demographic-level
differences. Moreover, ChatGPT tends to overgeneralize to new policy issues
that arose after its training data was collected, such as US support for
involvement in the war in Ukraine. Our work has implications for our
understanding of the strengths and limitations of the current generation of AI
chatbots as virtual publics or online listening platforms, future directions
for LLM development, and applications of AI tools to the political domain.
(Abridged)
"
Go Beyond The Obvious: Probing the gap of INFORMAL reasoning ability  between Humanity and LLMs by Detective Reasoning Puzzle Benchmark,Zhouhon Gu,http://arxiv.org/pdf/2307.05113v2.pdf,2023-07-11,['cs.cl'],2307.05113v2.pdf,"  Informal reasoning ability is the ability to reason based on common sense,
experience, and intuition. Humans use informal reasoning every day to extract
the most influential elements for their decision-making from a large amount of
life-like information. With the rapid development of language models, hope has
emerged for realizing general artificial intelligence. Given the outstanding
informal reasoning ability of humans, how much informal reasoning ability
language models possess has not been well studied. To explore the gap between
humans and language models in informal reasoning ability, this paper constructs
a Detective Reasoning Benchmark, an assembly of 1,200 questions gathered from
accessible online resources, which aims to evaluate a model's informal
reasoning ability in real-life contexts. Considering that improvement of a
model's informal reasoning ability is restricted by the lack of benchmarks, we
further propose a Self-Question Prompt Framework that mimics human thinking to
enhance the model's informal reasoning ability. The goals of self-question are
to find key elements, deeply investigate the connections between these
elements, relate each element to the problem, and finally require the model to
reasonably answer the problem. The experimental results show that human
performance greatly outperforms SoTA language models on the Detective Reasoning
Benchmark. Moreover, Self-Question proves to be the most effective prompt
engineering technique for improving GPT-4's informal reasoning ability, yet it
still does not surpass the lowest score achieved by human participants. Upon
acceptance of the paper, the source code for the benchmark will be made
publicly accessible.
"
Benchmarking Causal Study to Interpret Large Language Models for Source  Code,Daniel Rodriguez-Cardenas,http://arxiv.org/pdf/2308.12415v1.pdf,2023-08-23,"['cs.se', 'cs.ai']",2308.12415v1.pdf,"  One of the most common solutions adopted by software researchers to address
code generation is by training Large Language Models (LLMs) on massive amounts
of source code. Although a number of studies have shown that LLMs have been
effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu),
previous research has largely overlooked the role of Causal Inference as a
fundamental component of the interpretability of LLMs' performance. Existing
benchmarks and datasets are meant to highlight the difference between the
expected and the generated outcome, but do not take into account confounding
variables (e.g., lines of code, prompt size) that equally influence the
accuracy metrics. The fact remains that, when dealing with generative software
tasks with LLMs, no benchmark is available to tell researchers how to quantify
either the causal effect of SE-based treatments or the correlation of
confounders with the model's performance. In an effort to bring statistical rigor
to the evaluation of LLMs, this paper introduces a benchmarking strategy named
Galeras, comprising curated testbeds for three SE tasks (i.e., code
completion, code summarization, and commit generation) to aid the
interpretation of LLMs' performance. We illustrate the insights of our
benchmarking strategy by conducting a case study on the performance of ChatGPT
under distinct prompt engineering methods. The results of the case study
demonstrate the positive causal influence of prompt semantics on ChatGPT's
generative performance by an average treatment effect of $\approx 3\%$.
Moreover, it was found that confounders such as prompt size are highly
correlated with accuracy metrics ($\approx 0.412$). The end result of our
case study is to showcase causal inference evaluations, in practice, to reduce
confounding bias. By reducing the bias, we offer an interpretable solution for
the accuracy metric under analysis.
"
GPTCloneBench: A comprehensive benchmark of semantic clones and  cross-language clones using GPT-3 model and SemanticCloneBench,Ajmain Inqiad Alam,http://arxiv.org/pdf/2308.13963v2.pdf,2023-08-26,['cs.se'],2308.13963v2.pdf,"  With the emergence of Machine Learning, there has been a surge in leveraging
its capabilities for problem-solving across various domains. In the code clone
realm, the identification of type-4 or semantic clones has emerged as a crucial
yet challenging task. Researchers aim to utilize Machine Learning to tackle
this challenge, often relying on the BigCloneBench dataset. However, it's worth
noting that BigCloneBench, originally not designed for semantic clone
detection, presents several limitations that hinder its suitability as a
comprehensive training dataset for this specific purpose. Furthermore, CLCDSA
dataset suffers from a lack of reusable examples aligning with real-world
software systems, rendering it inadequate for cross-language clone detection
approaches. In this work, we present a comprehensive semantic clone and
cross-language clone benchmark, GPTCloneBench by exploiting SemanticCloneBench
and OpenAI's GPT-3 model. In particular, using code fragments from
SemanticCloneBench as sample inputs along with appropriate prompt engineering
for GPT-3 model, we generate semantic and cross-language clones for these
specific fragments and then conduct a combination of extensive manual analysis,
tool-assisted filtering, functionality testing and automated validation in
building the benchmark. From 79,928 clone pairs of GPT-3 output, we created a
benchmark with 37,149 true semantic clone pairs, 19,288 false semantic
pairs (Type-1/Type-2), and 20,770 cross-language clones across four languages
(Java, C, C#, and Python). Our benchmark is 15-fold larger than
SemanticCloneBench, has more functional code examples for software systems and
broader programming language support than CLCDSA, and overcomes BigCloneBench's
limitations in quality, quantity, and language variety.
"
"AI Foundation Models for Weather and Climate: Applications, Design, and  Implementation",S. Karthik Mukkavilli,http://arxiv.org/pdf/2309.10808v2.pdf,2023-09-19,"['cs.lg', 'cs.ai', 'physics.ao-ph', '68t07 (primary), 68t01, 86a08', 'i.2.0; i.4.0; j.2.5']",2309.10808v2.pdf,"  Machine learning and deep learning methods have been widely explored in
understanding the chaotic behavior of the atmosphere and furthering weather
forecasting. There has been increasing interest from technology companies,
government institutions, and meteorological agencies in building digital twins
of the Earth. Recent approaches using transformers, physics-informed machine
learning, and graph neural networks have demonstrated state-of-the-art
performance on relatively narrow spatiotemporal scales and specific tasks. With
the recent success of generative artificial intelligence (AI) using pre-trained
transformers for language modeling and vision with prompt engineering and
fine-tuning, we are now moving towards generalizable AI. In particular, we are
witnessing the rise of AI foundation models that can perform competitively on
multiple domain-specific downstream tasks. Despite this progress, we are still
in the nascent stages of a generalizable AI model for global Earth system
models, regional climate models, and mesoscale weather models. Here, we review
current state-of-the-art AI approaches, primarily from transformer and operator
learning literature in the context of meteorology. We provide our perspective
on criteria for success towards a family of foundation models for nowcasting
and forecasting weather and climate predictions. We also discuss how such
models can perform competitively on downstream tasks such as downscaling
(super-resolution), identifying conditions conducive to the occurrence of
wildfires, and predicting consequential meteorological phenomena across various
spatiotemporal scales such as hurricanes and atmospheric rivers. In particular,
we examine current AI methodologies and contend they have matured enough to
design and implement a weather foundation model.
"
Exploring Small Language Models with Prompt-Learning Paradigm for  Efficient Domain-Specific Text Classification,Hengyu Luo,http://arxiv.org/pdf/2309.14779v1.pdf,2023-09-26,"['cs.cl', 'cs.ai', 'cs.lg']",2309.14779v1.pdf,"  Domain-specific text classification faces the challenge of scarce labeled
data due to the high cost of manual labeling. Prompt-learning, known for its
efficiency in few-shot scenarios, is proposed as an alternative to traditional
fine-tuning methods. Moreover, although large language models (LLMs) have
gained prominence, small language models (SLMs, with under 1B parameters) offer
significant customizability, adaptability, and cost-effectiveness for
domain-specific tasks, given industry constraints. In this study, we
investigate the potential of SLMs combined with prompt-learning paradigm for
domain-specific text classification, specifically within customer-agent
interactions in retail. Our evaluations show that, in few-shot settings when
prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M
parameters, achieves approximately 75% accuracy with limited labeled data (up to
15% of the full data), which shows the great potential of SLMs with prompt-learning.
Based on this, we further validate the effectiveness of active few-shot
sampling and the ensemble strategy in the prompt-learning pipeline that
contribute to a remarkable performance gain. Besides, in zero-shot settings
with a fixed model, we underscore a pivotal observation that, although the
GPT-3.5-turbo equipped with around 154B parameters garners an accuracy of
55.16%, the power of well designed prompts becomes evident when the
FLAN-T5-large, a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves
an accuracy exceeding 31% with the optimized prompt, a leap from its sub-18%
performance with an unoptimized one. Our findings underscore the promise of
prompt-learning in classification tasks with SLMs, emphasizing the benefits of
active few-shot sampling and ensemble strategies in few-shot settings, and the
importance of prompt engineering in zero-shot settings.
"
Label Supervised LLaMA Finetuning,Zongxi Li,http://arxiv.org/pdf/2310.01208v1.pdf,2023-10-02,['cs.cl'],2310.01208v1.pdf,"  The recent success of Large Language Models (LLMs) has gained significant
attention in both academia and industry. Substantial efforts have been made to
enhance the zero- and few-shot generalization capabilities of open-source LLMs
through finetuning. Currently, the prevailing approach is instruction-tuning,
which trains LLMs to complete real-world tasks by generating responses guided
by natural language instructions. It is worth noting that such an approach
may underperform in sequence and token classification tasks. Unlike text
generation tasks, classification tasks have a limited label space, where
precise label prediction is more appreciated than generating diverse and
human-like responses. Prior research has unveiled that instruction-tuned LLMs
cannot outperform BERT, prompting us to explore the potential of leveraging
latent representations from LLMs for supervised label prediction. In this
paper, we introduce a label-supervised adaptation for LLMs, which aims to
finetune the model with discriminant labels. We evaluate this approach with
Label Supervised LLaMA (LS-LLaMA), based on LLaMA-2-7B, a relatively
small-scale LLM that can be finetuned on a single GeForce RTX4090 GPU. We
extract latent representations from the final LLaMA layer and project them into
the label space to compute the cross-entropy loss. The model is finetuned by
Low-Rank Adaptation (LoRA) to minimize this loss. Remarkably, without intricate
prompt engineering or external knowledge, LS-LLaMA substantially outperforms
LLMs ten times its size and demonstrates consistent improvements
compared to robust baselines like BERT-Large and RoBERTa-Large in text
classification. Moreover, by removing the causal mask from decoders, LS-unLLaMA
achieves the state-of-the-art performance in named entity recognition (NER).
Our work will shed light on a novel approach to adapting LLMs for various
downstream tasks.
"
Mini-DALLE3: Interactive Text to Image by Prompting Large Language  Models,Zeqiang Lai,http://arxiv.org/pdf/2310.07653v2.pdf,2023-10-11,['cs.ai'],2310.07653v2.pdf,"  The revolution of artificial intelligence content generation has been rapidly
accelerated with the booming text-to-image (T2I) diffusion models. Within just
two years of development, state-of-the-art models could generate images of
unprecedented quality, diversity, and creativity. However, a
prevalent limitation persists in the effective communication with these popular
T2I models, such as Stable Diffusion, using natural language descriptions. This
typically makes an engaging image hard to obtain without expertise in prompt
engineering with complex word compositions, magic tags, and annotations.
Inspired by the recently released DALLE3 - a T2I model built directly into
ChatGPT that talks human language - we revisit existing T2I systems
endeavoring to align with human intent and introduce a new task - interactive
text to image (iT2I), where people can interact with an LLM for interleaved
high-quality image generation/editing/refinement and question answering with
stronger image-text correspondence using natural language. In addressing
the iT2I problem, we present a simple approach that augments LLMs for iT2I with
prompting techniques and off-the-shelf T2I models. We evaluate our approach for
iT2I in a variety of commonly used scenarios under different LLMs, e.g., ChatGPT,
LLAMA, Baichuan, and InternLM. We demonstrate that our approach could be a
convenient and low-cost way to introduce the iT2I ability to any existing LLMs
and any text-to-image models without any training, while bringing little
degradation to LLMs' inherent capabilities in, e.g., question answering and
code generation. We hope this work could draw broader attention and provide
inspiration for boosting user experience in human-machine interactions
alongside the image quality of the next-generation T2I systems.
"
Promptor: A Conversational and Autonomous Prompt Generation Agent for  Intelligent Text Entry Techniques,Junxiao Shen,http://arxiv.org/pdf/2310.08101v2.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08101v2.pdf,"  Text entry is an essential task in our day-to-day digital interactions.
Numerous intelligent features have been developed to streamline this process,
making text entry more effective, efficient, and fluid. These improvements
include sentence prediction and user personalization. However, as deep
learning-based language models become the norm for these advanced features, the
necessity for data collection and model fine-tuning increases. These challenges
can be mitigated by harnessing the in-context learning capability of large
language models such as GPT-3.5. This unique feature allows the language model
to acquire new skills through prompts, eliminating the need for data collection
and fine-tuning. Consequently, large language models can learn various text
prediction techniques. We initially showed that, for a sentence prediction
task, merely prompting GPT-3.5 surpassed a GPT-2-backed system and was
comparable with a fine-tuned GPT-3.5 model, with the latter two methods
requiring costly data collection, fine-tuning and post-processing. However, the
task of prompting large language models to specialize in specific text
prediction tasks can be challenging, particularly for designers without
expertise in prompt engineering. To address this, we introduce Promptor, a
conversational prompt generation agent designed to engage proactively with
designers. Promptor can automatically generate complex prompts tailored to meet
specific needs, thus offering a solution to this challenge. We conducted a user
study involving 24 participants creating prompts for three intelligent text
entry tasks, half of the participants used Promptor while the other half
designed prompts themselves. The results show that Promptor-designed prompts
result in a 35% increase in similarity and 22% in coherence over those by
designers.
"
Human-in-the-loop Machine Translation with Large Language Model,Xinyi Yang,http://arxiv.org/pdf/2310.08908v1.pdf,2023-10-13,['cs.cl'],2310.08908v1.pdf,"  The large language model (LLM) has garnered significant attention due to its
in-context learning mechanisms and emergent capabilities. The research
community has conducted several pilot studies to apply LLMs to machine
translation tasks and evaluate their performance from diverse perspectives.
However, previous research has primarily focused on the LLM itself and has not
explored human intervention in the inference process of LLM. The
characteristics of LLM, such as in-context learning and prompt engineering,
closely mirror human cognitive abilities in language tasks, offering an
intuitive solution for human-in-the-loop generation. In this study, we propose
a human-in-the-loop pipeline that guides LLMs to produce customized outputs
with revision instructions. The pipeline initiates by prompting the LLM to
produce a draft translation, followed by the utilization of automatic retrieval
or human feedback as supervision signals to enhance the LLM's translation
through in-context learning. The human-machine interactions generated in this
pipeline are also stored in an external database to expand the in-context
retrieval database, enabling us to leverage human supervision in an offline
setting. We evaluate the proposed pipeline using GPT-3.5-turbo API on five
domain-specific benchmarks for German-English translation. The results
demonstrate the effectiveness of the pipeline in tailoring in-domain
translations and improving translation performance compared to direct
translation. Additionally, we discuss the results from the following
perspectives: 1) the effectiveness of different in-context retrieval methods;
2) the construction of a retrieval database under low-resource scenarios; 3)
the observed domain differences; 4) the quantitative analysis of linguistic
statistics; and 5) the qualitative analysis of translation cases. The code and
data are available at https://github.com/NLP2CT/HIL-MT/.
"
ConstitutionMaker: Interactively Critiquing Large Language Models by  Converting Feedback into Principles,Savvas Petridis,http://arxiv.org/pdf/2310.15428v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15428v1.pdf,"  Large language model (LLM) prompting is a promising new approach for users to
create and customize their own chatbots. However, current methods for steering
a chatbot's outputs, such as prompt engineering and fine-tuning, do not support
users in converting their natural feedback on the model's outputs to changes in
the prompt or model. In this work, we explore how to enable users to
interactively refine model outputs through their feedback, by helping them
convert their feedback into a set of principles (i.e. a constitution) that
dictate the model's behavior. From a formative study, we (1) found that users
needed support converting their feedback into principles for the chatbot and
(2) classified the different principle types desired by users. Inspired by
these findings, we developed ConstitutionMaker, an interactive tool for
converting user feedback into principles, to steer LLM-based chatbots. With
ConstitutionMaker, users can provide either positive or negative feedback in
natural language, select auto-generated feedback, or rewrite the chatbot's
response; each mode of feedback automatically generates a principle that is
inserted into the chatbot's prompt. In a user study with 14 participants, we
compare ConstitutionMaker to an ablated version, where users write their own
principles. With ConstitutionMaker, participants felt that their principles
could better guide the chatbot, that they could more easily convert their
feedback into principles, and that they could write principles more
efficiently, with less mental demand. ConstitutionMaker helped users identify
ways to improve the chatbot, formulate their intuitive responses to the model
into feedback, and convert this feedback into specific and clear principles.
Together, these findings inform future tools that support the interactive
critiquing of LLM outputs.
"
Few-shot learning for sentence pair classification and its applications  in software engineering,Robert Kraig Helmeczi,http://arxiv.org/pdf/2306.08058v1.pdf,2023-06-13,['cs.se'],2306.08058v1.pdf,"  Few-shot learning-the ability to train models with access to limited data-has
become increasingly popular in the natural language processing (NLP) domain, as
large language models such as GPT and T0 have been empirically shown to achieve
high performance in numerous tasks with access to just a handful of labeled
examples. Smaller language models such as BERT and its variants have also been
shown to achieve strong performance with just a handful of labeled examples
when combined with few-shot learning algorithms like pattern-exploiting
training (PET) and SetFit. The focus of this work is to investigate the
performance of alternative few-shot learning approaches with BERT-based models.
Specifically, vanilla fine-tuning, PET and SetFit are compared for numerous
BERT-based checkpoints over an array of training set sizes. To facilitate this
investigation, applications of few-shot learning are considered in software
engineering. For each task, high-performance techniques and their associated
model checkpoints are identified through detailed empirical analysis. Our
results establish PET as a strong few-shot learning approach, and our analysis
shows that with just a few hundred labeled examples it can achieve performance
near that of fine-tuning on full-sized data sets.
"
FewCLUE: A Chinese Few-shot Learning Evaluation Benchmark,Liang Xu,http://arxiv.org/pdf/2107.07498v2.pdf,2021-07-15,"['cs.cl', 'cs.ai']",2107.07498v2.pdf,"  Pretrained Language Models (PLMs) have achieved tremendous success in natural
language understanding tasks. While different learning schemes -- fine-tuning,
zero-shot, and few-shot learning -- have been widely explored and compared for
languages such as English, there is comparatively little work in Chinese to
fairly and comprehensively evaluate and compare these methods, which hinders
cumulative progress. In this paper, we introduce the Chinese Few-shot Learning
Evaluation Benchmark (FewCLUE), the first comprehensive few-shot evaluation
benchmark in Chinese. It includes nine tasks, ranging from single-sentence and
sentence-pair classification tasks to machine reading comprehension tasks. We
systematically evaluate five state-of-the-art (SOTA) few-shot learning methods
(including PET, ADAPET, LM-BFF, P-tuning and EFL), and compare their
performance with fine-tuning and zero-shot learning schemes on the newly
constructed FewCLUE benchmark. Experimental results reveal that: 1) The effect
of different few-shot learning methods is sensitive to the pre-trained model to
which the methods are applied; 2) PET and P-tuning achieve the best overall
performance with RoBERTa and ERNIE respectively. Our benchmark is used in the
few-shot learning contest of NLPCC 2021. In addition, we provide a
user-friendly toolkit, as well as an online leaderboard to help facilitate
further progress on Chinese few-shot learning. We provide a baseline
performance for different learning methods as a reference for future research.
"
Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning,Jishnu Jaykumar P,http://arxiv.org/pdf/2307.03073v2.pdf,2023-07-06,"['cs.cv', 'cs.ro']",2307.03073v2.pdf,"  We propose a novel framework for few-shot learning by leveraging large-scale
vision-language models such as CLIP. Motivated by the unimodal prototypical
networks for few-shot learning, we introduce PROTO-CLIP that utilizes image
prototypes and text prototypes for few-shot learning. Specifically, PROTO-CLIP
adapts the image encoder and text encoder in CLIP in a joint fashion using
few-shot examples. The two encoders are used to compute prototypes of image
classes for classification. During adaptation, we propose aligning the image
and text prototypes of corresponding classes. Such a proposed alignment is
beneficial for few-shot classification due to the contributions from both types
of prototypes. We demonstrate the effectiveness of our method by conducting
experiments on benchmark datasets for few-shot learning as well as in the real
world for robot perception.
"
A Survey on Recent Named Entity Recognition and Relation Classification  Methods with Focus on Few-Shot Learning Approaches,Sakher Alqaaidi,http://arxiv.org/pdf/2310.19055v1.pdf,2023-10-29,['cs.cl'],2310.19055v1.pdf,"  Named entity recognition and relation classification are key stages for
extracting information from unstructured text. Several natural language
processing applications utilize the two tasks, such as information retrieval,
knowledge graph construction and completion, question answering and other
domain-specific applications, such as biomedical data mining. We present a
survey of recent approaches in the two tasks with focus on few-shot learning
approaches. Our work compares the main approaches followed in the two
paradigms. Additionally, we report the latest metric scores in the two tasks
with a structured analysis that considers the results in the few-shot learning
scope.
"
True Few-Shot Learning with Prompts -- A Real-World Perspective,Timo Schick,http://arxiv.org/pdf/2111.13440v1.pdf,2021-11-26,['cs.cl'],2111.13440v1.pdf,"  Prompt-based approaches are strong at few-shot learning. However, Perez et
al. (2021) have recently cast doubt on their performance because they had
difficulty getting good results in a ""true"" few-shot setting in which prompts
and hyperparameters cannot be tuned on a dev set. In view of this, we conduct
an extensive study of PET, a method that combines textual instructions with
example-based finetuning. We show that, if correctly configured, PET performs
strongly in a true few-shot setting, i.e., without a dev set. Crucial for this
strong performance is PET's ability to intelligently handle multiple prompts.
We then put our findings to a real-world test by running PET on RAFT, a
benchmark of tasks taken directly from realistic NLP applications for which no
labeled dev or test sets are available. PET achieves a new state of the art on
RAFT and performs close to non-expert humans for 7 out of 11 tasks. These
results demonstrate that prompt-based learners like PET excel at true few-shot
learning and underpin our belief that learning from instructions will play an
important role on the path towards human-like few-shot learning capabilities.
"
Improving In-Context Few-Shot Learning via Self-Supervised Training,Mingda Chen,http://arxiv.org/pdf/2205.01703v2.pdf,2022-05-03,['cs.cl'],2205.01703v2.pdf,"  Self-supervised pretraining has made few-shot learning possible for many NLP
tasks. But the pretraining objectives are not typically adapted specifically
for in-context few-shot learning. In this paper, we propose to use
self-supervision in an intermediate training stage between pretraining and
downstream few-shot usage with the goal of teaching the model to perform
in-context few-shot learning. We propose and evaluate four self-supervised
objectives on two benchmarks. We find that the intermediate self-supervision
stage produces models that outperform strong baselines. An ablation study shows
that several factors affect the downstream performance, such as the amount of
training data and the diversity of the self-supervised objectives.
Human-annotated cross-task supervision and self-supervision are complementary.
Qualitative analysis suggests that the self-supervised-trained models are
better at following task requirements.
"
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained  Models,Mengzhou Xia,http://arxiv.org/pdf/2205.15223v3.pdf,2022-05-30,"['cs.cl', 'cs.lg']",2205.15223v3.pdf,"  Pre-trained masked language models successfully perform few-shot learning by
formulating downstream tasks as text infilling. However, as a strong
alternative in full-shot settings, discriminative pre-trained models like
ELECTRA do not fit into the paradigm. In this work, we adapt prompt-based
few-shot learning to ELECTRA and show that it outperforms masked language
models in a wide range of tasks. ELECTRA is pre-trained to distinguish if a
token is generated or original. We naturally extend that to prompt-based
few-shot learning by training to score the originality of the target options
without introducing new parameters. Our method can be easily adapted to tasks
involving multi-token predictions without extra computation overhead. Analysis
shows that ELECTRA learns distributions that align better with downstream
tasks.
"
Revisiting Few-Shot Learning from a Causal Perspective,Guoliang Lin,http://arxiv.org/pdf/2209.13816v1.pdf,2022-09-28,"['cs.lg', 'cs.ai']",2209.13816v1.pdf,"  Few-shot learning with N-way K-shot scheme is an open challenge in machine
learning. Many approaches have been proposed to tackle this problem, e.g., the
Matching Networks and CLIP-Adapter. Although these approaches have shown
significant progress, the mechanism of why these methods succeed has not been
well explored. In this paper, we interpret these few-shot learning methods via
causal mechanism. We show that the existing approaches can be viewed as
specific forms of front-door adjustment, which is to remove the effects of
confounders. Based on this, we introduce a general causal method for few-shot
learning, which considers not only the relationship between examples but also
the diversity of representations. Experimental results demonstrate the
superiority of our proposed method in few-shot classification on various
benchmark datasets. Code is available in the supplementary material.
"
In-context Learning Distillation: Transferring Few-shot Learning Ability  of Pre-trained Language Models,Yukun Huang,http://arxiv.org/pdf/2212.10670v1.pdf,2022-12-20,"['cs.cl', 'cs.lg']",2212.10670v1.pdf,"  Given the success with in-context learning of large pre-trained language
models, we introduce in-context learning distillation to transfer in-context
few-shot learning ability from large models to smaller models. We propose to
combine in-context learning objectives with language modeling objectives to
distill both the ability to read in-context examples and task knowledge to the
smaller models. We perform in-context learning distillation under two different
few-shot learning paradigms: Meta In-context Tuning (Meta-ICT) and Multitask
In-context Tuning (Multitask-ICT). Multitask-ICT performs better on multitask
few-shot learning but also requires more computation than Meta-ICT. Our method
shows consistent improvements for both Meta-ICT and Multitask-ICT on two
benchmarks: LAMA and CrossFit. Our extensive experiments and analysis reveal
that in-context learning objectives and language modeling objectives are
complementary under the Multitask-ICT paradigm. In-context learning objectives
achieve the best performance when combined with language modeling objectives.
"
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained  Language Models?,Zihao Jiang,http://arxiv.org/pdf/2307.04114v1.pdf,2023-07-09,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.mm']",2307.04114v1.pdf,"  Few-shot learning aims to train models that can be generalized to novel
classes with only a few samples. Recently, a line of works has been proposed to
enhance few-shot learning with accessible semantic information from class
names. However, these works focus on improving existing modules such as visual
prototypes and feature extractors of the standard few-shot learning framework.
This limits the full potential use of semantic information. In this paper, we
propose a novel few-shot learning framework that uses pre-trained language
models based on contrastive learning. To address the challenge of alignment
between visual features and textual embeddings obtained from a text-based
pre-trained language model, we carefully design the textual branch of our
framework and introduce a metric module to generalize the cosine similarity.
For better transferability, we let the metric module adapt to different
few-shot tasks and adopt MAML to train the model via bi-level optimization.
Moreover, we conduct extensive experiments on multiple benchmarks to
demonstrate the effectiveness of our method.
"
Reordering Examples Helps during Priming-based Few-Shot Learning,Sawan Kumar,http://arxiv.org/pdf/2106.01751v1.pdf,2021-06-03,['cs.cl'],2106.01751v1.pdf,"  The ability to learn from limited data, or few-shot learning, is a desirable
and often critical requirement for NLP systems. While many existing methods do
poorly at learning from a handful of examples, large pretrained language models
have recently been shown to be efficient few-shot learners. One approach to
few-shot learning, which does not require finetuning of model parameters, is to
augment the language model's input with priming text which is typically
constructed using task specific descriptions and examples. In this work, we
further explore priming-based few-shot learning, with focus on using examples
as prompts. We show that presenting examples in the right order is key for
generalization. We introduce PERO (Prompting with Examples in the Right Order),
where we formulate few-shot learning as search over the set of permutations of
the training examples. We show that PERO can learn to generalize efficiently
using as few as 10 examples, in contrast to existing approaches. While the
newline token is a natural choice for separating the examples in the prompt, we
show that learning a new separator token can potentially provide further gains
in performance. We demonstrate the effectiveness of the proposed method on the
tasks of sentiment classification, natural language inference and fact
retrieval. Finally, we analyze the learned prompts to reveal novel insights,
including the idea that two training examples in the right order alone can
provide competitive performance for sentiment classification and natural
language inference.
"
CLUES: Few-Shot Learning Evaluation in Natural Language Understanding,Subhabrata Mukherjee,http://arxiv.org/pdf/2111.02570v1.pdf,2021-11-04,"['cs.cl', 'cs.lg']",2111.02570v1.pdf,"  Most recent progress in natural language understanding (NLU) has been driven,
in part, by benchmarks such as GLUE, SuperGLUE, SQuAD, etc. In fact, many NLU
models have now matched or exceeded ""human-level"" performance on many tasks in
these benchmarks. Most of these benchmarks, however, give models access to
relatively large amounts of labeled data for training. As such, the models are
provided far more data than required by humans to achieve strong performance.
That has motivated a line of work that focuses on improving few-shot learning
performance of NLU models. However, there is a lack of standardized evaluation
benchmarks for few-shot NLU resulting in different experimental settings in
different papers. To help accelerate this line of work, we introduce CLUES
(Constrained Language Understanding Evaluation Standard), a benchmark for
evaluating the few-shot learning capabilities of NLU models. We demonstrate
that while recent models reach human performance when they have access to large
amounts of labeled data, there is a huge gap in performance in the few-shot
setting for most tasks. We also demonstrate differences between alternative
model families and adaptation techniques in the few shot setting. Finally, we
discuss several principles and choices in designing the experimental settings
for evaluating the true few-shot learning performance and suggest a unified
standardized approach to few-shot learning evaluation. We aim to encourage
research on NLU models that can generalize to new tasks with a small number of
examples. Code and data for CLUES are available at
https://github.com/microsoft/CLUES.
"
Tuning Language Models as Training Data Generators for  Augmentation-Enhanced Few-Shot Learning,Yu Meng,http://arxiv.org/pdf/2211.03044v2.pdf,2022-11-06,"['cs.cl', 'cs.lg']",2211.03044v2.pdf,"  Recent studies have revealed the intriguing few-shot learning ability of
pretrained language models (PLMs): They can quickly adapt to a new task when
fine-tuned on a small amount of labeled data formulated as prompts, without
requiring abundant task-specific annotations. Despite their promising
performance, most existing few-shot approaches that only learn from the small
training set still underperform fully supervised training by nontrivial
margins. In this work, we study few-shot learning with PLMs from a different
perspective: We first tune an autoregressive PLM on the few-shot samples and
then use it as a generator to synthesize a large amount of novel training
samples which augment the original training set. To encourage the generator to
produce label-discriminative samples, we train it via weighted maximum
likelihood where the weight of each token is automatically adjusted based on a
discriminative meta-learning objective. A classification PLM can then be
fine-tuned on both the few-shot and the synthetic samples with regularization
for better generalization and stability. Our approach FewGen achieves an
overall better result across seven classification tasks of the GLUE benchmark
than existing few-shot learning methods, improving no-augmentation methods by
5+ average points, and outperforming augmentation methods by 3+ average points.
"
Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary  Data,Alon Albalak,http://arxiv.org/pdf/2302.00674v4.pdf,2023-02-01,"['cs.lg', 'cs.cl']",2302.00674v4.pdf,"  Few-shot learning is valuable in many real-world applications, but learning a
generalizable model without overfitting to the few labeled datapoints is
challenging. In this work, we focus on Few-shot Learning with Auxiliary Data
(FLAD), a training paradigm that assumes access to auxiliary data during
few-shot learning in hopes of improving generalization. Previous works have
proposed automated methods for mixing auxiliary and target data, but these
methods typically scale linearly (or worse) with the number of auxiliary
datasets, limiting their practicality. In this work we relate FLAD to the
explore-exploit dilemma that is central to the multi-armed bandit setting and
derive algorithms whose computational complexity is independent of the number
of auxiliary datasets, allowing us to scale to 100x more auxiliary datasets
than prior methods. We propose two algorithms -- EXP3-FLAD and UCB1-FLAD -- and
compare them with prior FLAD methods that either explore or exploit, finding
that the combination of exploration and exploitation is crucial. Through
extensive experimentation we find that our methods outperform all pre-existing
FLAD methods by 4% and lead to the first 3 billion parameter language models
that outperform the 175 billion parameter GPT-3. Overall, our work suggests
that the discovery of better, more efficient mixing strategies for FLAD may
provide a viable path towards substantially improving generalization in
few-shot learning.
"
Universal Few-shot Learning of Dense Prediction Tasks with Visual Token  Matching,Donggyun Kim,http://arxiv.org/pdf/2303.14969v1.pdf,2023-03-27,"['cs.cv', 'cs.ai']",2303.14969v1.pdf,"  Dense prediction tasks are a fundamental class of problems in computer
vision. As supervised methods suffer from high pixel-wise labeling cost, a
few-shot learning solution that can learn any dense task from a few labeled
images is desired. Yet, current few-shot learning methods target a restricted
set of tasks such as semantic segmentation, presumably due to challenges in
designing a general and unified model that is able to flexibly and efficiently
adapt to arbitrary tasks of unseen semantics. We propose Visual Token Matching
(VTM), a universal few-shot learner for arbitrary dense prediction tasks. It
employs non-parametric matching on patch-level embedded tokens of images and
labels that encapsulates all tasks. Also, VTM flexibly adapts to any task with
a tiny amount of task-specific parameters that modulate the matching algorithm.
We implement VTM as a powerful hierarchical encoder-decoder architecture
involving ViT backbones where token matching is performed at multiple feature
hierarchies. We evaluate VTM on a challenging variant of the Taskonomy dataset
and observe that it robustly few-shot learns various unseen dense prediction
tasks. Surprisingly, it is competitive with fully supervised baselines using
only 10 labeled examples of novel tasks (0.004% of full supervision) and
sometimes outperforms using 0.1% of full supervision. Codes are available at
https://github.com/GitGyun/visual_token_matching.
"
FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained  Models in Few-Shot Learning,Kun Song,http://arxiv.org/pdf/2310.15105v3.pdf,2023-10-23,['cs.cv'],2310.15105v3.pdf,"  Due to the limited availability of data, existing few-shot learning methods
trained from scratch fail to achieve satisfactory performance. In contrast,
large-scale pre-trained models such as CLIP demonstrate remarkable few-shot and
zero-shot capabilities. To enhance the performance of pre-trained models for
downstream tasks, fine-tuning the model on downstream data is frequently
necessary. However, fine-tuning the pre-trained model leads to a decrease in
its generalizability in the presence of distribution shift, while the limited
number of samples in few-shot learning makes the model highly susceptible to
overfitting. Consequently, existing methods for fine-tuning few-shot learning
primarily focus on fine-tuning the model's classification head or introducing
additional structure. In this paper, we introduce a fine-tuning approach termed
Feature Discrimination Alignment (FD-Align). Our method aims to bolster the
model's generalizability by preserving the consistency of spurious features
across the fine-tuning process. Extensive experimental results validate the
efficacy of our approach for both ID and OOD tasks. Once fine-tuned, the model
can seamlessly integrate with existing methods, leading to performance
improvements. Our code can be found at https://github.com/skingorz/FD-Align.
"
Few-Shot Learning with Localization in Realistic Settings,Davis Wertheimer,http://arxiv.org/pdf/1904.08502v2.pdf,2019-04-09,"['cs.cv', 'cs.ai', 'cs.lg', 'stat.ml']",1904.08502v2.pdf,"  Traditional recognition methods typically require large,
artificially-balanced training classes, while few-shot learning methods are
tested on artificially small ones. In contrast to both extremes, real world
recognition problems exhibit heavy-tailed class distributions, with cluttered
scenes and a mix of coarse and fine-grained class distinctions. We show that
prior methods designed for few-shot learning do not work out of the box in
these challenging conditions, based on a new ""meta-iNat"" benchmark. We
introduce three parameter-free improvements: (a) better training procedures
based on adapting cross-validation to meta-learning, (b) novel architectures
that localize objects using limited bounding box annotations before
classification, and (c) simple parameter-free expansions of the feature space
based on bilinear pooling. Together, these improvements double the accuracy of
state-of-the-art models on meta-iNat while generalizing to prior benchmarks,
complex neural architectures, and settings with substantial domain shift.
"
Model-Agnostic Graph Regularization for Few-Shot Learning,Ethan Shen,http://arxiv.org/pdf/2102.07077v1.pdf,2021-02-14,"['cs.lg', 'cs.cv']",2102.07077v1.pdf,"  In many domains, relationships between categories are encoded in the
knowledge graph. Recently, promising results have been achieved by
incorporating knowledge graph as side information in hard classification tasks
with severely limited data. However, prior models consist of highly complex
architectures with many sub-components that all seem to impact performance. In
this paper, we present a comprehensive empirical study on graph embedded
few-shot learning. We introduce a graph regularization approach that allows a
deeper understanding of the impact of incorporating graph information between
labels. Our proposed regularization is widely applicable and model-agnostic,
and boosts the performance of any few-shot learning model, including
fine-tuning, metric-based, and optimization-based meta-learning. Our approach
improves the performance of strong base learners by up to 2% on Mini-ImageNet
and 6.7% on ImageNet-FS, outperforming state-of-the-art graph embedded methods.
Additional analyses reveal that graph regularizing models result in a lower
loss for more difficult tasks, such as those with fewer shots and less
informative support examples.
"
Uniform Sampling over Episode Difficulty,Sébastien M. R. Arnold,http://arxiv.org/pdf/2108.01662v2.pdf,2021-08-03,"['cs.lg', 'cs.ai', 'cs.cv']",2108.01662v2.pdf,"  Episodic training is a core ingredient of few-shot learning to train models
on tasks with limited labelled data. Despite its success, episodic training
remains largely understudied, prompting us to ask the question: what is the
best way to sample episodes? In this paper, we first propose a method to
approximate episode sampling distributions based on their difficulty. Building
on this method, we perform an extensive analysis and find that sampling
uniformly over episode difficulty outperforms other sampling schemes, including
curriculum and easy-/hard-mining. As the proposed sampling method is algorithm
agnostic, we can leverage these insights to improve few-shot learning
accuracies across many episodic training algorithms. We demonstrate the
efficacy of our method across popular few-shot learning datasets, algorithms,
network architectures, and protocols.
"
CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented  Dialog Systems,Fei Mi,http://arxiv.org/pdf/2109.04645v4.pdf,2021-09-10,"['cs.cl', 'cs.lg']",2109.04645v4.pdf,"  As labeling cost for different modules in task-oriented dialog (ToD) systems
is high, a major challenge in practice is to learn different tasks with the
least amount of labeled data. Recently, prompting methods over pre-trained
language models (PLMs) have shown promising results for few-shot learning in
ToD. To better utilize the power of PLMs, this paper proposes Comprehensive
Instruction (CINS) that exploits PLMs with extra task-specific instructions. We
design a schema (definition, constraint, prompt) of instructions and their
customized realizations for three important downstream tasks in ToD, i.e.
intent classification, dialog state tracking, and natural language generation.
A sequence-to-sequence model (T5) is adopted to solve these three tasks in a
unified framework. Extensive experiments are conducted on these ToD tasks in
realistic few-shot learning scenarios with small validation data. Empirical
results demonstrate that the proposed CINS approach consistently improves
techniques that finetune PLMs with raw input or short prompts.
"
Exploring Prompt-based Few-shot Learning for Grounded Dialog Generation,Chujie Zheng,http://arxiv.org/pdf/2109.06513v2.pdf,2021-09-14,['cs.cl'],2109.06513v2.pdf,"  Dialog models can be greatly strengthened through grounding on various
external information, but grounded dialog corpora are usually not naturally
accessible. In this work, we focus on the few-shot learning for grounded dialog
generation (GDG). We first propose a simple prompting method for GDG tasks,
where different constructs of model input, such as the grounding source and the
conversation context, are distinguished through continuous or discrete prompts.
On three typical GDG tasks, we empirically demonstrate and analyze in-depth the
effectiveness of our method. We then conduct extensive experiments to
thoroughly investigate how our prompting method works with different
pre-trained models. We show that prompted language models perform superiorly to
conversational models, and further analyze various factors that influence the
effects of prompting. Overall, our work introduces a prompt-based perspective
to the few-shot learning for GDG tasks, and provides valuable findings and
insights for future research.
"
Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning,Sungyong Baik,http://arxiv.org/pdf/2110.03909v2.pdf,2021-10-08,"['cs.lg', 'cs.cv']",2110.03909v2.pdf,"  In few-shot learning scenarios, the challenge is to generalize and perform
well on new unseen examples when only very few labeled examples are available
for each task. Model-agnostic meta-learning (MAML) has gained popularity as
one of the representative few-shot learning methods for its flexibility and
applicability to diverse problems. However, MAML and its variants often resort
to a simple loss function without any auxiliary loss function or regularization
terms that can help achieve better generalization. The problem is that
each application and task may require a different auxiliary loss function,
especially when tasks are diverse and distinct. Instead of attempting to
hand-design an auxiliary loss function for each application and task, we
introduce a new meta-learning framework with a loss function that adapts to
each task. Our proposed framework, named Meta-Learning with Task-Adaptive Loss
Function (MeTAL), demonstrates the effectiveness and the flexibility across
various domains, such as few-shot classification and few-shot regression.
"
Ontology-enhanced Prompt-tuning for Few-shot Learning,Hongbin Ye,http://arxiv.org/pdf/2201.11332v1.pdf,2022-01-27,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2201.11332v1.pdf,"  Few-shot Learning (FSL) is aimed to make predictions based on a limited
number of samples. Structured data such as knowledge graphs and ontology
libraries has been leveraged to benefit the few-shot setting in various tasks.
However, the priors adopted by existing methods suffer from challenges of
missing knowledge, knowledge noise, and knowledge heterogeneity, which hinder
the performance for few-shot learning. In this study, we explore knowledge
injection for FSL with pre-trained language models and propose
ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop the
ontology transformation based on the external knowledge graph to address the
missing-knowledge issue, completing and converting structured knowledge into
text. We further introduce span-sensitive knowledge injection via a visible
matrix to select informative knowledge to handle the knowledge noise issue. To
bridge the gap between knowledge and text, we propose a collective training
algorithm to optimize representations jointly. We evaluate our proposed
OntoPrompt in three tasks, including relation extraction, event extraction, and
knowledge graph completion, with eight datasets. Experimental results
demonstrate that our approach can obtain better few-shot performance than
baselines.
"
Impossible Triangle: What's Next for Pre-trained Language Models?,Chenguang Zhu,http://arxiv.org/pdf/2204.06130v2.pdf,2022-04-13,['cs.cl'],2204.06130v2.pdf,"  Recent developments of large-scale pre-trained language models (PLMs) have
significantly improved the capability of models in various NLP tasks, in terms
of performance after task-specific fine-tuning and zero-shot / few-shot
learning. However, many such models come with a dauntingly huge size that
few institutions can afford to pre-train, fine-tune or even deploy, while
moderate-sized models usually lack strong generalized few-shot learning
capabilities. In this paper, we first elaborate on the current obstacles of using
PLM models in terms of the Impossible Triangle: 1) moderate model size, 2)
state-of-the-art few-shot learning capability, and 3) state-of-the-art
fine-tuning capability. We argue that all existing PLM models lack one or more
properties from the Impossible Triangle. To remedy these missing properties of
PLMs, various techniques have been proposed, such as knowledge distillation,
data augmentation, and prompt learning, which inevitably bring additional work
to the application of PLMs in real scenarios. We then offer insights into
future research directions of PLMs to achieve the Impossible Triangle, and
break down the task into several key phases.
"
A Study on Prompt-based Few-Shot Learning Methods for Belief State  Tracking in Task-oriented Dialog Systems,Debjoy Saha,http://arxiv.org/pdf/2204.08167v1.pdf,2022-04-18,"['cs.cl', 'cs.ai']",2204.08167v1.pdf,"  We tackle the Dialogue Belief State Tracking (DST) problem of task-oriented
conversational systems. Recent approaches to this problem leveraging
Transformer-based models have yielded great results. However, training these
models is expensive, both in terms of computational resources and time.
Additionally, collecting high quality annotated dialogue datasets remains a
challenge for researchers because of the extensive annotation required for
training these models. Driven by the recent success of pre-trained language
models and prompt-based learning, we explore prompt-based few-shot learning for
Dialogue Belief State Tracking. We formulate the DST problem as a 2-stage
prompt-based language modelling task, train language models for both stages,
and present a comprehensive empirical analysis of their separate and joint
performance. We demonstrate the potential of prompt-based methods in few-shot
learning for DST and provide directions for future improvement.
"
How to Prompt? Opportunities and Challenges of Zero- and Few-Shot  Learning for Human-AI Interaction in Creative Applications of Generative  Models,Hai Dang,http://arxiv.org/pdf/2209.01390v1.pdf,2022-09-03,"['cs.hc', 'cs.cl', 'h.5.2; i.2.7']",2209.01390v1.pdf,"  Deep generative models have the potential to fundamentally change the way we
create high-fidelity digital content but are often hard to control. Prompting a
generative model is a promising recent development that in principle enables
end-users to creatively leverage zero-shot and few-shot learning to assign new
tasks to an AI ad-hoc, simply by writing them down. However, for the majority
of end-users writing effective prompts is currently largely a trial and error
process. To address this, we discuss the key opportunities and challenges for
interactive creative applications that use prompting as a new paradigm for
Human-AI interaction. Based on our analysis, we propose four design goals for
user interfaces that support prompting. We illustrate these with concrete UI
design sketches, focusing on the use case of creative writing. The research
community in HCI and AI can take these as starting points to develop adequate
user interfaces for models capable of zero- and few-shot learning.
"
On Measuring the Intrinsic Few-Shot Hardness of Datasets,Xinran Zhao,http://arxiv.org/pdf/2211.09113v1.pdf,2022-11-16,['cs.cl'],2211.09113v1.pdf,"  While advances in pre-training have led to dramatic improvements in few-shot
learning of NLP tasks, there is limited understanding of what drives successful
few-shot adaptation in datasets. In particular, given a new dataset and a
pre-trained model, what properties of the dataset make it \emph{few-shot
learnable} and are these properties independent of the specific adaptation
techniques used? We consider an extensive set of recent few-shot learning
methods, and show that their performance across a large number of datasets is
highly correlated, showing that few-shot hardness may be intrinsic to datasets,
for a given pre-trained model. To estimate intrinsic few-shot hardness, we then
propose a simple and lightweight metric called ""Spread"" that captures the
intuition that few-shot learning is made possible by exploiting feature-space
invariances between training and test samples. Our metric better accounts for
few-shot hardness compared to existing notions of hardness, and is ~8-100x
faster to compute.
"
Differentiable Entailment for Parameter Efficient Few Shot Learning,Ethan Kim,http://arxiv.org/pdf/2301.13345v1.pdf,2023-01-31,['cs.cl'],2301.13345v1.pdf,"  Few-shot learning allows pre-trained language models to adapt to downstream
tasks while using a limited number of training examples. However, practical
applications are limited when all model parameters must be optimized. In this
work we apply a new technique for parameter efficient few shot learning while
adopting a strict definition of parameter efficiency. Our training method
combines 1) intermediate training by reformulating natural language tasks as
entailment tasks \cite{wang_entailment_2021} and 2) differentiable optimization
of template and label tokens \cite{zhang_differentiable_2021}. We quantify the
tradeoff between parameter efficiency and performance in the few-shot regime
and propose a simple model-agnostic approach that can be extended to any task.
By achieving competitive performance while only optimizing 3\% of a model's
parameters and allowing for batched inference, we allow for more efficient
practical deployment of models.
"
MerA: Merging Pretrained Adapters For Few-Shot Learning,Shwai He,http://arxiv.org/pdf/2308.15982v1.pdf,2023-08-30,['cs.cl'],2308.15982v1.pdf,"  Adapter tuning, which updates only a few parameters, has become a mainstream
method for fine-tuning pretrained language models to downstream tasks. However,
it often yields subpar results in few-shot learning. AdapterFusion, which
assembles pretrained adapters using composition layers tailored to specific
tasks, is a possible solution but significantly increases trainable parameters
and deployment costs. Despite this, our preliminary study reveals that even
single adapters can outperform AdapterFusion in few-shot learning, urging us to
propose \textbf{\texttt{Merging Pretrained Adapters}} (MerA) that efficiently
incorporates pretrained adapters into a single model through model fusion.
Extensive experiments on two PLMs demonstrate that MerA achieves substantial
improvements compared to both single adapters and AdapterFusion. To further
enhance the capacity of MerA, we also introduce a simple yet effective
technique, referred to as the ""\textit{same-track}"" setting, that merges
adapters from the same track of pretraining tasks. With the implementation of
the ""\textit{same-track}"" setting, we observe even more impressive gains,
surpassing the performance of both full fine-tuning and adapter tuning by a
substantial margin, e.g., 3.5\% in MRPC and 5.0\% in MNLI.
"
Meta-Adapter: An Online Few-shot Learner for Vision-Language Model,Cheng Cheng,http://arxiv.org/pdf/2311.03774v1.pdf,2023-11-07,['cs.cv'],2311.03774v1.pdf,"  The contrastive vision-language pre-training, known as CLIP, demonstrates
remarkable potential in perceiving open-world visual concepts, enabling
effective zero-shot image recognition. Nevertheless, few-shot learning methods
based on CLIP typically require offline fine-tuning of the parameters on
few-shot samples, resulting in longer inference time and the risk of
over-fitting in certain domains. To tackle these challenges, we propose the
Meta-Adapter, a lightweight residual-style adapter, to refine the CLIP features
guided by the few-shot samples in an online manner. With a few training
samples, our method can enable effective few-shot learning capabilities and
generalize to unseen data or tasks without additional fine-tuning, achieving
competitive performance and high efficiency. Without bells and whistles, our
approach outperforms the state-of-the-art online few-shot learning method by an
average of 3.6\% on eight image classification datasets with higher inference
speed. Furthermore, our model is simple and flexible, serving as a
plug-and-play module directly applicable to downstream tasks. Without further
fine-tuning, Meta-Adapter obtains notable performance improvements in
open-vocabulary object detection and segmentation tasks.
"
Pushing the Limits of Simple Pipelines for Few-Shot Learning: External  Data and Fine-Tuning Make a Difference,Shell Xu Hu,http://arxiv.org/pdf/2204.07305v1.pdf,2022-04-15,"['cs.cv', 'cs.lg']",2204.07305v1.pdf,"  Few-shot learning (FSL) is an important and topical problem in computer
vision that has motivated extensive research into numerous methods spanning
from sophisticated meta-learning methods to simple transfer learning baselines.
We seek to push the limits of a simple-but-effective pipeline for more
realistic and practical settings of few-shot image classification. To this end,
we explore few-shot learning from the perspective of neural network
architecture, as well as a three stage pipeline of network updates under
different data supplies, where unsupervised external data is considered for
pre-training, base categories are used to simulate few-shot tasks for
meta-training, and the scarcely labelled data of a novel task is taken for
fine-tuning. We investigate questions such as: (1) How does pre-training on
external data benefit FSL? (2) How can state-of-the-art transformer
architectures be exploited? (3) How does fine-tuning mitigate domain shift?
Ultimately, we show
that a simple transformer-based pipeline yields surprisingly good performance
on standard benchmarks such as Mini-ImageNet, CIFAR-FS, CDFSL and Meta-Dataset.
Our code and demo are available at https://hushell.github.io/pmf.
"
"Multi-Level Fine-Tuning, Data Augmentation, and Few-Shot Learning for  Specialized Cyber Threat Intelligence",Markus Bayer,http://arxiv.org/pdf/2207.11076v1.pdf,2022-07-22,"['cs.cr', 'cs.cl']",2207.11076v1.pdf,"  Gathering cyber threat intelligence from open sources is becoming
increasingly important for maintaining and achieving a high level of security
as systems become larger and more complex. However, these open sources are
often subject to information overload. It is therefore useful to apply machine
learning models that condense the amount of information to what is necessary.
Yet, previous studies and applications have shown that existing classifiers are
not able to extract specific information about emerging cybersecurity events
due to their low generalization ability. Therefore, we propose a system to
overcome this problem by training a new classifier for each new incident. Since
this requires a lot of labelled data using standard training methods, we
combine three different low-data regime techniques - transfer learning, data
augmentation, and few-shot learning - to train a high-quality classifier from
very few labelled instances. We evaluated our approach using a novel dataset
derived from the Microsoft Exchange Server data breach of 2021 which was
labelled by three experts. Our findings reveal an increase in F1 score of more
than 21 points compared to standard training methods and more than 18 points
compared to a state-of-the-art method in few-shot learning. Furthermore, the
classifier trained with this method on only 32 instances is less than 5
F1-score points worse than a classifier trained with 1800 instances.
"
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning,Tianxiang Sun,http://arxiv.org/pdf/2210.07565v3.pdf,2022-10-14,['cs.cl'],2210.07565v3.pdf,"  Prompt tuning is a parameter-efficient approach to adapting pre-trained
language models to downstream tasks. Although prompt tuning has been shown to
match the performance of full model tuning when training data is sufficient, it
tends to struggle in few-shot learning settings. In this paper, we present
Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot
learning. MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks.
On downstream tasks, the pre-trained prompts are selectively activated and
combined, leading to strong compositional generalization to unseen tasks. To
bridge the gap between pre-training and fine-tuning, we formulate upstream and
downstream tasks into a unified machine reading comprehension task. Extensive
experiments under two learning paradigms, i.e., gradient descent and black-box
tuning, show that MP2 significantly outperforms prompt tuning, full model
tuning, and prior prompt pre-training methods in few-shot settings. In
addition, we demonstrate that MP2 can achieve surprisingly fast and strong
adaptation to downstream tasks by merely learning 8 parameters to combine the
pre-trained modular prompts.
"
Few-shot Classification with Hypersphere Modeling of Prototypes,Ning Ding,http://arxiv.org/pdf/2211.05319v1.pdf,2022-11-10,"['cs.lg', 'cs.cl', 'cs.cv']",2211.05319v1.pdf,"  Metric-based meta-learning is one of the de facto standards in few-shot
learning. It is composed of representation learning and metric calculation
designs. Previous works construct class representations in different ways,
varying from mean output embedding to covariance and distributions. However,
using embeddings in space lacks expressivity and cannot capture class
information robustly, while complex statistical modeling poses difficulties for
metric design. In this work, we use tensor fields (``areas'') to model classes
from the geometrical perspective for few-shot learning. We present a simple and
effective method, dubbed hypersphere prototypes (HyperProto), where class
information is represented by hyperspheres with dynamic sizes with two sets of
learnable parameters: the hypersphere's center and the radius. Extending from
points to areas, hyperspheres are much more expressive than embeddings.
Moreover, it is more convenient to perform metric-based classification with
hypersphere prototypes than statistical modeling, as we only need to calculate
the distance from a data point to the surface of the hypersphere. Following
this idea, we also develop two variants of prototypes under other measurements.
Extensive experiments and analysis on few-shot learning tasks across NLP and CV
and comparison with 20+ competitive baselines demonstrate the effectiveness of
our approach.
"
StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot  Learning,Yuqian Fu,http://arxiv.org/pdf/2302.09309v2.pdf,2023-02-18,['cs.cv'],2302.09309v2.pdf,"  Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that
tackles few-shot learning across different domains. It aims at transferring
prior knowledge learned on the source dataset to novel target datasets. The
CD-FSL task is especially challenged by the huge domain gap between different
datasets. Critically, such a domain gap actually comes from the changes of
visual styles, and wave-SAN empirically shows that spanning the style
distribution of the source data helps alleviate this issue. However, wave-SAN
simply swaps styles of two images. Such a vanilla operation makes the generated
styles ``real'' and ``easy'', which still fall into the original set of the
source styles. Thus, inspired by vanilla adversarial learning, a novel
model-agnostic meta Style Adversarial training (StyleAdv) method together with
a novel style adversarial attack method is proposed for CD-FSL. Particularly,
our style attack method synthesizes both ``virtual'' and ``hard'' adversarial
styles for model training. This is achieved by perturbing the original style
with the signed style gradients. By continually attacking styles and forcing
the model to recognize these challenging adversarial styles, our model is
gradually robust to the visual styles, thus boosting the generalization ability
for novel target datasets. Besides the typical CNN-based backbone, we also
employ our StyleAdv method on a large-scale pretrained vision transformer.
Extensive experiments conducted on eight diverse target datasets show the
effectiveness of our method. Whether built upon ResNet or ViT, we achieve the
new state of the art for CD-FSL. Code is available at
https://github.com/lovelyqian/StyleAdv-CDFSL.
"
Few-Shot Learning with Visual Distribution Calibration and Cross-Modal  Distribution Alignment,Runqi Wang,http://arxiv.org/pdf/2305.11439v1.pdf,2023-05-19,['cs.cv'],2305.11439v1.pdf,"  Pre-trained vision-language models have inspired much research on few-shot
learning. However, with only a few training images, there exist two crucial
problems: (1) the visual feature distributions are easily distracted by
class-irrelevant information in images, and (2) the alignment between the
visual and language feature distributions is difficult. To deal with the
distraction problem, we propose a Selective Attack module, which consists of
trainable adapters that generate spatial attention maps of images to guide the
attacks on class-irrelevant image areas. By messing up these areas, the
critical features are captured and the visual distributions of image features
are calibrated. To better align the visual and language feature distributions
that describe the same object class, we propose a cross-modal distribution
alignment module, in which we introduce a vision-language prototype for each
class to align the distributions, and adopt the Earth Mover's Distance (EMD) to
optimize the prototypes. For efficient computation, the upper bound of EMD is
derived. In addition, we propose an augmentation strategy to increase the
diversity of the images and the text prompts, which can reduce overfitting to
the few-shot training images. Extensive experiments on 11 datasets demonstrate
that our method consistently outperforms prior arts in few-shot learning. The
implementation code will be available at https://github.com/bhrqw/SADA.
"
Federated Few-shot Learning for Cough Classification with Edge Devices,Ngan Dao Hoang,http://arxiv.org/pdf/2309.01076v1.pdf,2023-09-03,"['cs.lg', 'cs.sd', 'eess.as']",2309.01076v1.pdf,"  Automatically classifying cough sounds is one of the most critical tasks for
the diagnosis and treatment of respiratory diseases. However, collecting a
large labeled cough dataset is challenging, mainly due to high labor costs,
data scarcity, and privacy concerns. In this work, our aim is to
develop a framework that can effectively perform cough classification even in
situations where large amounts of cough data are not available, while also addressing
privacy concerns. Specifically, we formulate a new problem to tackle these
challenges and adopt few-shot learning and federated learning to design a novel
framework, termed F2LCough, for solving the newly formulated problem. We
illustrate the superiority of our method compared with other approaches on
COVID-19 Thermal Face & Cough dataset, in which F2LCough achieves an average
F1-Score of 86%. Our results show the feasibility of few-shot learning combined
with federated learning to build a classification model of cough sounds. This
new methodology is able to classify cough sounds in data-scarce situations and
maintain privacy properties. The outcomes of this work can be a fundamental
framework for building support systems for the detection and diagnosis of
cough-related diseases.
"
Few-Shot Bot: Prompt-Based Learning for Dialogue Systems,Andrea Madotto,http://arxiv.org/pdf/2110.08118v1.pdf,2021-10-15,"['cs.cl', 'cs.ai']",2110.08118v1.pdf,"  Learning to converse using only a few examples is a great challenge in
conversational AI. The current best conversational models, which are either
good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL),
are language models (LMs) fine-tuned on large conversational datasets. Training
these models is expensive, both in terms of computational resources and time,
and it is hard to keep them up to date with new conversational skills. A simple
yet unexplored solution is prompt-based few-shot learning (Brown et al. 2020)
which does not require gradient-based fine-tuning but instead uses a few
examples in the LM context as the only source of learning. In this paper, we
explore prompt-based few-shot learning in dialogue tasks. We benchmark LMs of
different sizes in nine response generation tasks, which include four
knowledge-grounded tasks, a task-oriented generation task, three open-chat
tasks, and controlled stylistic generation, and five conversational parsing
tasks, which include dialogue state tracking, graph path generation, persona
information extraction, document retrieval, and internet query generation. The
current largest released LM (GPT-J-6B) using prompt-based few-shot learning,
and thus requiring no training, achieves competitive performance to fully
trained state-of-the-art models. Moreover, we propose a novel prompt-based
few-shot classifier, which also does not require any fine-tuning, to select the
most appropriate prompt given a dialogue history. Finally, by combining the
power of prompt-based few-shot learning and a Skill Selector, we create an
end-to-end chatbot named the Few-Shot Bot (FSB), which automatically selects
the most appropriate conversational skill, queries different knowledge bases or
the internet, and uses the retrieved knowledge to generate a human-like
response, all using only few dialogue examples per skill.
"
"A Neural Network Solves, Explains, and Generates University Math  Problems by Program Synthesis and Few-Shot Learning at Human Level",Iddo Drori,http://arxiv.org/pdf/2112.15594v4.pdf,2021-12-31,"['cs.lg', 'cs.ai']",2112.15594v4.pdf,"  We demonstrate that a neural network pre-trained on text and fine-tuned on
code solves mathematics course problems, explains solutions, and generates new
questions at a human level. We automatically synthesize programs using few-shot
learning and OpenAI's Codex transformer and execute them to solve course
problems at 81% automatic accuracy. We curate a new dataset of questions from
MIT's largest mathematics courses (Single Variable and Multivariable Calculus,
Differential Equations, Introduction to Probability and Statistics, Linear
Algebra, and Mathematics for Computer Science) and Columbia University's
Computational Linear Algebra. We solve questions from a MATH dataset (on
Prealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number
Theory, and Precalculus), the latest benchmark of advanced mathematics problems
designed to assess mathematical reasoning. We randomly sample questions and
generate solutions with multiple modalities, including numbers, equations, and
plots. The latest GPT-3 language model pre-trained on text automatically solves
only 18.8% of these university questions using zero-shot learning and 30.8%
using few-shot learning and the most recent chain of thought prompting. In
contrast, program synthesis with few-shot learning using Codex fine-tuned on
code generates programs that automatically solve 81% of these questions. Our
approach improves the previous state-of-the-art automatic solution accuracy on
the benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate the
quality and difficulty of generated questions. This work is the first to
automatically solve university-level mathematics course questions at a human
level and the first work to explain and generate university-level mathematics
course questions at scale, a milestone for higher education.
"
Is Support Set Diversity Necessary for Meta-Learning?,Amrith Setlur,http://arxiv.org/pdf/2011.14048v2.pdf,2020-11-28,"['cs.lg', 'stat.ml']",2011.14048v2.pdf,"  Meta-learning is a popular framework for learning with limited data in which
an algorithm is produced by training over multiple few-shot learning tasks. For
classification problems, these tasks are typically constructed by sampling a
small number of support and query examples from a subset of the classes. While
conventional wisdom is that task diversity should improve the performance of
meta-learning, in this work we find evidence to the contrary: we propose a
modification to traditional meta-learning approaches in which we keep the
support sets fixed across tasks, thus reducing task diversity. Surprisingly, we
find that not only does this modification not result in adverse effects, it
almost always improves the performance for a variety of datasets and
meta-learning methods. We also provide several initial analyses to understand
this phenomenon. Our work serves to: (i) more closely investigate the effect of
support set construction for the problem of meta-learning, and (ii) suggest a
simple, general, and competitive baseline for few-shot learning.
"
Detecting Hate Speech with GPT-3,Ke-Li Chiu,http://arxiv.org/pdf/2103.12407v4.pdf,2021-03-23,['cs.cl'],2103.12407v4.pdf,"  Sophisticated language models such as OpenAI's GPT-3 can generate hateful
text that targets marginalized groups. Given this capacity, we are interested
in whether large language models can be used to identify hate speech and
classify text as sexist or racist. We use GPT-3 to identify sexist and racist
text passages with zero-, one-, and few-shot learning. We find that with zero-
and one-shot learning, GPT-3 can identify sexist or racist text with an average
accuracy between 55 per cent and 67 per cent, depending on the category of text
and type of learning. With few-shot learning, the model's accuracy can be as
high as 85 per cent. Large language models have a role to play in hate speech
detection, and with further development they could eventually be used to
counter hate speech.
"
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in  NLP,Qinyuan Ye,http://arxiv.org/pdf/2104.08835v2.pdf,2021-04-18,"['cs.cl', 'cs.lg']",2104.08835v2.pdf,"  Humans can learn a new language task efficiently with only few examples, by
leveraging their knowledge obtained when learning prior tasks. In this paper,
we explore whether and how such cross-task generalization ability can be
acquired, and further applied to build better few-shot learners across diverse
NLP tasks. We introduce CrossFit, a problem setup for studying cross-task
generalization ability, which standardizes seen/unseen task partitions, data
access during different learning stages, and the evaluation protocols. To
instantiate different seen/unseen task partitions in CrossFit and facilitate
in-depth analysis, we present the NLP Few-shot Gym, a repository of 160 diverse
few-shot NLP tasks created from open-access NLP datasets and converted to a
unified text-to-text format. Our analysis reveals that the few-shot learning
ability on unseen tasks can be improved via an upstream learning stage using a
set of seen tasks. We also observe that the selection of upstream learning
tasks can significantly influence few-shot performance on unseen tasks, calling
for further analysis of task similarity and transferability.
"
Entailment as Few-Shot Learner,Sinong Wang,http://arxiv.org/pdf/2104.14690v1.pdf,2021-04-29,"['cs.cl', 'cs.ai']",2104.14690v1.pdf,"  Large pre-trained language models (LMs) have demonstrated remarkable ability
as few-shot learners. However, their success hinges largely on scaling model
parameters to a degree that makes it challenging to train and serve. In this
paper, we propose a new approach, named EFL, that can turn small LMs into
better few-shot learners. The key idea of this approach is to reformulate a
potential NLP task into an entailment one, and then fine-tune the model with as
little as 8 examples. We further demonstrate our proposed method can be: (i)
naturally combined with an unsupervised contrastive learning-based data
augmentation method; (ii) easily extended to multilingual few-shot learning. A
systematic evaluation on 18 standard NLP tasks demonstrates that this approach
improves the various existing SOTA few-shot learning methods by 12\%, and
yields competitive few-shot performance with 500 times larger models, such as
GPT-3.
"
True Few-Shot Learning with Language Models,Ethan Perez,http://arxiv.org/pdf/2105.11447v1.pdf,2021-05-24,"['cs.cl', 'cs.lg', 'stat.ml']",2105.11447v1.pdf,"  Pretrained language models (LMs) perform well on many tasks even when
learning from a few examples, but prior work uses many held-out examples to
tune various aspects of learning, such as hyperparameters, training objectives,
and natural language templates (""prompts""). Here, we evaluate the few-shot
ability of LMs when such held-out examples are unavailable, a setting we call
true few-shot learning. We test two model selection criteria, cross-validation
and minimum description length, for choosing LM prompts and hyperparameters in
the true few-shot setting. On average, both marginally outperform random
selection and greatly underperform selection based on held-out examples.
Moreover, selection criteria often prefer models that perform significantly
worse than randomly-selected ones. We find similar results even when taking
into account our uncertainty in a model's true performance during selection, as
well as when varying the amount of computation and number of examples used for
selection. Overall, our findings suggest that prior work significantly
overestimated the true few-shot ability of LMs given the difficulty of few-shot
model selection.
"
"Generate, Annotate, and Learn: NLP with Synthetic Text",Xuanli He,http://arxiv.org/pdf/2106.06168v3.pdf,2021-06-11,['cs.lg'],2106.06168v3.pdf,"  This paper studies the use of language models as a source of synthetic
unlabeled text for NLP. We formulate a general framework called ``generate,
annotate, and learn (GAL)'' to take advantage of synthetic text within
knowledge distillation, self-training, and few-shot learning applications. To
generate high-quality task-specific text, we either fine-tune LMs on inputs
from the task of interest, or prompt large LMs with few examples. We use the
best available classifier to annotate synthetic text with soft pseudo labels
for knowledge distillation and self-training, and use LMs to obtain hard labels
for few-shot learning. We train new supervised models on the combination of
labeled and pseudo-labeled data, which results in significant gains across
several applications. We investigate key components of GAL and present
theoretical and empirical arguments against the use of class-conditional LMs to
generate synthetic labeled text instead of unlabeled text. GAL achieves new
state-of-the-art knowledge distillation results for 6-layer transformers on the
GLUE leaderboard.
"
Multimodal Few-Shot Learning with Frozen Language Models,Maria Tsimpoukelli,http://arxiv.org/pdf/2106.13884v2.pdf,2021-06-25,"['cs.cv', 'cs.cl', 'cs.lg']",2106.13884v2.pdf,"  When trained at sufficient scale, auto-regressive language models exhibit the
notable ability to learn a new language task after being prompted with just a
few examples. Here, we present a simple, yet effective, approach for
transferring this few-shot learning ability to a multimodal setting (vision and
language). Using aligned image and caption data, we train a vision encoder to
represent each image as a sequence of continuous embeddings, such that a
pre-trained, frozen language model prompted with this prefix generates the
appropriate caption. The resulting system is a multimodal few-shot learner,
with the surprising ability to learn a variety of new tasks when conditioned on
examples, represented as a sequence of multiple interleaved image and text
embeddings. We demonstrate that it can rapidly learn words for new objects and
novel visual categories, do visual question-answering with only a handful of
examples, and make use of outside knowledge, by measuring a single model on a
variety of established and new benchmarks.
"
Revisiting Self-Training for Few-Shot Learning of Language Model,Yiming Chen,http://arxiv.org/pdf/2110.01256v1.pdf,2021-10-04,['cs.cl'],2110.01256v1.pdf,"  As unlabeled data carry rich task-relevant information, they are proven
useful for few-shot learning of language models. The question is how to
effectively make use of such data. In this work, we revisit the self-training
technique for language model fine-tuning and present a state-of-the-art
prompt-based few-shot learner, SFLM. Given two views of a text sample via weak
and strong augmentation techniques, SFLM generates a pseudo label on the weakly
augmented version. Then, the model predicts the same pseudo label when
fine-tuned with the strongly augmented version. This simple approach is shown
to outperform other state-of-the-art supervised and semi-supervised
counterparts on six sentence classification and six sentence-pair
classification benchmarking tasks. In addition, SFLM only relies on a few
in-domain unlabeled data. We conduct a comprehensive analysis to demonstrate
the robustness of our proposed approach under various settings, including
augmentation techniques, model scale, and few-shot knowledge transfer across
tasks.
"
In-Context Learning for Few-Shot Dialogue State Tracking,Yushi Hu,http://arxiv.org/pdf/2203.08568v3.pdf,2022-03-16,['cs.cl'],2203.08568v3.pdf,"  Collecting and annotating task-oriented dialogues is time-consuming and
costly; thus, zero- and few-shot learning could greatly benefit dialogue state
tracking (DST). In this work, we propose an in-context learning (ICL) framework
for zero-shot and few-shot DST, where a large pre-trained language
model (LM) takes a test instance and a few exemplars as input, and directly
decodes the dialogue state without any parameter updates. To better leverage a
tabular domain description in the LM prompt, we reformulate DST into a
text-to-SQL problem. We also propose a novel approach to retrieve annotated
dialogues as exemplars. Empirical results on MultiWOZ show that our method
IC-DST substantially outperforms previous fine-tuned state-of-the-art models in
few-shot settings. In addition, we test IC-DST in zero-shot settings, in which
the model only takes a fixed task instruction as input, finding that it
outperforms previous zero-shot methods by a large margin.
"
WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen  Language Models,Heting Gao,http://arxiv.org/pdf/2203.15863v2.pdf,2022-03-29,"['eess.as', 'cs.ai', 'cs.cl']",2203.15863v2.pdf,"  Large-scale auto-regressive language models pretrained on massive text have
demonstrated their impressive ability to perform new natural language tasks
with only a few text examples, without the need for fine-tuning. Recent studies
further show that such a few-shot learning ability can be extended to the
text-image setting by training an encoder to encode the images into embeddings
functioning like the text embeddings of the language model. Interested in
exploring the possibility of transferring the few-shot learning ability to the
audio-text setting, we propose a novel speech understanding framework,
WavPrompt, where we finetune a wav2vec model to generate a sequence of audio
embeddings understood by the language model. We show that WavPrompt is a
few-shot learner that can perform speech understanding tasks better than a
naive text baseline. We conduct detailed ablation studies on different
components and hyperparameters to empirically identify the best model
configuration. In addition, we conduct a non-speech understanding experiment to
show WavPrompt can extract more information than just the transcriptions. Code
is available at https://github.com/Hertin/WavPrompt
"
Enabling Classifiers to Make Judgements Explicitly Aligned with Human  Values,Yejin Bang,http://arxiv.org/pdf/2210.07652v1.pdf,2022-10-14,"['cs.cl', 'cs.ai']",2210.07652v1.pdf,"  Many NLP classification tasks, such as sexism/racism detection or toxicity
detection, are based on human values. Yet, human values can vary under diverse
cultural conditions. Therefore, we introduce a framework for value-aligned
classification that performs prediction based on explicitly written human
values in the command. Along with the task, we propose a practical approach
that distills value-aligned knowledge from large-scale language models (LLMs)
to construct value-aligned classifiers in two steps. First, we generate
value-aligned training data from LLMs by prompt-based few-shot learning. Next,
we fine-tune smaller classification models with the generated data for the
task. Empirical results show that our VA-Models surpass multiple baselines by
at least 15.56% on the F1-score, including few-shot learning with OPT-175B and
existing text augmentation methods. We suggest that using classifiers with
explicit human value input improves both inclusivity & explainability in AI.
"
Aligning MAGMA by Few-Shot Learning and Finetuning,Jean-Charles Layoun,http://arxiv.org/pdf/2210.14161v1.pdf,2022-10-18,"['cs.cv', 'cs.ai']",2210.14161v1.pdf,"  The goal of vision-language modeling is to allow models to tie language
understanding with visual inputs. The aim of this paper is to evaluate and
align the Visual Language Model (VLM) called Multimodal Augmentation of
Generative Models through Adapter-based finetuning (MAGMA) with human values.
MAGMA is a VLM that is capable of image captioning and visual
question-answering. We will evaluate its alignment in three different
scenarios. To begin, we assess MAGMA's out-of-the-box alignment through the
checkpoint provided by Hugging Face. Then, we measure if few-shot learning
manages to improve the results. Finally, we finetune the model on aligned
examples and evaluate its behavior.
"
GPS: Genetic Prompt Search for Efficient Few-shot Learning,Hanwei Xu,http://arxiv.org/pdf/2210.17041v1.pdf,2022-10-31,['cs.cl'],2210.17041v1.pdf,"  Prompt-based techniques have demonstrated great potential for improving the
few-shot generalization of pretrained language models. However, their
performance heavily relies on the manual design of prompts and thus requires a
lot of human efforts. In this paper, we introduce Genetic Prompt Search (GPS)
to improve few-shot learning with prompts, which utilizes a genetic algorithm
to automatically search for high-performing prompts. GPS is gradient-free and
requires no update of model parameters but only a small validation set.
Experiments on diverse datasets proved the effectiveness of GPS, which
outperforms manual prompts by a large margin of 2.6 points. Our method is also
better than other parameter-efficient tuning methods such as prompt tuning.
"
MEAL: Stable and Active Learning for Few-Shot Prompting,Abdullatif Köksal,http://arxiv.org/pdf/2211.08358v2.pdf,2022-11-15,['cs.cl'],2211.08358v2.pdf,"  Few-shot classification has made great strides due to foundation models that,
through priming and prompting, are highly effective few-shot learners. However,
this approach has high variance both across different sets of few shots (data
selection) and across different finetuning runs (run variability). This is
problematic not only because it impedes the fair comparison of different
approaches, but especially because it makes few-shot learning too unreliable
for many real-world applications. To alleviate these issues, we make two
contributions for more stable and effective few-shot learning: First, we
propose novel ensembling methods and show that they substantially reduce run
variability. Second, we introduce a new active learning (AL) criterion for data
selection and present the first AL-based approach specifically tailored towards
prompt-based learning. In our experiments, we show that our combined method,
MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning),
improves overall performance of prompt-based finetuning by 2.3 points on five
diverse tasks.
"
Few-shot Query-Focused Summarization with Prefix-Merging,Ruifeng Yuan,http://arxiv.org/pdf/2211.16164v1.pdf,2022-11-29,"['cs.cl', 'cs.ai']",2211.16164v1.pdf,"  Query-focused summarization has been considered as an important extension for
text summarization. It aims to generate a concise highlight for a given query.
Different from text summarization, query-focused summarization has long been
plagued by the problem of lacking high-quality large-scale datasets. In this
paper, we investigate whether we can integrate and transfer the
knowledge of text summarization and question answering to assist the few-shot
learning in query-focused summarization. Here, we propose prefix-merging, a
prefix-based pretraining strategy for few-shot learning in query-focused
summarization. Drawing inspiration from prefix-tuning, we are able to
integrate the task knowledge from text summarization and question answering
into a properly designed prefix and apply the merged prefix to query-focused
summarization. With only a small amount of trainable parameters, prefix-merging
outperforms fine-tuning on query-focused summarization. We further discuss the
influence of different prefix designs and propose a visualized explanation for
how prefix-merging works.
"
JASMINE: Arabic GPT Models for Few-Shot Learning,El Moatez Billah Nagoudi,http://arxiv.org/pdf/2212.10755v2.pdf,2022-12-21,['cs.cl'],2212.10755v2.pdf,"  Scholarship on generative pretraining (GPT) remains acutely Anglocentric,
leaving serious gaps in our understanding of the whole class of autoregressive
models. For example, we have little knowledge about the potential of these
models and their societal impacts in diverse linguistic and cultural settings.
We alleviate this issue for Arabic, a wide collection of languages and
dialectal varieties spoken by more than 400 million people, by introducing
JASMINE. JASMINE is a suite of powerful Arabic autoregressive Transformer
language models ranging in size between 300 million-6.7 billion parameters
pretrained on a large and diverse dataset (~ 235 GB of text). We also carefully
design and release a comprehensive benchmark for both automated and human
evaluation of Arabic autoregressive models, with coverage of potential social
biases, harms, and toxicity. Using our novel benchmark, we evaluate JASMINE
extensively showing powerful performance intrinsically as well as in few-shot
learning on a wide range of NLP tasks. We aim to responsibly release our models
and evaluation benchmark with interested researchers, along with code for
experimenting with them.
"
Log Parsing with Prompt-based Few-shot Learning,Van-Hoang Le,http://arxiv.org/pdf/2302.07435v1.pdf,2023-02-15,['cs.se'],2302.07435v1.pdf,"  Logs generated by large-scale software systems provide crucial information
for engineers to understand the system status and diagnose problems of the
systems. Log parsing, which converts raw log messages into structured data, is
the first step to enabling automated log analytics. Existing log parsers
extract the common part as log templates using statistical features. However,
these log parsers often fail to identify the correct templates and parameters
because: 1) they often overlook the semantic meaning of log messages, and 2)
they require domain-specific knowledge for different log datasets. To address
the limitations of existing methods, in this paper, we propose LogPPT to
capture the patterns of templates using prompt-based few-shot learning. LogPPT
utilises a novel prompt tuning method to recognise keywords and parameters
based on a few labelled log data. In addition, an adaptive random sampling
algorithm is designed to select a small yet diverse training set. We have
conducted extensive experiments on 16 public log datasets. The experimental
results show that LogPPT is effective and efficient for log parsing.
"
Conversation Style Transfer using Few-Shot Learning,Shamik Roy,http://arxiv.org/pdf/2302.08362v2.pdf,2023-02-16,['cs.cl'],2302.08362v2.pdf,"  Conventional text style transfer approaches focus on sentence-level style
transfer without considering contextual information, and the style is described
with attributes (e.g., formality). When applying style transfer in
conversations such as task-oriented dialogues, existing approaches suffer from
these limitations as context can play an important role and the style
attributes are often difficult to define in conversations. In this paper, we
introduce conversation style transfer as a few-shot learning problem, where the
model learns to perform style transfer by observing only a few example
dialogues in the target style. We propose a novel in-context learning approach
to solve the task with style-free dialogues as a pivot. Human evaluation shows
that by incorporating multi-turn context, the model is able to match the target
style while having better appropriateness and semantic correctness compared to
utterance/sentence-level style transfer. Additionally, we show that
conversation style transfer can also benefit downstream tasks. For example, in
multi-domain intent classification tasks, the F1 scores improve after
transferring the style of training data to match the style of the test data.
"
STUNT: Few-shot Tabular Learning with Self-generated Tasks from  Unlabeled Tables,Jaehyun Nam,http://arxiv.org/pdf/2303.00918v1.pdf,2023-03-02,"['cs.lg', 'cs.ai']",2303.00918v1.pdf,"  Learning with few labeled tabular samples is often an essential requirement
for industrial machine learning applications as varieties of tabular data
suffer from high annotation costs or have difficulties in collecting new
samples for novel tasks. Despite its importance, such a problem is quite
under-explored in the field of tabular learning, and existing few-shot learning
schemes from other domains are not straightforward to apply, mainly due to the
heterogeneous characteristics of tabular data. In this paper, we propose a
simple yet effective framework for few-shot semi-supervised tabular learning,
coined Self-generated Tasks from UNlabeled Tables (STUNT). Our key idea is to
self-generate diverse few-shot tasks by treating randomly chosen columns as a
target label. We then employ a meta-learning scheme to learn generalizable
knowledge with the constructed tasks. Moreover, we introduce an unsupervised
validation scheme for hyperparameter search (and early stopping) by generating
a pseudo-validation set using STUNT from unlabeled data. Our experimental
results demonstrate that our simple framework brings significant performance
gain under various tabular few-shot learning benchmarks, compared to prior
semi- and self-supervised baselines. Code is available at
https://github.com/jaehyun513/STUNT.
"
CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained  Language Models,Tianhao Li,http://arxiv.org/pdf/2304.10946v1.pdf,2023-04-18,"['cs.cl', 'cs.lg', 'q-bio.bm']",2304.10946v1.pdf,"  Large pre-trained language models (LLMs) have been shown to have significant
potential in few-shot learning across various fields, even with minimal
training data. However, their ability to generalize to unseen tasks in more
complex fields, such as biology, has yet to be fully evaluated. LLMs can offer
a promising alternative approach for biological inference, particularly in
cases where structured data and sample size are limited, by extracting prior
knowledge from text corpora. Our proposed few-shot learning approach uses LLMs
to predict the synergy of drug pairs in rare tissues that lack structured data
and features. Our experiments, which involved seven rare tissues from different
cancer types, demonstrated that the LLM-based prediction model achieved
significant accuracy with very few or zero samples. Our proposed model, the
CancerGPT (with $\sim$ 124M parameters), was even comparable to the larger
fine-tuned GPT-3 model (with $\sim$ 175B parameters). Our research is the first
to tackle drug pair synergy prediction in rare tissues with limited data. We
are also the first to utilize an LLM-based prediction model for biological
reaction prediction tasks.
"
Automated Few-shot Classification with Instruction-Finetuned Language  Models,Rami Aly,http://arxiv.org/pdf/2305.12576v2.pdf,2023-05-21,['cs.cl'],2305.12576v2.pdf,"  A particularly successful class of approaches for few-shot learning combines
language models with prompts -- hand-crafted task descriptions that complement
data samples. However, designing prompts by hand for each task commonly
requires domain knowledge and substantial guesswork. We observe, in the context
of classification tasks, that instruction finetuned language models exhibit
remarkable prompt robustness, and we subsequently propose a simple method to
eliminate the need for handcrafted prompts, named AuT-Few. This approach
consists of (i) a prompt retrieval module that selects suitable task
instructions from the instruction-tuning knowledge base, and (ii) the
generation of two distinct, semantically meaningful, class descriptions and a
selection mechanism via cross-validation. Over $12$ datasets, spanning $8$
classification tasks, we show that AuT-Few outperforms current state-of-the-art
few-shot learning methods. Moreover, AuT-Few is the best ranking method across
datasets on the RAFT few-shot benchmark. Notably, these results are achieved
without task-specific handcrafted prompts on unseen tasks.
"
Active Learning Principles for In-Context Learning with Large Language  Models,Katerina Margatina,http://arxiv.org/pdf/2305.14264v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14264v1.pdf,"  The remarkable advancements in large language models (LLMs) have
significantly enhanced the performance in few-shot learning settings. By using
only a small number of labeled examples, referred to as demonstrations, LLMs
can effectively grasp the task at hand through in-context learning. However,
the process of selecting appropriate demonstrations has received limited
attention in prior work. This paper addresses the issue of identifying the most
informative demonstrations for few-shot learning by approaching it as a
pool-based Active Learning (AL) problem over a single iteration. Our objective
is to investigate how AL algorithms can serve as effective demonstration
selection methods for in-context learning. We compare various standard AL
algorithms based on uncertainty, diversity, and similarity, and consistently
observe that the latter outperforms all other methods, including random
sampling. Notably, uncertainty sampling, despite its success in conventional
supervised learning scenarios, performs poorly in this context. Our extensive
experimentation involving a diverse range of GPT and OPT models across $24$
classification and multi-choice tasks, coupled with thorough analysis,
unambiguously demonstrates that in-context example selection through AL
prioritizes high-quality examples that exhibit low uncertainty and bear
similarity to the test examples.
"
Zero-shot Approach to Overcome Perturbation Sensitivity of Prompts,Mohna Chakraborty,http://arxiv.org/pdf/2305.15689v2.pdf,2023-05-25,"['cs.cl', 'cs.ai']",2305.15689v2.pdf,"  Recent studies have demonstrated that natural-language prompts can help to
leverage the knowledge learned by pre-trained language models for the binary
sentence-level sentiment classification task. Specifically, these methods
utilize few-shot learning settings to fine-tune the sentiment classification
model using manual or automatically generated prompts. However, the performance
of these methods is sensitive to the perturbations of the utilized prompts.
Furthermore, these methods depend on a few labeled instances for automatic
prompt generation and prompt ranking. This study aims to find high-quality
prompts for the given task in a zero-shot setting. Given a base prompt, our
proposed approach automatically generates multiple prompts similar to the base
prompt employing positional, reasoning, and paraphrasing techniques and then
ranks the prompts using a novel metric. We empirically demonstrate that the
top-ranked prompts are high-quality and significantly outperform the base
prompt and the prompts generated using few-shot learning for the binary
sentence-level sentiment classification task.
"
FLamE: Few-shot Learning from Natural Language Explanations,Yangqiaoyu Zhou,http://arxiv.org/pdf/2306.08042v1.pdf,2023-06-13,"['cs.cl', 'cs.ai']",2306.08042v1.pdf,"  Natural language explanations have the potential to provide rich information
that in principle guides model reasoning. Yet, recent work by Lampinen et al.
(2022) has shown limited utility of natural language explanations in improving
classification. To effectively learn from explanations, we present FLamE, a
two-stage few-shot learning framework that first generates explanations using
GPT-3, and then finetunes a smaller model (e.g., RoBERTa) with generated
explanations. Our experiments on natural language inference demonstrate
effectiveness over strong baselines, increasing accuracy by 17.6% over GPT-3
Babbage and 5.7% over GPT-3 Davinci in e-SNLI. Despite improving classification
performance, human evaluation surprisingly reveals that the majority of
generated explanations does not adequately justify classification decisions.
Additional analyses point to the important role of label-specific cues (e.g.,
""not know"" for the neutral label) in generated explanations.
"
Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners,Jihyeon Lee,http://arxiv.org/pdf/2307.14856v1.pdf,2023-07-27,"['cs.cl', 'cs.ai']",2307.14856v1.pdf,"  In-context learning, which offers substantial advantages over fine-tuning, is
predominantly observed in decoder-only models, while encoder-decoder (i.e.,
seq2seq) models excel in methods that rely on weight updates. Recently, a few
studies have demonstrated the feasibility of few-shot learning with seq2seq
models; however, this has been limited to tasks that align well with the
seq2seq architecture, such as summarization and translation. Inspired by these
initial studies, we provide a first-ever extensive experiment comparing the
in-context few-shot learning capabilities of decoder-only and encoder-decoder
models on a broad range of tasks. Furthermore, we propose two methods to more
effectively elicit in-context learning ability in seq2seq models:
objective-aligned prompting and a fusion-based approach. Remarkably, our
approach outperforms a decoder-only model that is six times larger and exhibits
significant performance improvements compared to conventional seq2seq models
across a variety of settings. We posit that, with the right configuration and
prompt design, seq2seq models can be highly effective few-shot learners for a
wide spectrum of applications.
"
Prototypes-oriented Transductive Few-shot Learning with Conditional  Transport,Long Tian,http://arxiv.org/pdf/2308.03047v1.pdf,2023-08-06,['cs.cv'],2308.03047v1.pdf,"  Transductive Few-Shot Learning (TFSL) has recently attracted increasing
attention since it typically outperforms its inductive peer by leveraging
statistics of query samples. However, previous TFSL methods usually encode
a uniform prior that all the classes within query samples are equally likely,
which is biased in imbalanced TFSL and causes severe performance degradation.
  Given this pivotal issue, in this work, we propose a novel Conditional
Transport (CT) based imbalanced TFSL model called {\textbf P}rototypes-oriented
{\textbf U}nbiased {\textbf T}ransfer {\textbf M}odel (PUTM) to fully exploit
unbiased statistics of imbalanced query samples, which employs forward and
backward navigators as transport matrices to balance the prior of query samples
per class between uniform and adaptive data-driven distributions. For
efficiently transferring statistics learned by CT, we further derive a closed
form solution to refine prototypes based on MAP given the learned navigators.
The above two steps of discovering and transferring unbiased statistics follow
an iterative manner, formulating our EM-based solver.
  Experimental results on four standard benchmarks including miniImageNet,
tieredImageNet, CUB, and CIFAR-FS demonstrate superiority of our model in
class-imbalanced generalization.
"
Approximating Human-Like Few-shot Learning with GPT-based Compression,Cynthia Huang,http://arxiv.org/pdf/2308.06942v1.pdf,2023-08-14,"['cs.ai', 'cs.cl', 'cs.it', 'math.it']",2308.06942v1.pdf,"  In this work, we conceptualize the learning process as information
compression. We seek to equip generative pre-trained models with human-like
learning capabilities that enable data compression during inference. We present
a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to
approximate Kolmogorov complexity, with the aim of estimating the optimal
Information Distance for few-shot learning. We first propose using GPT as a
prior for lossless text compression, achieving a noteworthy compression ratio.
An experiment with the LLAMA2-7B backbone achieves a compression ratio of 15.5 on
enwik9. We justify the pre-training objective of GPT models by demonstrating
its equivalence to the compression length, and, consequently, its ability to
approximate the information distance for texts. Leveraging the approximated
information distance, our method allows the direct application of GPT models in
quantitative text similarity measurements. Experimental results show that our
method overall achieves superior performance compared to embedding and prompt
baselines on challenging NLP tasks, including semantic similarity, zero and
one-shot text classification, and zero-shot text ranking.
"
COCA: Classifier-Oriented Calibration for Source-Free Universal Domain  Adaptation via Textual Prototype,Xinghong Liu,http://arxiv.org/pdf/2308.10450v1.pdf,2023-08-21,['cs.cv'],2308.10450v1.pdf,"  Universal Domain Adaptation (UniDA) aims to distinguish common and private
classes between the source and target domains where domain shift exists.
Recently, due to more stringent data restrictions, researchers have introduced
Source-Free UniDA (SF-UniDA) in more realistic scenarios. SF-UniDA methods
eliminate the need for direct access to source samples when performing
adaptation to the target domain. However, existing SF-UniDA methods still
require an extensive quantity of labeled source samples to train a source
model, resulting in significant labeling costs. To tackle this issue, we
present a novel Classifier-Oriented Calibration (COCA) method, which leverages
textual prototypes to build the source model with few-shot learning.
Specifically, we propose studying few-shot learning, usually
explored for closed-set scenarios, to identify common and domain-private
classes despite a significant domain shift between source and target domains.
Essentially, we present a novel paradigm based on the vision-language model to
learn SF-UniDA and hugely reduce the labeling costs on the source domain.
Experimental results demonstrate that our approach outperforms state-of-the-art
UniDA and SF-UniDA models.
"
Evaluating the Decency and Consistency of Data Validation Tests  Generated by LLMs,Rohan Alexander,http://arxiv.org/pdf/2310.01402v1.pdf,2023-10-02,['stat.me'],2310.01402v1.pdf,"  We investigated the potential of large language models (LLMs) in developing
dataset validation tests. We carried out 96 experiments each for both GPT-3.5
and GPT-4, examining different prompt scenarios, learning modes, temperature
settings, and roles. The prompt scenarios were: 1) Asking for expectations, 2)
Asking for expectations with a given context, 3) Asking for expectations after
requesting a simulation, and 4) Asking for expectations with a provided data
sample. For learning modes, we tested: 1) zero-shot, 2) one-shot, and 3)
few-shot learning. We also tested four temperature settings: 0, 0.4, 0.6, and
1. Furthermore, two distinct roles were considered: 1) ""helpful assistant"", 2)
""expert data scientist"". To gauge consistency, every setup was tested five
times. The LLM-generated responses were benchmarked against a gold standard
suite, created by an experienced data scientist knowledgeable about the data in
question. We find there are considerable returns to the use of few-shot
learning, and that the more explicit the data setting is, the better. The
best LLM configurations complement, rather than substitute, the gold standard
results. This study underscores the value LLMs can bring to the data cleaning
and preparation stages of the data science workflow.
"
Improving generalization in large language models by learning prefix  subspaces,Louis Falissard,http://arxiv.org/pdf/2310.15793v1.pdf,2023-10-24,"['cs.lg', 'cs.ai', 'cs.cl']",2310.15793v1.pdf,"  This article focuses on large language models (LLMs) fine-tuning in the
scarce data regime (also known as the ""few-shot"" learning setting). We propose
a method to increase the generalization capabilities of LLMs based on neural
network subspaces. This optimization method, recently introduced in computer
vision, aims to improve model generalization by identifying wider local optima
through the joint optimization of an entire simplex of models in parameter
space. Its adaptation to massive, pretrained transformers, however, poses some
challenges. First, their considerable number of parameters makes it difficult
to train several models jointly, and second, their deterministic parameter
initialization schemes make them unfit for the subspace method as originally
proposed. We show in this paper that ""Parameter Efficient Fine-Tuning"" (PEFT)
methods, however, are perfectly compatible with this original approach, and
propose to learn an entire simplex of continuous prefixes. We test our method on a
variant of the GLUE benchmark adapted to the few-shot learning setting, and
show that both our contributions jointly lead to a gain in average performance
compared to state-of-the-art methods. The implementation can be found at the following
link: https://github.com/Liloulou/prefix_subspace
"
Zero-shot and Few-shot Learning with Knowledge Graphs: A Comprehensive  Survey,Jiaoyan Chen,http://arxiv.org/pdf/2112.10006v6.pdf,2021-12-18,"['cs.lg', 'cs.ai']",2112.10006v6.pdf,"  Machine learning especially deep neural networks have achieved great success
but many of them often rely on large numbers of labeled samples for supervision. As
sufficient labeled training data are not always available due to, e.g., continuously
emerging prediction targets and costly sample annotation in real-world
applications, machine learning with sample shortage is now being widely
investigated. Among all these studies, many prefer to utilize auxiliary
information including those in the form of Knowledge Graph (KG) to reduce the
reliance on labeled samples. In this survey, we have comprehensively reviewed
over 90 papers about KG-aware research for two major sample shortage settings
-- zero-shot learning (ZSL) where some classes to be predicted have no labeled
samples, and few-shot learning (FSL) where some classes to be predicted have
only a small number of labeled samples available. We first introduce
KGs used in ZSL and FSL as well as their construction methods, and then
systematically categorize and summarize KG-aware ZSL and FSL methods, dividing
them into different paradigms such as the mapping-based, the data-augmentation-based,
the propagation-based and the optimization-based. We next present different
applications, including not only KG augmented prediction tasks such as image
classification, question answering, text classification and knowledge
extraction, but also KG completion tasks, and some typical evaluation resources
for each task. We eventually discuss some challenges and open problems from
different perspectives.
"
Few-shot Learning with Multilingual Language Models,Xi Victoria Lin,http://arxiv.org/pdf/2112.10668v3.pdf,2021-12-20,"['cs.cl', 'cs.ai']",2112.10668v3.pdf,"  Large-scale generative language models such as GPT-3 are competitive few-shot
learners. While these models are known to be able to jointly represent many
different languages, their training data is dominated by English, potentially
limiting their cross-lingual generalization. In this work, we train
multilingual generative language models on a corpus covering a diverse set of
languages, and study their few- and zero-shot learning capabilities in a wide
range of tasks. Our largest model with 7.5 billion parameters sets new state of
the art in few-shot learning in more than 20 representative languages,
outperforming GPT-3 of comparable size in multilingual commonsense reasoning
(with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in
4-shot settings) and natural language inference (+5.4% in each of 0-shot and
4-shot settings). On the FLORES-101 machine translation benchmark, our model
outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while
surpassing the official supervised baseline in 45 directions. We conduct an
in-depth analysis of different multilingual prompting approaches, showing in
particular that strong few-shot learning performance across languages can be
achieved via cross-lingual transfer through both templates and demonstration
examples. Finally, we evaluate our models in social value tasks such as hate
speech detection in five languages and find they have limitations similar to
comparably sized GPT-3 models.
"
Flamingo: a Visual Language Model for Few-Shot Learning,Jean-Baptiste Alayrac,http://arxiv.org/pdf/2204.14198v2.pdf,2022-04-29,"['cs.cv', 'cs.ai', 'cs.lg']",2204.14198v2.pdf,"  Building models that can be rapidly adapted to novel tasks using only a
handful of annotated examples is an open challenge for multimodal machine
learning research. We introduce Flamingo, a family of Visual Language Models
(VLM) with this ability. We propose key architectural innovations to: (i)
bridge powerful pretrained vision-only and language-only models, (ii) handle
sequences of arbitrarily interleaved visual and textual data, and (iii)
seamlessly ingest images or videos as inputs. Thanks to their flexibility,
Flamingo models can be trained on large-scale multimodal web corpora containing
arbitrarily interleaved text and images, which is key to endow them with
in-context few-shot learning capabilities. We perform a thorough evaluation of
our models, exploring and measuring their ability to rapidly adapt to a variety
of image and video tasks. These include open-ended tasks such as visual
question-answering, where the model is prompted with a question which it has to
answer; captioning tasks, which evaluate the ability to describe a scene or an
event; and close-ended tasks such as multiple-choice visual question-answering.
For tasks lying anywhere on this spectrum, a single Flamingo model can achieve
a new state of the art with few-shot learning, simply by prompting the model
with task-specific examples. On numerous benchmarks, Flamingo outperforms
models fine-tuned on thousands of times more task-specific data.
"
"Code Generation Tools (Almost) for Free? A Study of Few-Shot,  Pre-Trained Language Models on Code",Patrick BareiĂź,http://arxiv.org/pdf/2206.01335v2.pdf,2022-06-02,"['cs.se', 'cs.lg']",2206.01335v2.pdf,"  Few-shot learning with large-scale, pre-trained language models is a powerful
way to answer questions about code, e.g., how to complete a given code example,
or even generate code snippets from scratch. The success of these models raises
the question of whether they could serve as a basis for building a wide range of code
generation tools. Traditionally, such tools are built manually and separately
for each task. Instead, few-shot learning may make it possible to obtain different tools
from a single pre-trained language model by simply providing a few examples or
a natural language description of the expected tool behavior. This paper
studies to what extent a state-of-the-art, pre-trained language model of code,
Codex, may serve this purpose. We consider three code manipulation and code
generation tasks targeted by a range of traditional tools: (i) code mutation;
(ii) test oracle generation from natural language documentation; and (iii) test
case generation. For each task, we compare few-shot learning to a manually
built tool. Our results show that the model-based tools complement (code
mutation), are on par (test oracle generation), or even outperform their
respective traditionally built tool (test case generation), while imposing far
less effort to develop them. By comparing the effectiveness of different
variants of the model-based tools, we provide insights on how to design an
appropriate input (""prompt"") to the model and what influence the size of the
model has. For example, we find that providing a small natural language
description of the code generation task is an easy way to improve predictions.
Overall, we conclude that few-shot language models are surprisingly effective,
yet there is still more work to be done, such as exploring more diverse ways of
prompting and tackling even more involved tasks.
"
From Human Days to Machine Seconds: Automatically Answering and  Generating Machine Learning Final Exams,Iddo Drori,http://arxiv.org/pdf/2206.05442v7.pdf,2022-06-11,['cs.lg'],2206.05442v7.pdf,"  A final exam in machine learning at a top institution such as MIT, Harvard,
or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a
human level, on finals available online after the models were trained, and
automatically generate new human-quality final exam questions in seconds.
Previous work has developed program synthesis and few-shot learning methods to
solve university-level problem set questions in mathematics and STEM courses.
In this work, we develop and compare methods that solve final exams, which
differ from problem sets in several ways: the questions are longer, have
multiple parts, are more complicated, and span a broader set of topics. We
curate a dataset and benchmark of questions from machine learning final exams
available online and code for answering these questions and generating new
questions. We show how to generate new questions from other questions and
course notes. For reproducibility and future research on this final exam
benchmark, we use automatic checkers for multiple-choice, numeric, and
expression-answer questions. We perform ablation studies comparing
zero-shot learning with few-shot learning and chain-of-thought prompting using
GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that
few-shot learning methods perform best. We highlight the transformative
potential of language models to streamline the writing and solution of
large-scale assessments, significantly reducing the workload from human days to
mere machine seconds. Our results suggest that rather than banning large
language models such as ChatGPT in class, instructors should teach students to
harness them by asking students meta-questions about correctness, completeness,
and originality of the responses generated, encouraging critical thinking in
academic studies.
"
Model Tuning or Prompt Tuning? A Study of Large Language Models for  Clinical Concept and Relation Extraction,Cheng Peng,http://arxiv.org/pdf/2310.06239v1.pdf,2023-10-10,"['cs.cl', 'cs.ai']",2310.06239v1.pdf,"  Objective To develop soft prompt-based learning algorithms for large language
models (LLMs), examine the shape of prompts, prompt-tuning using
frozen/unfrozen LLMs, transfer learning, and few-shot learning abilities.
Methods We developed a soft prompt-based LLM model and compared 4 training
strategies including (1) fine-tuning without prompts; (2) hard-prompt with
unfrozen LLMs; (3) soft-prompt with unfrozen LLMs; and (4) soft-prompt with
frozen LLMs. We evaluated 7 pretrained LLMs using the 4 training strategies for
clinical concept and relation extraction on two benchmark datasets. We
evaluated the transfer learning ability of the prompt-based learning algorithms
in a cross-institution setting. We also assessed the few-shot learning ability.
Results and Conclusion When LLMs are unfrozen, GatorTron-3.9B with soft
prompting achieves the best strict F1-scores of 0.9118 and 0.8604 for concept
extraction, outperforming the traditional fine-tuning and hard prompt-based
models by 0.6~3.1% and 1.2~2.9%, respectively; GatorTron-345M with soft
prompting achieves the best F1-scores of 0.8332 and 0.7488 for end-to-end
relation extraction, outperforming the other two models by 0.2~2% and
0.6~11.7%, respectively. When LLMs are frozen, small (i.e., 345 million
parameters) LLMs lag far behind unfrozen models; scaling
LLMs up to billions of parameters makes frozen LLMs competitive with unfrozen
LLMs. For cross-institute evaluation, soft prompting with a frozen
GatorTron-8.9B model achieved the best performance. This study demonstrates
that (1) machines can learn soft prompts better than humans, (2) frozen LLMs
have better few-shot learning ability and transfer learning ability to
facilitate multi-institution applications, and (3) frozen LLMs require large
models.
"
On Unifying Misinformation Detection,Nayeon Lee,http://arxiv.org/pdf/2104.05243v1.pdf,2021-04-12,"['cs.ai', 'cs.cl']",2104.05243v1.pdf,"  In this paper, we introduce UnifiedM2, a general-purpose misinformation model
that jointly models multiple domains of misinformation with a single, unified
setup. The model is trained to handle four tasks: detecting news bias,
clickbait, fake news, and verifying rumors. By grouping these tasks together,
UnifiedM2 learns a richer representation of misinformation, which leads to
state-of-the-art or comparable performance across all tasks. Furthermore, we
demonstrate that UnifiedM2's learned representation is helpful for few-shot
learning of unseen misinformation tasks/datasets and for the model's generalizability
to unseen events.
"
Discrete and Soft Prompting for Multilingual Models,Mengjie Zhao,http://arxiv.org/pdf/2109.03630v1.pdf,2021-09-08,['cs.cl'],2109.03630v1.pdf,"  It has been shown for English that discrete and soft prompting perform
strongly in few-shot learning with pretrained language models (PLMs). In this
paper, we show that discrete and soft prompting perform better than finetuning
in multilingual cases: Crosslingual transfer and in-language training of
multilingual natural language inference. For example, with 48 English training
examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely
surpassing the majority baseline (33.33%). In contrast, discrete and soft
prompting outperform finetuning, achieving 36.43% and 38.79%. We also
demonstrate good performance of prompting with training data in multiple
languages other than English.
"
Cedille: A large autoregressive French language model,Martin MĂĽller,http://arxiv.org/pdf/2202.03371v1.pdf,2022-02-07,"['cs.cl', '68t50', 'i.2.7']",2202.03371v1.pdf,"  Scaling up the size and training of autoregressive language models has
enabled novel ways of solving Natural Language Processing tasks using zero-shot
and few-shot learning. While extreme-scale language models such as GPT-3 offer
multilingual capabilities, zero-shot learning for languages other than English
remains largely unexplored. Here, we introduce Cedille, a large open-source
auto-regressive language model, specifically trained for the French language.
Our results show that Cedille outperforms existing French language models and
is competitive with GPT-3 on a range of French zero-shot benchmarks.
Furthermore, we provide an in-depth comparison of the toxicity exhibited by
these models, showing that Cedille marks an improvement in language model
safety thanks to dataset filtering.
"
Human in the loop: How to effectively create coherent topics by manually  labeling only a few documents per class,Anton Thielmann,http://arxiv.org/pdf/2212.09422v1.pdf,2022-12-19,['cs.cl'],2212.09422v1.pdf,"  Few-shot methods for accurate modeling under sparse label-settings have
improved significantly. However, the applications of few-shot modeling in
natural language processing remain solely in the field of document
classification. With recent performance improvements, supervised few-shot
methods, combined with a simple topic extraction method, pose a significant
challenge to unsupervised topic modeling methods. Our research shows that
supervised few-shot learning, combined with a simple topic extraction method,
can outperform unsupervised topic modeling techniques in terms of generating
coherent topics, even when only a few labeled documents per class are used.
"
Sentence Simplification via Large Language Models,Yutao Feng,http://arxiv.org/pdf/2302.11957v1.pdf,2023-02-23,"['cs.cl', 'cs.ai']",2302.11957v1.pdf,"  Sentence Simplification aims to rephrase complex sentences into simpler
sentences while retaining original meaning. Large Language models (LLMs) have
demonstrated the ability to perform a variety of natural language processing
tasks. However, it is not yet known whether LLMs can serve as a
high-quality sentence simplification system. In this work, we empirically
analyze the zero-/few-shot learning ability of LLMs by evaluating them on a
number of benchmark test sets. Experimental results show LLMs outperform
state-of-the-art sentence simplification methods, and are judged to be on a par
with human annotators.
"
NeuroCLIP: Neuromorphic Data Understanding by CLIP and SNN,Yufei Guo,http://arxiv.org/pdf/2306.12073v1.pdf,2023-06-21,['cs.cv'],2306.12073v1.pdf,"  Recently, the neuromorphic vision sensor has received more and more interest.
However, neuromorphic data consist of asynchronous event spikes, which are
unnatural to work with and difficult to benchmark, thus limiting deep-learning-based
understanding of ""unseen"" objects in neuromorphic data.
Zero-shot and few-shot learning via Contrastive Vision-Language Pre-training
(CLIP) have shown inspirational performance in 2D frame image recognition. To
handle ""unseen"" recognition for the neuromorphic data, in this paper, we
propose NeuroCLIP, which transfers the CLIP's 2D pre-trained knowledge to event
spikes. To improve the few-shot performance, we also provide an inter-timestep
adapter based on a spiking neural network. Our code is open-sourced at
https://github.com/yfguo91/NeuroCLIP.git.
"
Leveraging Few-Shot Data Augmentation and Waterfall Prompting for  Response Generation,Lea Krause,http://arxiv.org/pdf/2308.01080v1.pdf,2023-08-02,['cs.cl'],2308.01080v1.pdf,"  This paper discusses our approaches for task-oriented conversational
modelling using subjective knowledge, with a particular emphasis on response
generation. Our methodology was shaped by an extensive data analysis that
evaluated key factors such as response length, sentiment, and dialogue acts
present in the provided dataset. We used few-shot learning to augment the data
with newly generated subjective knowledge items and present three approaches
for DSTC11: (1) task-specific model exploration, (2) incorporation of the most
frequent question into all generated responses, and (3) a waterfall prompting
technique using a combination of both GPT-3 and ChatGPT.
"
Making Pre-trained Language Models Better Few-shot Learners,Tianyu Gao,http://arxiv.org/pdf/2012.15723v2.pdf,2020-12-31,"['cs.cl', 'cs.lg']",2012.15723v2.pdf,"  The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot
performance solely by leveraging a natural-language prompt and a few task
demonstrations as input context. Inspired by their findings, we study few-shot
learning in a more practical scenario, where we use smaller language models for
which fine-tuning is computationally efficient. We present LM-BFF--better
few-shot fine-tuning of language models--a suite of simple and complementary
techniques for fine-tuning language models on a small number of annotated
examples. Our approach includes (1) prompt-based fine-tuning together with a
novel pipeline for automating prompt generation; and (2) a refined strategy for
dynamically and selectively incorporating demonstrations into each context.
Finally, we present a systematic evaluation for analyzing few-shot performance
on a range of NLP tasks, including classification and regression. Our
experiments demonstrate that our methods combine to dramatically outperform
standard fine-tuning procedures in this low resource setting, achieving up to
30% absolute improvement, and 11% on average across all tasks. Our approach
makes minimal assumptions on task resources and domain expertise, and hence
constitutes a strong task-agnostic method for few-shot learning.
"
GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain,Milad Moradi,http://arxiv.org/pdf/2109.02555v2.pdf,2021-09-06,"['cs.cl', 'cs.ai', 'cs.lg']",2109.02555v2.pdf,"  Deep neural language models have set new breakthroughs in many tasks of
Natural Language Processing (NLP). Recent work has shown that deep transformer
language models (pretrained on large amounts of texts) can achieve high levels
of task-specific few-shot performance comparable to state-of-the-art models.
However, the ability of these large language models in few-shot transfer
learning has not yet been explored in the biomedical domain. We investigated
the performance of two powerful transformer language models, i.e. GPT-3 and
BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental
results showed that, to a great extent, both the models underperform a language
model fine-tuned on the full training data. Although GPT-3 had already achieved
near state-of-the-art results in few-shot knowledge transfer on open-domain NLP
tasks, it could not perform as effectively as BioBERT, which is orders of
magnitude smaller than GPT-3. Given that BioBERT was already pretrained on
large biomedical text corpora, our study suggests that language models may
largely benefit from in-domain pretraining in task-specific few-shot learning.
However, in-domain pretraining seems not to be sufficient; novel pretraining
and few-shot learning strategies are required in the biomedical NLP domain.
"
PPT: Pre-trained Prompt Tuning for Few-shot Learning,Yuxian Gu,http://arxiv.org/pdf/2109.04332v3.pdf,2021-09-09,['cs.cl'],2109.04332v3.pdf,"  Prompts for pre-trained language models (PLMs) have shown remarkable
performance by bridging the gap between pre-training tasks and various
downstream tasks. Among these methods, prompt tuning, which freezes PLMs and
only tunes soft prompts, provides an efficient and effective solution for
adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to
be fully explored. In our pilot experiments, we find that prompt tuning
performs comparably with conventional full-model fine-tuning when downstream
data are sufficient, whereas it performs much worse under few-shot learning
settings, which may hinder the application of prompt tuning in practice. We
attribute this low performance to the manner of initializing soft prompts.
Therefore, in this work, we propose to pre-train prompts by adding soft prompts
into the pre-training stage to obtain a better initialization. We name this
Pre-trained Prompt Tuning framework ""PPT"". To ensure the generalization of PPT,
we formulate similar classification tasks into a unified task form and
pre-train soft prompts for this unified task. Extensive experiments show that
tuning pre-trained prompts for downstream tasks can reach or even outperform
full-model fine-tuning under both full-data and few-shot settings. Our approach
is effective and efficient for using large-scale PLMs in practice.
"
Yuan 1.0: Large-Scale Pre-trained Language Model in Zero-Shot and  Few-Shot Learning,Shaohua Wu,http://arxiv.org/pdf/2110.04725v2.pdf,2021-10-10,"['cs.cl', 'cs.ai']",2110.04725v2.pdf,"  Recent work like GPT-3 has demonstrated excellent performance of Zero-Shot
and Few-Shot learning on many natural language processing (NLP) tasks by
scaling up model size, dataset size and the amount of computation. However,
training a model like GPT-3 requires a huge amount of computational resources,
which makes it challenging for researchers. In this work, we propose a method
that incorporates large-scale distributed training performance into model
architecture design. With this method, Yuan 1.0, the current largest singleton
language model with 245B parameters, achieves excellent performance on
thousands of GPUs during training, and state-of-the-art results on NLP tasks.
A data processing method is designed to efficiently filter massive amounts of
raw data. The current largest high-quality Chinese corpus, with 5TB of
high-quality text, is built based on this method. In addition, a calibration and label
expansion method is proposed to improve the Zero-Shot and Few-Shot performance,
and steady improvement is observed on the accuracy of various tasks. Yuan 1.0
presents strong capacity of natural language generation, and the generated
articles are difficult to distinguish from the human-written ones.
"
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot  Learners,Yaqing Wang,http://arxiv.org/pdf/2110.06274v2.pdf,2021-10-12,['cs.cl'],2110.06274v2.pdf,"  We present LiST (short for Lite Prompted Self-Training), a new method for
parameter-efficient fine-tuning of large pre-trained language models (PLMs) for
few-shot learning. LiST improves over recent methods that adopt prompt-based
fine-tuning (FN) using two key techniques. The first is the use of
self-training to leverage large amounts of unlabeled data for prompt-based FN
in few-shot settings. We use self-training in conjunction with meta-learning
for re-weighting noisy pseudo-prompt labels. Self-training is expensive as it
requires updating all the model parameters repetitively. Therefore, we use a
second technique for light-weight fine-tuning where we introduce a small number
of task-specific parameters that are fine-tuned during self-training while
keeping the PLM encoder frozen. Our experiments show that LiST can effectively
leverage unlabeled data to improve the model performance for few-shot learning.
Additionally, the fine-tuning is efficient as it only updates a small
percentage of parameters and the overall model footprint is reduced since
several tasks can share a common PLM encoder as backbone. A comprehensive study
on six NLU tasks demonstrates that LiST improves by 35% over classic fine-tuning
and 6% over prompt-based FN with 96% reduction in number of trainable
parameters when fine-tuned with no more than 30 labeled examples from each
task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context
learning by 33% on few-shot NLU tasks.
"
PERFECT: Prompt-free and Efficient Few-shot Learning with Language  Models,Rabeeh Karimi Mahabadi,http://arxiv.org/pdf/2204.01172v2.pdf,2022-04-03,['cs.cl'],2204.01172v2.pdf,"  Current methods for few-shot fine-tuning of pretrained masked language models
(PLMs) require carefully engineered prompts and verbalizers for each new task
to convert examples into a cloze-format that the PLM can score. In this work,
we propose PERFECT, a simple and efficient method for few-shot fine-tuning of
PLMs without relying on any such handcrafting, which is highly effective given
as few as 32 data points. PERFECT makes two key design choices: First, we show
that manually engineered task prompts can be replaced with task-specific
adapters that enable sample-efficient fine-tuning and reduce memory and storage
costs by roughly factors of 5 and 100, respectively. Second, instead of using
handcrafted verbalizers, we learn new multi-token label embeddings during
fine-tuning, which are not tied to the model vocabulary and which allow us to
avoid complex auto-regressive decoding. These embeddings are not only learnable
from limited data but also enable nearly 100x faster training and inference.
Experiments on a wide range of few-shot NLP tasks demonstrate that PERFECT,
while being simple and efficient, also outperforms existing state-of-the-art
few-shot learning methods. Our code is publicly available at
https://github.com/facebookresearch/perfect.git.
"
On the Effect of Pretraining Corpora on In-context Learning by a  Large-scale Language Model,Seongjin Shin,http://arxiv.org/pdf/2204.13509v2.pdf,2022-04-28,['cs.cl'],2204.13509v2.pdf,"  Many recent studies on large-scale language models have reported successful
in-context zero- and few-shot learning ability. However, the in-depth analysis
of when in-context learning occurs is still lacking. For example, it is unknown
how in-context learning performance changes as the training corpus varies.
Here, we investigate the effects of the source and size of the pretraining
corpus on in-context learning in HyperCLOVA, a Korean-centric GPT-3 model. From
our in-depth investigation, we introduce the following observations: (1)
in-context learning performance heavily depends on the corpus domain source,
and the size of the pretraining corpus does not necessarily determine the
emergence of in-context learning, (2) in-context learning ability can emerge
when a language model is trained on a combination of multiple corpora, even
when each corpus does not result in in-context learning on its own, (3)
pretraining with a corpus related to a downstream task does not always
guarantee the competitive in-context learning performance of the downstream
task, especially in the few-shot setting, and (4) the relationship between
language modeling (measured in perplexity) and in-context learning does not
always correlate: e.g., low perplexity does not always imply high in-context
few-shot learning performance.
"
Few-Shot Stance Detection via Target-Aware Prompt Distillation,Yan Jiang,http://arxiv.org/pdf/2206.13214v1.pdf,2022-06-27,['cs.cl'],2206.13214v1.pdf,"  Stance detection aims to identify whether the author of a text is in favor
of, against, or neutral to a given target. The main challenge of this task
is two-fold: few-shot learning resulting from the varying targets and the
lack of contextual information of the targets. Existing works mainly focus on
solving the second issue by designing attention-based models or introducing
noisy external knowledge, while the first issue remains under-explored. In this
paper, inspired by the potential capability of pre-trained language models
(PLMs) serving as knowledge bases and few-shot learners, we propose to
introduce prompt-based fine-tuning for stance detection. PLMs can provide
essential contextual information for the targets and enable few-shot learning
via prompts. Considering the crucial role of the target in stance detection
task, we design target-aware prompts and propose a novel verbalizer. Instead of
mapping each label to a concrete word, our verbalizer maps each label to a
vector and picks the label that best captures the correlation between the
stance and the target. Moreover, to alleviate the possible defect of dealing
with varying targets with a single hand-crafted prompt, we propose to distill
the information learned from multiple prompts. Experimental results show the
superior performance of our proposed model in both full-data and few-shot
scenarios.
"
Few-Shot Learning for Clinical Natural Language Processing Using Siamese  Neural Networks,David Oniani,http://arxiv.org/pdf/2208.14923v2.pdf,2022-08-31,['cs.cl'],2208.14923v2.pdf,"  Clinical Natural Language Processing (NLP) has become an emerging technology
in healthcare that leverages a large amount of free-text data in electronic
health records (EHRs) to improve patient care, support clinical decisions, and
facilitate clinical and translational science research. Recently, deep learning
has achieved state-of-the-art performance in many clinical NLP tasks. However,
training deep learning models usually requires large annotated datasets, which
are normally not publicly available and can be time-consuming to build in
clinical domains. Working with smaller annotated datasets is typical in
clinical NLP and therefore, ensuring that deep learning models perform well is
crucial for the models to be used in real-world applications. A widely adopted
approach is fine-tuning existing Pre-trained Language Models (PLMs), but these
attempts fall short when the training dataset contains only a few annotated
samples. Few-Shot Learning (FSL) has recently been investigated to tackle this
problem. Siamese Neural Network (SNN) has been widely utilized as an FSL
approach in computer vision, but has not been studied well in NLP. Furthermore,
the literature on its applications in clinical domains is scarce. In this
paper, we propose two SNN-based FSL approaches for clinical NLP, including
Pre-Trained SNN (PT-SNN) and SNN with Second-Order Embeddings (SOE-SNN). We
evaluated the proposed approaches on two clinical tasks, namely clinical text
classification and clinical named entity recognition. We tested three few-shot
settings including 4-shot, 8-shot, and 16-shot learning. Both clinical NLP
tasks were benchmarked using three PLMs, including BERT, BioBERT, and
BioClinicalBERT. The experimental results verified the effectiveness of the
proposed SNN-based FSL approaches in both NLP tasks.
"
Prompting through Prototype: A Prototype-based Prompt Learning on  Pretrained Vision-Language Models,Yue Zhang,http://arxiv.org/pdf/2210.10841v1.pdf,2022-10-19,"['cs.cl', 'cs.cv']",2210.10841v1.pdf,"  Prompt learning is a new learning paradigm which reformulates downstream
tasks as similar pretraining tasks on pretrained models by leveraging textual
prompts. Recent works have demonstrated that prompt learning is particularly
useful for few-shot learning, where there is limited training data. Depending
on the granularity of prompts, those methods can be roughly divided into
task-level prompting and instance-level prompting. Task-level prompting methods
learn one universal prompt for all input samples, which is efficient but
ineffective at capturing subtle differences among different classes.
Instance-level prompting methods learn a specific prompt for each input, which
is effective but inefficient. In this work, we develop a novel prototype-based
prompt learning method to overcome the above limitations. In particular, we
focus on few-shot image recognition tasks on pretrained vision-language models
(PVLMs) and develop a method of prompting through prototype (PTP), where we
define $K$ image prototypes and $K$ prompt prototypes. In PTP, the image
prototype represents a centroid of a certain image cluster in the latent space
and a prompt prototype is defined as a soft prompt in the continuous space. The
similarity between a query image and an image prototype determines how much
this prediction relies on the corresponding prompt prototype. Hence, in PTP,
similar images will utilize similar prompting ways. Through extensive
experiments on seven real-world benchmarks, we show that PTP is an effective
method to leverage the latent knowledge and is adaptable to various PVLMs.
Moreover, through detailed analysis, we discuss pros and cons for prompt
learning and parameter-efficient fine-tuning under the context of few-shot
learning.
"
SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for  Few-shot Image Classification,Fang Peng,http://arxiv.org/pdf/2211.16191v2.pdf,2022-11-28,"['cs.cv', 'cs.mm']",2211.16191v2.pdf,"  Although significant progress has been made in few-shot learning, most
existing few-shot image classification methods require supervised pre-training
on a large amount of samples of base classes, which limits their generalization
ability in real world application. Recently, large-scale Vision-Language
Pre-trained models (VLPs) have been gaining increasing attention in few-shot
learning because they can provide a new paradigm for transferable visual
representation learning with easily available text on the Web. However, the
VLPs may neglect detailed visual information that is difficult to describe by
language sentences, but important for learning an effective classifier to
distinguish different images. To address the above problem, we propose a new
framework, named Semantic-guided Visual Adapting (SgVA), which can effectively
extend vision-language pre-trained models to produce discriminative adapted
visual features by comprehensively using an implicit knowledge distillation, a
vision-specific contrastive loss, and a cross-modal contrastive loss. The
implicit knowledge distillation is designed to transfer the fine-grained
cross-modal knowledge to guide the updating of the vision adapter.
State-of-the-art results on 13 datasets demonstrate that the adapted visual
features can well complement the cross-modal features to improve few-shot image
classification.
"
Finetune like you pretrain: Improved finetuning of zero-shot vision  models,Sachin Goyal,http://arxiv.org/pdf/2212.00638v1.pdf,2022-12-01,"['cs.cv', 'cs.lg']",2212.00638v1.pdf,"  Finetuning image-text models such as CLIP achieves state-of-the-art
accuracies on a variety of benchmarks. However, recent works like WiseFT
(Wortsman et al., 2021) and LP-FT (Kumar et al., 2022) have shown that even
subtle differences in the finetuning process can lead to surprisingly large
differences in the final performance, both for in-distribution (ID) and
out-of-distribution (OOD) data. In this work, we show that a natural and simple
approach of mimicking contrastive pretraining consistently outperforms
alternative finetuning approaches. Specifically, we cast downstream class
labels as text prompts and continue optimizing the contrastive loss between
image embeddings and class-descriptive prompt embeddings (contrastive
finetuning).
  Our method consistently outperforms baselines across 7 distribution shifts, 6
transfer learning, and 3 few-shot learning benchmarks. On WILDS-iWILDCam, our
proposed approach FLYP outperforms the top of the leaderboard by $2.3\%$ ID and
$2.7\%$ OOD, giving the highest reported accuracy. Averaged across 7 OOD
datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of
$4.2\%$ OOD over standard finetuning and outperforms the current state of the
art (LP-FT) by more than $1\%$ both ID and OOD. Similarly, on 3 few-shot
learning benchmarks, our approach gives gains up to $4.6\%$ over standard
finetuning and $4.4\%$ over the state of the art. In total, these benchmarks
establish contrastive finetuning as a simple, intuitive, and state-of-the-art
approach for supervised finetuning of image-text models like CLIP. Code is
available at https://github.com/locuslab/FLYP.
"
Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with  Multimodal Models,Zhiqiu Lin,http://arxiv.org/pdf/2301.06267v4.pdf,2023-01-16,"['cs.cv', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2301.06267v4.pdf,"  The ability to quickly learn a new task with minimal instruction - known as
few-shot learning - is a central aspect of intelligent agents. Classical
few-shot benchmarks make use of few-shot samples from a single modality, but
such samples may not be sufficient to characterize an entire concept class. In
contrast, humans use cross-modal information to learn new concepts efficiently.
In this work, we demonstrate that one can indeed build a better ${\bf visual}$
dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them
bark. To do so, we exploit the fact that recent multimodal foundation models
such as CLIP are inherently cross-modal, mapping different modalities to the
same representation space. Specifically, we propose a simple cross-modal
adaptation approach that learns from few-shot examples spanning different
modalities. By repurposing class names as additional one-shot training samples,
we achieve SOTA results with an embarrassingly simple linear classifier for
vision-language adaptation. Furthermore, we show that our approach can benefit
existing methods such as prefix tuning, adapters, and classifier ensembling.
Finally, to explore other modalities beyond vision and language, we construct
the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal
training to improve the performance of both image and audio classification.
"
AugGPT: Leveraging ChatGPT for Text Data Augmentation,Haixing Dai,http://arxiv.org/pdf/2302.13007v3.pdf,2023-02-25,"['cs.cl', 'cs.ai', 'cs.lg']",2302.13007v3.pdf,"  Text data augmentation is an effective strategy for overcoming the challenge
of limited sample sizes in many natural language processing (NLP) tasks. This
challenge is especially prominent in the few-shot learning scenario, where the
data in the target domain is generally much scarcer and of lowered quality. A
natural and widely-used strategy to mitigate such challenges is to perform data
augmentation to better capture the data invariance and increase the sample
size. However, current text data augmentation methods either can't ensure the
correct labeling of the generated data (lacking faithfulness) or can't ensure
sufficient diversity in the generated data (lacking compactness), or both.
Inspired by the recent success of large language models, especially the
development of ChatGPT, which demonstrated improved language comprehension
abilities, in this work, we propose a text data augmentation approach based on
ChatGPT (named AugGPT). AugGPT rephrases each sentence in the training samples
into multiple conceptually similar but semantically different samples. The
augmented samples can then be used in downstream model training. Experiment
results on few-shot learning text classification tasks show the superior
performance of the proposed AugGPT approach over state-of-the-art text data
augmentation methods in terms of testing accuracy and distribution of the
augmented samples.
"
Meta Learning to Bridge Vision and Language Models for Multimodal  Few-Shot Learning,Ivona Najdenkoska,http://arxiv.org/pdf/2302.14794v1.pdf,2023-02-28,['cs.cv'],2302.14794v1.pdf,"  Multimodal few-shot learning is challenging due to the large domain gap
between vision and language modalities. Existing methods are trying to
communicate visual concepts as prompts to frozen language models, but rely on
hand-engineered task induction to reduce the hypothesis space. To make the
whole process learnable, we introduce a multimodal meta-learning approach.
Specifically, our approach decomposes the training of the model into a set of
related multimodal few-shot tasks. We define a meta-mapper network, acting as a
meta-learner, to efficiently bridge frozen large-scale vision and language
models and leverage their already learned capacity. By updating the learnable
parameters only of the meta-mapper, it learns to accrue shared meta-knowledge
among these tasks. Thus, it can rapidly adapt to newly presented samples with
only a few gradient updates. Importantly, it induces the task in a completely
data-driven manner, with no need for a hand-engineered task induction. We
evaluate our approach on recently proposed multimodal few-shot benchmarks,
measuring how rapidly the model can bind novel visual concepts to words and
answer visual questions by observing only a limited set of labeled examples.
The experimental results show that our meta-learning approach outperforms the
baseline across multiple datasets and various training settings while being
computationally more efficient.
"
Semantic Prompt for Few-Shot Image Recognition,Wentao Chen,http://arxiv.org/pdf/2303.14123v1.pdf,2023-03-24,['cs.cv'],2303.14123v1.pdf,"  Few-shot learning is a challenging problem since only a few examples are
provided to recognize a new class. Several recent studies exploit additional
semantic information, e.g. text embeddings of class names, to address the issue
of rare samples through combining semantic prototypes with visual prototypes.
However, these methods still suffer from the spurious visual features learned
from the rare support samples, resulting in limited benefits. In this paper, we
propose a novel Semantic Prompt (SP) approach for few-shot learning. Instead of
the naive exploitation of semantic information for remedying classifiers, we
explore leveraging semantic information as prompts to tune the visual feature
extraction network adaptively. Specifically, we design two complementary
mechanisms to insert semantic prompts into the feature extractor: one is to
enable the interaction between semantic prompts and patch embeddings along the
spatial dimension via self-attention, and the other is to supplement visual features
with the transformed semantic prompts along the channel dimension. By combining
these two mechanisms, the feature extractor presents a better ability to attend
to the class-specific features and obtains more generalized image
representations with merely a few support samples. Through extensive
experiments on four datasets, the proposed approach achieves promising results,
improving the 1-shot learning accuracy by 3.67% on average.
"
RPLKG: Robust Prompt Learning with Knowledge Graph,Yewon Kim,http://arxiv.org/pdf/2304.10805v1.pdf,2023-04-21,"['cs.ai', 'cs.lg']",2304.10805v1.pdf,"  Large-scale pre-trained models are known to be transferable,
and to generalize well to unseen datasets. Recently, multimodal
pre-trained models such as CLIP show significant performance improvement in
diverse experiments. However, when the labeled dataset is limited, the
generalization of a new dataset or domain is still challenging. To improve the
generalization performance on few-shot learning, there have been diverse
efforts, such as prompt learning and adapter. However, the current few-shot
adaptation methods are not interpretable, and they require a high computation
cost for adaptation. In this study, we propose a new method, robust prompt
learning with knowledge graph (RPLKG). Based on the knowledge graph, we
automatically design diverse interpretable and meaningful prompt sets. Our
model obtains cached embeddings of prompt sets after one forwarding from a
large pre-trained model. After that, the model optimizes the prompt selection
process with Gumbel-Softmax. In this way, our model is trained using
relatively little memory and learning time. Also, RPLKG selects the optimal
interpretable prompt automatically, depending on the dataset. In summary, RPLKG
is i) interpretable, ii) light on computational resources, and iii) easy to
combine with prior human knowledge. To validate RPLKG, we provide
comprehensive experimental results on few-shot learning, domain generalization
and new class generalization setting. RPLKG shows a significant performance
improvement compared to zero-shot learning and competitive performance against
several prompt learning methods using much lower resources.
"
The CoT Collection: Improving Zero-shot and Few-shot Learning of  Language Models via Chain-of-Thought Fine-Tuning,Seungone Kim,http://arxiv.org/pdf/2305.14045v2.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14045v2.pdf,"  Language models (LMs) with less than 100B parameters are known to perform
poorly on chain-of-thought (CoT) reasoning in contrast to large LMs when
solving unseen tasks. In this work, we aim to equip smaller LMs with the
step-by-step reasoning capability by instruction tuning with CoT rationales. In
order to achieve this goal, we first introduce a new instruction-tuning dataset
called the CoT Collection, which augments the existing Flan Collection
(including only 9 CoT tasks) with an additional 1.84 million rationales across
1,060 tasks. We show that CoT fine-tuning Flan-T5 (3B & 11B) with CoT
Collection enables smaller LMs to have better CoT capabilities on unseen tasks.
On the BIG-Bench-Hard (BBH) benchmark, we report an average improvement of
+4.34% (Flan-T5 3B) and +2.60% (Flan-T5 11B), in terms of zero-shot task
accuracy. Furthermore, we show that instruction tuning with CoT Collection
allows LMs to possess stronger few-shot learning capabilities on 4
domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and
+2.37% (Flan-T5 11B), even outperforming ChatGPT, which uses demonstrations up
to the maximum context length, by a +13.98% margin. Our code, the CoT Collection data, and
model checkpoints are publicly available.
"
Adversarial Robustness of Prompt-based Few-Shot Learning for Natural  Language Understanding,Venkata Prabhakara Sarath Nookala,http://arxiv.org/pdf/2306.11066v2.pdf,2023-06-19,"['cs.cl', 'cs.lg']",2306.11066v2.pdf,"  State-of-the-art few-shot learning (FSL) methods leverage prompt-based
fine-tuning to obtain remarkable results for natural language understanding
(NLU) tasks. While most prior FSL methods focus on improving downstream
task performance, there is a limited understanding of the adversarial
robustness of such methods. In this work, we conduct an extensive study of
several state-of-the-art FSL methods to assess their robustness to adversarial
perturbations. To better understand the impact of various factors towards
robustness (or the lack of it), we evaluate prompt-based FSL methods against
fully fine-tuned models for aspects such as the use of unlabeled data, multiple
prompts, number of few-shot examples, model size and type. Our results on six
GLUE tasks indicate that compared to fully fine-tuned models, vanilla FSL
methods lead to a notable relative drop in task performance (i.e., are less
robust) in the face of adversarial perturbations. However, using (i) unlabeled
data for prompt-based FSL and (ii) multiple prompts flip the trend. We further
demonstrate that increasing the number of few-shot examples and model size lead
to increased adversarial robustness of vanilla FSL methods. Broadly, our work
sheds light on the adversarial robustness evaluation of prompt-based FSL
methods for NLU tasks.
"
Few-shot Learning for Inference in Medical Imaging with Subspace Feature  Representations,Jiahui Liu,http://arxiv.org/pdf/2306.11152v1.pdf,2023-06-19,"['math.na', 'cs.na']",2306.11152v1.pdf,"  Unlike the field of visual scene recognition where tremendous advances have
taken place due to the availability of very large datasets to train deep neural
networks, inference from medical images is often hampered by the fact that only
small amounts of data may be available. When working with very small dataset
problems, of the order of a few hundred items of data, the power of deep
learning may still be exploited by using a model pre-trained on natural images
as a feature extractor and carrying out classic pattern recognition techniques
in this feature space, the so-called few-shot learning problem. In regimes
where the dimension of this feature space is comparable to or even larger than
the number of items of data, dimensionality reduction is a necessity and is
often achieved by principal component analysis, i.e., singular value
decomposition (SVD). In this paper, noting the inappropriateness of using SVD
for this setting, we usher in and explore two alternatives based on
discriminant analysis and non-negative matrix factorization (NMF). Using 14
different datasets spanning $11$ distinct disease types, we demonstrate that
discriminant subspaces at low dimensions achieve significant improvements over
SVD-based subspaces and the original feature space. We also show that NMF at
modest dimensions is a competitive alternative to SVD in this setting.
"
Visually grounded few-shot word learning in low-resource settings,Leanne Nortje,http://arxiv.org/pdf/2306.11371v2.pdf,2023-06-20,"['eess.as', 'cs.cl']",2306.11371v2.pdf,"  We propose a visually grounded speech model that learns new words and their
visual depictions from just a few word-image example pairs. Given a set of test
images and a spoken query, we ask the model which image depicts the query word.
Previous work has simplified this few-shot learning problem by either using an
artificial setting with digit word-image pairs or by using a large number of
examples per class. Moreover, all previous studies were performed using English
speech-image data. We propose an approach that can work on natural word-image
pairs but with fewer examples, i.e., fewer shots, and then illustrate how this
approach can be applied for multimodal few-shot learning in a real low-resource
language, Yoruba. Our approach involves using the given word-image example
pairs to mine new unsupervised word-image training pairs from large collections
of unlabelled speech and images. Additionally, we use a word-to-image attention
mechanism to determine word-image similarity. With this new model, we achieve
better performance with fewer shots than previous approaches on an existing
English benchmark. Many of the model's mistakes are due to confusion between
visual concepts co-occurring in similar contexts. The experiments on Yoruba
show the benefit of transferring knowledge from a multimodal model trained on a
larger set of English speech-image data.
"
Cross-Modal Concept Learning and Inference for Vision-Language Models,Yi Zhang,http://arxiv.org/pdf/2307.15460v1.pdf,2023-07-28,"['cs.cv', 'cs.cl']",2307.15460v1.pdf,"  Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP,
establish the correlation between texts and images, achieving remarkable
success on various downstream tasks with fine-tuning. In existing fine-tuning
methods, the class-specific text description is matched against the whole
image. We recognize that this whole image matching is not effective since
images from the same class often contain a set of different semantic objects,
and an object further consists of a set of semantic parts or concepts.
Individual semantic parts or concepts may appear in image samples from
different classes. To address this issue, in this paper, we develop a new
method called cross-modal concept learning and inference (CCLI). Using the
powerful text-image correlation capability of CLIP, our method automatically
learns a large set of distinctive visual concepts from images using a set of
semantic text concepts. Based on these visual concepts, we construct a
discriminative representation of images and learn a concept inference network
to perform downstream image classification tasks, such as few-shot learning and
domain generalization. Extensive experimental results demonstrate that our CCLI
method is able to improve the performance upon the current state-of-the-art
methods by large margins, for example, by up to 8.0% improvement on few-shot
learning and by up to 1.3% for domain generalization.
"
Demonstration-based learning for few-shot biomedical named entity  recognition under machine reading comprehension,Leilei Su,http://arxiv.org/pdf/2308.06454v1.pdf,2023-08-12,['cs.cl'],2308.06454v1.pdf,"  Although deep learning techniques have shown significant achievements, they
frequently depend on extensive amounts of hand-labeled data and tend to perform
inadequately in few-shot scenarios. The objective of this study is to devise a
strategy that can improve the model's capability to recognize biomedical
entities in scenarios of few-shot learning. By redefining biomedical named
entity recognition (BioNER) as a machine reading comprehension (MRC) problem,
we propose a demonstration-based learning method to address few-shot BioNER,
which involves constructing appropriate task demonstrations. In assessing our
proposed method, we compared the proposed method with existing advanced methods
using six benchmark datasets, including BC4CHEMD, BC5CDR-Chemical,
BC5CDR-Disease, NCBI-Disease, BC2GM, and JNLPBA. We examined the models'
efficacy by reporting F1 scores from both the 25-shot and 50-shot learning
experiments. In 25-shot learning, we observed 1.1% improvements in the average
F1 scores compared to the baseline method, reaching 61.7%, 84.1%, 69.1%, 70.1%,
50.6%, and 59.9% on six datasets, respectively. In 50-shot learning, we further
improved the average F1 scores by 1.0% compared to the baseline method,
reaching 73.1%, 86.8%, 76.1%, 75.6%, 61.7%, and 65.4%, respectively. We
reported that in the realm of few-shot learning BioNER, MRC-based language
models are much more proficient in recognizing biomedical entities compared to
the sequence labeling approach. Furthermore, our MRC-language models can
compete successfully with fully-supervised learning methodologies that rely
heavily on the availability of abundant annotated data. These results highlight
possible pathways for future advancements in few-shot BioNER methodologies.
"
Robustness Over Time: Understanding Adversarial Examples' Effectiveness  on Longitudinal Versions of Large Language Models,Yugeng Liu,http://arxiv.org/pdf/2308.07847v1.pdf,2023-08-15,['cs.cr'],2308.07847v1.pdf,"  Large Language Models (LLMs) have led to significant improvements in many
tasks across various domains, such as code interpretation, response generation,
and ambiguity handling. These LLMs, however, when upgraded, primarily
prioritize enhancing user experience while neglecting security, privacy, and
safety implications. Consequently, unintended vulnerabilities or biases can be
introduced. Previous studies have predominantly focused on specific versions of
the models and disregarded the potential emergence of new attack vectors
targeting the updated versions. Through the lens of adversarial examples within
the in-context learning framework, this longitudinal study addresses this gap
by conducting a comprehensive assessment of the robustness of successive
versions of LLMs, vis-à-vis GPT-3.5. We conduct extensive experiments to
analyze and understand the impact of the robustness in two distinct learning
categories: zero-shot learning and few-shot learning. Our findings indicate
that, in comparison to earlier versions of LLMs, the updated versions do not
exhibit the anticipated level of robustness against adversarial attacks. In
addition, our study emphasizes the increased effectiveness of synergized
adversarial queries in most zero-shot learning and few-shot learning cases. We
hope that our study can lead to a more refined assessment of the robustness of
LLMs over time and provide valuable insights into these models for both
developers and users.
"
UniAP: Towards Universal Animal Perception in Vision via Few-shot  Learning,Meiqi Sun,http://arxiv.org/pdf/2308.09953v1.pdf,2023-08-19,['cs.cv'],2308.09953v1.pdf,"  Animal visual perception is an important technique for automatically
monitoring animal health, understanding animal behaviors, and assisting
animal-related research. However, it is challenging to design a deep
learning-based perception model that can freely adapt to different animals
across various perception tasks, due to the varying poses of a large diversity
of animals, lacking data on rare species, and the semantic inconsistency of
different tasks. We introduce UniAP, a novel Universal Animal Perception model
that leverages few-shot learning to enable cross-species perception among
various visual tasks. Our proposed model takes support images and labels as
prompt guidance for a query image. Images and labels are processed through a
Transformer-based encoder and a lightweight label encoder, respectively. Then a
matching module is designed for aggregating information between prompt guidance
and the query image, followed by a multi-head label decoder to generate outputs
for various tasks. By capitalizing on the shared visual characteristics among
different animals and tasks, UniAP enables the transfer of knowledge from
well-studied species to those with limited labeled data or even unseen species.
We demonstrate the effectiveness of UniAP through comprehensive experiments in
pose estimation, segmentation, and classification tasks on diverse animal
species, showcasing its ability to generalize and adapt to new classes with
minimal labeled examples.
"
PaLM: Scaling Language Modeling with Pathways,Aakanksha Chowdhery,http://arxiv.org/pdf/2204.02311v5.pdf,2022-04-05,['cs.cl'],2204.02311v5.pdf,"  Large language models have been shown to achieve remarkable performance
across a variety of natural language tasks using few-shot learning, which
drastically reduces the number of task-specific training examples needed to
adapt the model to a particular application. To further our understanding of
the impact of scale on few-shot learning, we trained a 540-billion parameter,
densely activated, Transformer language model, which we call Pathways Language
Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML
system which enables highly efficient training across multiple TPU Pods. We
demonstrate continued benefits of scaling by achieving state-of-the-art
few-shot learning results on hundreds of language understanding and generation
benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough
performance, outperforming the finetuned state-of-the-art on a suite of
multi-step reasoning tasks, and outperforming average human performance on the
recently released BIG-bench benchmark. A significant number of BIG-bench tasks
showed discontinuous improvements from model scale, meaning that performance
steeply increased as we scaled to our largest model. PaLM also has strong
capabilities in multilingual tasks and source code generation, which we
demonstrate on a wide array of benchmarks. We additionally provide a
comprehensive analysis on bias and toxicity, and study the extent of training
data memorization with respect to model scale. Finally, we discuss the ethical
considerations related to large language models and discuss potential
mitigation strategies.
"
Few-Shot Electronic Health Record Coding through Graph Contrastive  Learning,Shanshan Wang,http://arxiv.org/pdf/2106.15467v1.pdf,2021-06-29,"['cs.ai', 'cs.cl']",2106.15467v1.pdf,"  Electronic health record (EHR) coding is the task of assigning ICD codes to
each EHR. Most previous studies either only focus on the frequent ICD codes or
treat rare and frequent ICD codes in the same way. These methods perform well
on frequent ICD codes but due to the extremely unbalanced distribution of ICD
codes, the performance on rare ones is far from satisfactory. We seek to
improve the performance for both frequent and rare ICD codes by using a
contrastive graph-based EHR coding framework, CoGraph, which re-casts EHR
coding as a few-shot learning task. First, we construct a heterogeneous EHR
word-entity (HEWE) graph for each EHR, where the words and entities extracted
from an EHR serve as nodes and the relations between them serve as edges. Then,
CoGraph learns similarities and dissimilarities between HEWE graphs from
different ICD codes so that information can be transferred among them. In a
few-shot learning scenario, the model only has access to frequent ICD codes
during training, which might force it to encode features that are useful for
frequent ICD codes only. To mitigate this risk, CoGraph devises two graph
contrastive learning schemes, GSCL and GECL, that exploit the HEWE graph
structures so as to encode transferable features. GSCL utilizes the
intra-correlation of different sub-graphs sampled from HEWE graphs while GECL
exploits the inter-correlation among HEWE graphs at different clinical stages.
Experiments on the MIMIC-III benchmark dataset show that CoGraph significantly
outperforms state-of-the-art methods on EHR coding, not only on frequent ICD
codes, but also on rare codes, in terms of several evaluation indicators. On
frequent ICD codes, GSCL and GECL improve the classification accuracy and F1 by
1.31% and 0.61%, respectively, and on rare ICD codes CoGraph has more obvious
improvements by 2.12% and 2.95%.
"
ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language  Understanding and Generation,Yu Sun,http://arxiv.org/pdf/2107.02137v1.pdf,2021-07-05,['cs.cl'],2107.02137v1.pdf,"  Pre-trained models have achieved state-of-the-art results in various Natural
Language Processing (NLP) tasks. Recent works such as T5 and GPT-3 have shown
that scaling up pre-trained language models can improve their generalization
abilities. Particularly, the GPT-3 model with 175 billion parameters shows its
strong task-agnostic zero-shot/few-shot learning capabilities. Despite their
success, these large-scale models are trained on plain texts without
introducing knowledge such as linguistic knowledge and world knowledge. In
addition, most large-scale models are trained in an auto-regressive way. As a
result, this kind of traditional fine-tuning approach demonstrates relatively
weak performance when solving downstream language understanding tasks. In order
to solve the above problems, we propose a unified framework named ERNIE 3.0 for
pre-training large-scale knowledge enhanced models. It fuses auto-regressive
network and auto-encoding network, so that the trained model can be easily
tailored for both natural language understanding and generation tasks with
zero-shot learning, few-shot learning or fine-tuning. We trained the model with
10 billion parameters on a 4TB corpus consisting of plain texts and a
large-scale knowledge graph. Empirical results show that the model outperforms
the state-of-the-art models on 54 Chinese NLP tasks, and its English version
achieves the first place on the SuperGLUE benchmark (July 3, 2021), surpassing
the human performance by +0.8% (90.6% vs. 89.8%).
"
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding  with Text-to-Text Language Models,Tianbao Xie,http://arxiv.org/pdf/2201.05966v3.pdf,2022-01-16,['cs.cl'],2201.05966v3.pdf,"  Structured knowledge grounding (SKG) leverages structured knowledge to
complete user requests, such as semantic parsing over databases and question
answering over knowledge bases. Since the inputs and outputs of SKG tasks are
heterogeneous, they have been studied separately by different communities,
which limits systematic and compatible research on SKG. In this paper, we
overcome this limitation by proposing the UnifiedSKG framework, which unifies
21 SKG tasks into a text-to-text format, aiming to promote systematic SKG
research, instead of being exclusive to a single task, domain, or dataset. We
use UnifiedSKG to benchmark T5 with different sizes and show that T5, with
simple modifications when necessary, achieves state-of-the-art performance on
almost all of the 21 tasks. We further demonstrate that multi-task
prefix-tuning improves the performance on most tasks, largely improving the
overall performance. UnifiedSKG also facilitates the investigation of zero-shot
and few-shot learning, and we show that T0, GPT-3, and Codex struggle in
zero-shot and few-shot learning for SKG. We also use UnifiedSKG to conduct a
series of controlled experiments on structured knowledge encoding variants
across SKG tasks. UnifiedSKG is easily extensible to more tasks, and it is
open-sourced at https://github.com/hkunlp/unifiedskg.
"
A Prompt-based Few-shot Learning Approach to Software Conflict Detection,Robert K. Helmeczi,http://arxiv.org/pdf/2211.02709v1.pdf,2022-11-04,['cs.se'],2211.02709v1.pdf,"  A software requirement specification (SRS) document is an essential part of
the software development life cycle which outlines the requirements that a
software program in development must satisfy. This document is often specified
by a diverse group of stakeholders and is subject to continual change, making
the process of maintaining the document and detecting conflicts between
requirements an essential task in software development. Notably, projects that
do not address conflicts in the SRS document early on face considerable
problems later in the development life cycle. These problems incur substantial
costs in terms of time and money, and these costs often become insurmountable
barriers that ultimately result in the termination of a software project
altogether. As a result, early detection of SRS conflicts is critical to
project sustainability. The conflict detection task is approached in numerous
ways, many of which require a significant amount of manual intervention from
developers, or require access to a large amount of labeled, task-specific
training data. In this work, we propose using a prompt-based learning approach
to perform few-shot learning for conflict detection. We compare our results to
supervised learning approaches that use pretrained language models, such as
BERT and its variants. Our results show that prompting with just 32 labeled
examples can achieve a similar level of performance in many key metrics to that
of supervised learning on training sets that are magnitudes larger in size. In
contrast to many other conflict detection approaches, we make no assumptions
about the type of underlying requirements, allowing us to analyze pairings of
both functional and non-functional requirements. This allows us to omit the
potentially expensive task of filtering out non-functional requirements from
our dataset.
"
"Cross-Lingual Alignment of Contextual Word Embeddings, with Applications  to Zero-shot Dependency Parsing",Tal Schuster,http://arxiv.org/pdf/1902.09492v2.pdf,2019-02-25,"['cs.cl', 'cs.lg']",1902.09492v2.pdf,"  We introduce a novel method for multilingual transfer that utilizes deep
contextual embeddings, pretrained in an unsupervised fashion. While contextual
embeddings have been shown to yield richer representations of meaning compared
to their static counterparts, aligning them poses a challenge due to their
dynamic nature. To this end, we construct context-independent variants of the
original monolingual spaces and utilize their mapping to derive an alignment
for the context-dependent spaces. This mapping readily supports processing of a
target language, improving transfer by context-aware embeddings. Our
experimental results demonstrate the effectiveness of this approach for
zero-shot and few-shot learning of dependency parsing. Specifically, our method
consistently outperforms the previous state-of-the-art on 6 tested languages,
yielding an improvement of 6.8 LAS points on average.
"
Few-shot Natural Language Generation for Task-Oriented Dialog,Baolin Peng,http://arxiv.org/pdf/2002.12328v1.pdf,2020-02-27,['cs.cl'],2002.12328v1.pdf,"  As a crucial component in task-oriented dialog systems, the Natural Language
Generation (NLG) module converts a dialog act represented in a semantic form
into a response in natural language. The success of traditional template-based
or statistical models typically relies on heavily annotated data, which is
infeasible for new domains. Therefore, it is pivotal for an NLG system to
generalize well with limited labelled data in real applications. To this end,
we present FewShotWoz, the first NLG benchmark to simulate the few-shot
learning setting in task-oriented dialog systems. Further, we develop the
SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to
acquire the controllable generation ability, and fine-tuned with only a few
domain-specific labels to adapt to new domains. Experiments on FewShotWoz and
the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly
outperforms existing methods, measured by various automatic metrics and human
evaluations.
"
Alleviating the Incompatibility between Cross Entropy Loss and Episode  Training for Few-shot Skin Disease Classification,Wei Zhu,http://arxiv.org/pdf/2004.09694v1.pdf,2020-04-21,"['eess.iv', 'cs.cv', 'cs.lg']",2004.09694v1.pdf,"  Skin disease classification from images is crucial to dermatological
diagnosis. However, identifying skin lesions involves a variety of aspects in
terms of size, color, shape, and texture. To make matters worse, many
categories only contain very few samples, posing great challenges to
conventional machine learning algorithms and even human experts. Inspired by
the recent success of Few-Shot Learning (FSL) in natural image classification,
we propose to apply FSL to skin disease identification to address the extreme
scarcity of training sample problem. However, directly applying FSL to this
task does not work well in practice, and we find that the problem can be
largely attributed to the incompatibility between Cross Entropy (CE) and
episode training, which are both commonly used in FSL. Based on a detailed
analysis, we propose the Query-Relative (QR) loss, which proves superior to CE
under episode training and is closely related to recently proposed mutual
information estimation. Moreover, we further strengthen the proposed QR loss
with a novel adaptive hard margin strategy. Comprehensive experiments validate
the effectiveness of the proposed FSL scheme and the possibility of diagnosing
rare skin diseases with a few labeled samples.
"
Few-shot learning through contextual data augmentation,Farid Arthaud,http://arxiv.org/pdf/2103.16911v1.pdf,2021-03-31,['cs.cl'],2103.16911v1.pdf,"  Machine translation (MT) models used in industries with constantly changing
topics, such as translation or news agencies, need to adapt to new data to
maintain their performance over time. Our aim is to teach a pre-trained MT
model to translate previously unseen words accurately, based on very few
examples. We propose (i) an experimental setup allowing us to simulate novel
vocabulary appearing in human-submitted translations, and (ii) corresponding
evaluation metrics to compare our approaches. We extend a data augmentation
approach using a pre-trained language model to create training examples with
similar contexts for novel words. We compare different fine-tuning and data
augmentation approaches and show that adaptation on the scale of one to five
examples is possible. Combining data augmentation with randomly selected
training sentences leads to the highest BLEU score and accuracy improvements.
Impressively, with only 1 to 5 examples, our model reports better accuracy
scores than a reference system trained with on average 313 parallel examples.
"
Meta-Learning GNN Initializations for Low-Resource Molecular Property  Prediction,Cuong Q. Nguyen,http://arxiv.org/pdf/2003.05996v2.pdf,2020-03-12,"['cs.lg', 'physics.chem-ph', 'stat.ml']",2003.05996v2.pdf,"  Building in silico models to predict chemical properties and activities is a
crucial step in drug discovery. However, limited labeled data often hinders the
application of deep learning in this setting. Meanwhile advances in
meta-learning have enabled state-of-the-art performances in few-shot learning
benchmarks, naturally prompting the question: Can meta-learning improve deep
learning performance in low-resource drug discovery projects? In this work, we
assess the transferability of graph neural networks initializations learned by
the Model-Agnostic Meta-Learning (MAML) algorithm - and its variants FO-MAML
and ANIL - for chemical properties and activities tasks. Using the ChEMBL20
dataset to emulate low-resource settings, our benchmark shows that
meta-initializations perform comparably to or outperform multi-task
pre-training baselines on 16 out of 20 in-distribution tasks and on all
out-of-distribution tasks, providing an average improvement in AUPRC of 11.2%
and 26.9% respectively. Finally, we observe that meta-initializations
consistently result in the best performing models across fine-tuning sets with
$k \in \{16, 32, 64, 128, 256\}$ instances.
"
Neural Data Augmentation via Example Extrapolation,Kenton Lee,http://arxiv.org/pdf/2102.01335v1.pdf,2021-02-02,"['cs.cl', 'cs.ai']",2102.01335v1.pdf,"  In many applications of machine learning, certain categories of examples may
be underrepresented in the training data, causing systems to underperform on
such ""few-shot"" cases at test time. A common remedy is to perform data
augmentation, such as by duplicating underrepresented examples, or
heuristically synthesizing new examples. But these remedies often fail to cover
the full diversity and complexity of real examples.
  We propose a data augmentation approach that performs neural Example
Extrapolation (Ex2). Given a handful of exemplars sampled from some
distribution, Ex2 synthesizes new examples that also belong to the same
distribution. The Ex2 model is learned by simulating the example generation
procedure on data-rich slices of the data, and it is applied to
underrepresented, few-shot slices.
  We apply Ex2 to a range of language understanding tasks and significantly
improve over state-of-the-art methods on multiple few-shot learning benchmarks,
including for relation extraction (FewRel) and intent classification + slot
filling (SNIPS).
"
One-shot learning for the long term: consolidation with an artificial  hippocampal algorithm,Gideon Kowadlo,http://arxiv.org/pdf/2102.07503v2.pdf,2021-02-15,"['cs.lg', 'cs.ai', 'cs.ne', 'i.2.6; i.5.0; i.5.1']",2102.07503v2.pdf,"  Standard few-shot experiments involve learning to efficiently match
previously unseen samples by class. We claim that few-shot learning should be
long term, assimilating knowledge for the future, without forgetting previous
concepts. In the mammalian brain, the hippocampus is understood to play a
significant role in this process, by learning rapidly and consolidating
knowledge to the neocortex incrementally over a short period. In this research
we tested whether an artificial hippocampal algorithm (AHA), could be used with
a conventional Machine Learning (ML) model that learns incrementally analogous
to the neocortex, to achieve one-shot learning both short and long term. The
results demonstrated that with the addition of AHA, the system could learn in
one-shot and consolidate the knowledge for the long term without catastrophic
forgetting. This study is one of the first examples of using a CLS model of
hippocampus to consolidate memories, and it constitutes a step toward few-shot
continual learning.
"
Calibrate Before Use: Improving Few-Shot Performance of Language Models,Tony Z. Zhao,http://arxiv.org/pdf/2102.09690v2.pdf,2021-02-19,"['cs.cl', 'cs.lg']",2102.09690v2.pdf,"  GPT-3 can perform numerous tasks when provided a natural language prompt that
contains a few training examples. We show that this type of few-shot learning
can be unstable: the choice of prompt format, training examples, and even the
order of the training examples can cause accuracy to vary from near chance to
near state-of-the-art. We demonstrate that this instability arises from the
bias of language models towards predicting certain answers, e.g., those that
are placed near the end of the prompt or are common in the pre-training data.
To mitigate this, we first estimate the model's bias towards each answer by
asking for its prediction when given the training prompt and a content-free
test input such as ""N/A"". We then fit calibration parameters that cause the
prediction for this input to be uniform across answers. On a diverse set of
tasks, this contextual calibration procedure substantially improves GPT-3 and
GPT-2's average accuracy (up to 30.0% absolute) and reduces variance across
different choices of the prompt.
"
The Power of Scale for Parameter-Efficient Prompt Tuning,Brian Lester,http://arxiv.org/pdf/2104.08691v2.pdf,2021-04-18,['cs.cl'],2104.08691v2.pdf,"  In this work, we explore ""prompt tuning"", a simple yet effective mechanism
for learning ""soft prompts"" to condition frozen language models to perform
specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft
prompts are learned through backpropagation and can be tuned to incorporate
signal from any number of labeled examples. Our end-to-end learned approach
outperforms GPT-3's ""few-shot"" learning by a large margin. More remarkably,
through ablations on model size using T5, we show that prompt tuning becomes
more competitive with scale: as models exceed billions of parameters, our
method ""closes the gap"" and matches the strong performance of model tuning
(where all model weights are tuned). This finding is especially relevant in
that large models are costly to share and serve, and the ability to reuse one
frozen model for multiple downstream tasks can ease this burden. Our method can
be seen as a simplification of the recently proposed ""prefix tuning"" of Li and
Liang (2021), and we provide a comparison to this and other similar approaches.
Finally, we show that conditioning a frozen model with soft prompts confers
benefits in robustness to domain transfer, as compared to full model tuning.
"
What's in a Measurement? Using GPT-3 on SemEval 2021 Task 8 -- MeasEval,Curt Kohler,http://arxiv.org/pdf/2106.14720v1.pdf,2021-06-28,['cs.cl'],2106.14720v1.pdf,"  In the summer of 2020 OpenAI released its GPT-3 autoregressive language model
to much fanfare. While the model has shown promise on tasks in several areas,
it has not always been clear when the results were cherry-picked or when they
were the unvarnished output. We were particularly interested in what benefits
GPT-3 could bring to the SemEval 2021 MeasEval task - identifying measurements
and their associated attributes in scientific literature. We had already
experimented with multi-turn questions answering as a solution to this task. We
wanted to see if we could use GPT-3's few-shot learning capabilities to more
easily develop a solution that would have better performance than our prior
work. Unfortunately, we have not been successful in that effort. This paper
discusses the approach we used, challenges we encountered, and results we
observed. Some of the problems we encountered were simply due to the state of
the art. For example, the limits on the size of the prompt and answer limited
the amount of the training signal that could be offered. Others are more
fundamental. We are unaware of generative models that excel in retaining
factual information. Also, the impact of changes in the prompts is
unpredictable, making it hard to reliably improve performance.
"
FLEX: Unifying Evaluation for Few-Shot NLP,Jonathan Bragg,http://arxiv.org/pdf/2107.07170v2.pdf,2021-07-15,"['cs.cl', 'cs.lg', 'i.2.7']",2107.07170v2.pdf,"  Few-shot NLP research is highly active, yet conducted in disjoint research
threads with evaluation suites that lack challenging-yet-realistic testing
setups and fail to employ careful experimental design. Consequently, the
community does not know which techniques perform best or even if they
outperform simple baselines. In response, we formulate the FLEX Principles, a
set of requirements and best practices for unified, rigorous, valid, and
cost-sensitive few-shot NLP evaluation. These principles include Sample Size
Design, a novel approach to benchmark design that optimizes statistical
accuracy and precision while keeping evaluation costs manageable. Following the
principles, we release the FLEX benchmark, which includes four few-shot
transfer settings, zero-shot evaluation, and a public leaderboard that covers
diverse NLP tasks. In addition, we present UniFew, a prompt-based model for
few-shot learning that unifies pretraining and finetuning prompt formats,
eschewing complex machinery of recent prompt-based approaches in adapting
downstream task formats to language model pretraining objectives. We
demonstrate that despite simplicity, UniFew achieves results competitive with
both popular meta-learning and prompt-based approaches.
"
Wordcraft: a Human-AI Collaborative Editor for Story Writing,Andy Coenen,http://arxiv.org/pdf/2107.07430v1.pdf,2021-07-15,['cs.cl'],2107.07430v1.pdf,"  As neural language models grow in effectiveness, they are increasingly being
applied in real-world settings. However these applications tend to be limited
in the modes of interaction they support. In this extended abstract, we propose
Wordcraft, an AI-assisted editor for story writing in which a writer and a
dialog system collaborate to write a story. Our novel interface uses few-shot
learning and the natural affordances of conversation to support a variety of
interactions. Our editor provides a sandbox for writers to probe the boundaries
of transformer-based language models and paves the way for future
human-in-the-loop training pipelines and novel evaluation methods.
"
Design of a Graphical User Interface for Few-Shot Machine Learning  Classification of Electron Microscopy Data,Christina Doty,http://arxiv.org/pdf/2107.10387v1.pdf,2021-07-21,"['cond-mat.mtrl-sci', 'cs.lg']",2107.10387v1.pdf,"  The recent growth in data volumes produced by modern electron microscopes
requires rapid, scalable, and flexible approaches to image segmentation and
analysis. Few-shot machine learning, which can richly classify images from a
handful of user-provided examples, is a promising route to high-throughput
analysis. However, current command-line implementations of such approaches can
be slow and unintuitive to use, lacking the real-time feedback necessary to
perform effective classification. Here we report on the development of a
Python-based graphical user interface that enables end users to easily conduct
and visualize the output of few-shot learning models. This interface is
lightweight and can be hosted locally or on the web, providing the opportunity
to reproducibly conduct, share, and crowd-source few-shot analyses.
"
Noisy Channel Language Model Prompting for Few-Shot Text Classification,Sewon Min,http://arxiv.org/pdf/2108.04106v3.pdf,2021-08-09,"['cs.cl', 'cs.ai']",2108.04106v3.pdf,"  We introduce a noisy channel approach for language model prompting in
few-shot text classification. Instead of computing the likelihood of the label
given the input (referred as direct models), channel models compute the
conditional probability of the input given the label, and are thereby required
to explain every word in the input. We use channel models for recently proposed
few-shot learning methods with no or very limited updates to the language model
parameters, via either in-context demonstration or prompt tuning. Our
experiments show that, for both methods, channel models significantly
outperform their direct counterparts, which we attribute to their stability,
i.e., lower variance and higher worst-case accuracy. We also present extensive
ablations that provide recommendations for when to use channel prompt tuning
instead of other competitive methods (e.g., direct head tuning): channel prompt
tuning is preferred when the number of training examples is small, labels in
the training data are imbalanced, or generalization to unseen labels is
required.
"
FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning,Jing Zhou,http://arxiv.org/pdf/2108.06332v2.pdf,2021-08-13,['cs.cl'],2108.06332v2.pdf,"  Most previous methods for text data augmentation are limited to simple tasks
and weak baselines. We explore data augmentation on hard tasks (i.e., few-shot
natural language understanding) and strong baselines (i.e., pretrained models
with over one billion parameters). Under this setting, we reproduced a large
number of previous augmentation methods and found that these methods bring
marginal gains at best and sometimes substantially degrade performance. To address
this challenge, we propose a novel data augmentation method FlipDA that jointly
uses a generative model and a classifier to generate label-flipped data.
Central to the idea of FlipDA is the discovery that generating label-flipped
data is more crucial to the performance than generating label-preserved data.
Experiments show that FlipDA achieves a good tradeoff between effectiveness and
robustness -- it substantially improves many tasks while not negatively
affecting the others.
"
On the Multilingual Capabilities of Very Large-Scale English Language  Models,Jordi Armengol-Estapé,http://arxiv.org/pdf/2108.13349v1.pdf,2021-08-30,"['cs.cl', 'cs.ai']",2108.13349v1.pdf,"  Generative Pre-trained Transformers (GPTs) have recently been scaled to
unprecedented sizes in the history of machine learning. These models, solely
trained on the language modeling objective, have been shown to exhibit
outstanding few-shot learning capabilities in a number of different tasks.
Nevertheless, aside from anecdotal experiences, little is known regarding their
multilingual capabilities, given the fact that the pre-training corpus is
almost entirely composed of English text. In this work, we investigate the
multilingual skills of GPT-3, focusing on one language that barely appears in
the pre-training corpus, Catalan, which makes the results especially
meaningful; we assume that our results may be relevant for other languages as
well. We find that the model shows an outstanding performance, particularly in
generative tasks, with predictable limitations mostly in language understanding
tasks but still with remarkable results given the zero-shot scenario. We
investigate its potential and limits in extractive question-answering and
natural language generation, as well as the effect of scale in terms of model
size.
"
Want To Reduce Labeling Cost? GPT-3 Can Help,Shuohang Wang,http://arxiv.org/pdf/2108.13487v1.pdf,2021-08-30,"['cs.cl', 'cs.ai']",2108.13487v1.pdf,"  Data annotation is a time-consuming and labor-intensive process for many NLP
tasks. Although there exist various methods to produce pseudo data labels, they
are often task-specific and require a decent amount of labeled data to start
with. Recently, the immense language model GPT-3 with 175 billion parameters
has achieved tremendous improvement across many few-shot learning tasks. In
this paper, we explore ways to leverage GPT-3 as a low-cost data labeler to
train other models. We find that, to make the downstream model achieve the same
performance on a variety of NLU and NLG tasks, it costs 50% to 96% less to use
labels from GPT-3 than using labels from humans. Furthermore, we propose a
novel framework of combining pseudo labels from GPT-3 with human labels, which
leads to even better performance with limited labeling budget. These results
present a cost-effective data labeling methodology that is generalizable to
many practical applications.
"
ConQX: Semantic Expansion of Spoken Queries for Intent Detection based  on Conditioned Text Generation,Eyup Halit Yilmaz,http://arxiv.org/pdf/2109.00729v1.pdf,2021-09-02,"['cs.cl', 'cs.ai']",2109.00729v1.pdf,"  Intent detection of spoken queries is a challenging task due to their noisy
structure and short length. To provide additional information regarding the
query and enhance the performance of intent detection, we propose a method for
semantic expansion of spoken queries, called ConQX, which utilizes the text
generation ability of an auto-regressive language model, GPT-2. To avoid
off-topic text generation, we condition the input query to a structured context
with prompt mining. We then apply zero-shot, one-shot, and few-shot learning.
We lastly use the expanded queries to fine-tune BERT and RoBERTa for intent
detection. The experimental results show that the performance of intent
detection can be improved by our semantic expansion method.
"
Do Prompt-Based Models Really Understand the Meaning of their Prompts?,Albert Webson,http://arxiv.org/pdf/2109.01247v2.pdf,2021-09-02,['cs.cl'],2109.01247v2.pdf,"  Recently, a boom of papers has shown extraordinary progress in zero-shot and
few-shot learning with various prompt-based models. It is commonly argued that
prompts help models to learn faster in the same way that humans learn faster
when provided with task instructions expressed in natural language. In this
study, we experiment with over 30 prompt templates manually written for natural
language inference (NLI). We find that models learn just as fast with many
prompts that are intentionally irrelevant or even pathologically misleading as
they do with instructively ""good"" prompts. Further, such patterns hold even for
models as large as 175 billion parameters (Brown et al., 2020) as well as the
recently proposed instruction-tuned models which are trained on hundreds of
prompts (Sanh et al., 2022). That is, instruction-tuned models often produce
good predictions with irrelevant and misleading prompts even at zero shots. In
sum, notwithstanding prompt-based models' impressive improvement, we find
evidence of serious limitations that question the degree to which such
improvement is derived from models understanding task instructions in ways
analogous to humans' use of task instructions.
"
Learning Opinion Summarizers by Selecting Informative Reviews,Arthur Bražinskas,http://arxiv.org/pdf/2109.04325v1.pdf,2021-09-09,"['cs.cl', 'cs.ai', 'cs.lg']",2109.04325v1.pdf,"  Opinion summarization has been traditionally approached with unsupervised,
weakly-supervised and few-shot learning techniques. In this work, we collect a
large dataset of summaries paired with user reviews for over 31,000 products,
enabling supervised training. However, the number of reviews per product is
large (320 on average), making summarization - and especially training a
summarizer - impractical. Moreover, the content of many reviews is not
reflected in the human-written summaries, and, thus, the summarizer trained on
random review subsets hallucinates. In order to deal with both of these
challenges, we formulate the task as jointly learning to select informative
subsets of reviews and summarizing the opinions expressed in these subsets. The
choice of the review subset is treated as a latent variable, predicted by a
small and simple selector. The subset is then fed into a more powerful
summarizer. For joint training, we use amortized variational inference and
policy gradient methods. Our experiments demonstrate the importance of
selecting informative reviews resulting in improved quality of summaries and
reduced hallucinations.
"
STraTA: Self-Training with Task Augmentation for Better Few-shot  Learning,Tu Vu,http://arxiv.org/pdf/2109.06270v2.pdf,2021-09-13,['cs.cl'],2109.06270v2.pdf,"  Despite their recent successes in tackling many NLP tasks, large-scale
pre-trained language models do not perform as well in few-shot settings where
only a handful of training examples are available. To address this shortcoming,
we propose STraTA, which stands for Self-Training with Task Augmentation, an
approach that builds on two key ideas for effective leverage of unlabeled data.
First, STraTA uses task augmentation, a novel technique that synthesizes a
large amount of data for auxiliary-task fine-tuning from target-task unlabeled
texts. Second, STraTA performs self-training by further fine-tuning the strong
base model created by task augmentation on a broad distribution of
pseudo-labeled data. Our experiments demonstrate that STraTA can substantially
improve sample efficiency across 12 few-shot benchmarks. Remarkably, on the
SST-2 sentiment dataset, STraTA, with only 8 training examples per class,
achieves comparable results to standard fine-tuning with 67K training examples.
Our analyses reveal that task augmentation and self-training are both
complementary and independently effective.
"
Few-Shot Emotion Recognition in Conversation with Sequential  Prototypical Networks,Gaël Guibon,http://arxiv.org/pdf/2109.09366v1.pdf,2021-09-20,"['cs.cl', 'cs.lg']",2109.09366v1.pdf,"  Several recent studies on dyadic human-human interactions have been done on
conversations without specific business objectives. However, many companies
might benefit from studies dedicated to more precise environments such as after
sales services or customer satisfaction surveys. In this work, we place
ourselves in the scope of a live chat customer service in which we want to
detect emotions and their evolution in the conversation flow. This context
leads to multiple challenges that range from exploiting restricted, small and
mostly unlabeled datasets to finding and adapting methods for such a context. We
tackle these challenges by using Few-Shot Learning while making the hypothesis
it can serve conversational emotion classification for different languages and
sparse labels. We contribute by proposing a variation of Prototypical Networks
for sequence labeling in conversation that we name ProtoSeq. We test this
method on two datasets with different languages: daily conversations in English
and customer service chat conversations in French. When applied to emotion
classification in conversations, our method proved to be competitive even when
compared to other ones.
"
UserIdentifier: Implicit User Representations for Simple and Effective  Personalized Sentiment Analysis,Fatemehsadat Mireshghallah,http://arxiv.org/pdf/2110.00135v2.pdf,2021-10-01,"['cs.lg', 'cs.ai', 'cs.cl']",2110.00135v2.pdf,"  Global models are trained to be as generalizable as possible, with user
invariance considered desirable since the models are shared across multitudes
of users. As such, these models are often unable to produce personalized
responses for individual users, based on their data. Contrary to widely-used
personalization techniques based on few-shot learning, we propose
UserIdentifier, a novel scheme for training a single shared model for all
users. Our approach produces personalized responses by adding fixed,
non-trainable user identifiers to the input data. We empirically demonstrate
that this proposed method outperforms the prefix-tuning based state-of-the-art
approach by up to 13%, on a suite of sentiment analysis datasets. We also show
that, unlike prior work, this method needs neither any additional model
parameters nor any extra rounds of few-shot fine-tuning.
"
Instance-aware Prompt Learning for Language Understanding and Generation,Feihu Jin,http://arxiv.org/pdf/2201.07126v1.pdf,2022-01-18,['cs.cl'],2201.07126v1.pdf,"  Recently, prompt learning has become a new paradigm to utilize pre-trained
language models (PLMs) and achieves promising results in downstream tasks with
a negligible increase of parameters. The current usage of discrete and
continuous prompts assumes that the prompt is fixed for a specific task and all
samples in the task share the same prompt. However, a task may contain quite
diverse samples in which some are easy and others are difficult, and diverse
prompts are desirable. In this paper, we propose an instance-aware prompt
learning method that learns a different prompt for each instance. Specifically,
we suppose that each learnable prompt token has a different contribution to
different instances, and we learn the contribution by calculating the relevance
score between an instance and each prompt token. The contribution weighted
prompt would be instance aware. We apply our method to both unidirectional and
bidirectional PLMs on both language understanding and generation tasks.
Extensive experiments demonstrate that our method obtains considerable
improvements compared to strong baselines. Especially, our method achieves the
state-of-the-art on the SuperGLUE few-shot learning benchmark.
"
Generating Training Data with Language Models: Towards Zero-Shot  Language Understanding,Yu Meng,http://arxiv.org/pdf/2202.04538v2.pdf,2022-02-09,"['cs.cl', 'cs.lg']",2202.04538v2.pdf,"  Pretrained language models (PLMs) have demonstrated remarkable performance in
various natural language processing tasks: Unidirectional PLMs (e.g., GPT) are
well known for their superior text generation capabilities; bidirectional PLMs
(e.g., BERT) have been the prominent choice for natural language understanding
(NLU) tasks. While both types of models have achieved promising few-shot
learning performance, their potential for zero-shot learning has been
underexplored. In this paper, we present a simple approach that uses both types
of PLMs for fully zero-shot learning of NLU tasks without requiring any
task-specific data: A unidirectional PLM generates class-conditioned texts
guided by prompts, which are used as the training data for fine-tuning a
bidirectional PLM. With quality training data selected based on the generation
probability and regularization techniques (label smoothing and temporal
ensembling) applied to the fine-tuning stage for better generalization and
stability, our approach demonstrates strong performance across seven
classification tasks of the GLUE benchmark (e.g., 72.3/73.8 on MNLI-m/mm and
92.8 on SST-2), significantly outperforming zero-shot prompting methods and
achieving even comparable results to strong few-shot approaches using 32
training samples per class.
"
Variational Autoencoder with Disentanglement Priors for Low-Resource  Task-Specific Natural Language Generation,Zhuang Li,http://arxiv.org/pdf/2202.13363v3.pdf,2022-02-27,['cs.cl'],2202.13363v3.pdf,"  In this paper, we propose a variational autoencoder with disentanglement
priors, VAE-DPRIOR, for task-specific natural language generation with none or
a handful of task-specific labeled examples. In order to tackle compositional
generalization across tasks, our model performs disentangled representation
learning by introducing a conditional prior for the latent content space and
another conditional prior for the latent label space. Both types of priors
satisfy a novel property called $\epsilon$-disentangled. We show both
empirically and theoretically that the novel priors can disentangle
representations even without specific regularizations as in the prior work. The
content prior enables directly sampling diverse content representations from
the content space learned from the seen tasks, and fuse them with the
representations of novel tasks for generating semantically diverse texts in the
low-resource settings. Our extensive experiments demonstrate the superior
performance of our model over competitive baselines in terms of i) data
augmentation in continuous zero/few-shot learning, and ii) text style transfer
in the few-shot setting.
"
ClarET: Pre-training a Correlation-Aware Context-To-Event Transformer  for Event-Centric Generation and Classification,Yucheng Zhou,http://arxiv.org/pdf/2203.02225v2.pdf,2022-03-04,['cs.cl'],2203.02225v2.pdf,"  Generating new events given context with correlated ones plays a crucial role
in many event-centric reasoning tasks. Existing works either limit their scope
to specific scenarios or overlook event-level correlations. In this paper, we
propose to pre-train a general Correlation-aware context-to-Event Transformer
(ClarET) for event-centric reasoning. To achieve this, we propose three novel
event-centric objectives, i.e., whole event recovering, contrastive
event-correlation encoding and prompt-based event locating, which highlight
event-level correlations with effective training. The proposed ClarET is
applicable to a wide range of event-centric reasoning scenarios, considering
its versatility of (i) event-correlation types (e.g., causal, temporal,
contrast), (ii) application formulations (i.e., generation and classification),
and (iii) reasoning types (e.g., abductive, counterfactual and ending
reasoning). Empirical fine-tuning results, as well as zero- and few-shot
learning, on 9 benchmarks (5 generation and 4 classification tasks covering 4
reasoning types with diverse event correlations), verify its effectiveness and
generalization ability.
"
Pre-trained Token-replaced Detection Model as Few-shot Learner,Zicheng Li,http://arxiv.org/pdf/2203.03235v2.pdf,2022-03-07,"['cs.cl', 'cs.ai']",2203.03235v2.pdf,"  Pre-trained masked language models have demonstrated remarkable ability as
few-shot learners. In this paper, as an alternative, we propose a novel
approach to few-shot learning with pre-trained token-replaced detection models
like ELECTRA. In this approach, we reformulate a classification or a regression
task as a token-replaced detection problem. Specifically, we first define a
template and label description words for each task and put them into the input
to form a natural language prompt. Then, we employ the pre-trained
token-replaced detection model to predict which label description word is the
most original (i.e., least replaced) among all label description words in the
prompt. A systematic evaluation on 16 datasets demonstrates that our approach
outperforms few-shot learners with pre-trained masked language models in both
one-sentence and two-sentence learning tasks.
"
InstructionNER: A Multi-Task Instruction-Based Generative Framework for  Few-shot NER,Liwen Wang,http://arxiv.org/pdf/2203.03903v1.pdf,2022-03-08,['cs.cl'],2203.03903v1.pdf,"  Recently, prompt-based methods have achieved significant performance in
few-shot learning scenarios by bridging the gap between language model
pre-training and fine-tuning for downstream tasks. However, existing prompt
templates are mostly designed for sentence-level tasks and are inappropriate
for sequence labeling objectives. To address the above issue, we propose a
multi-task instruction-based generative framework, named InstructionNER, for
low-resource named entity recognition. Specifically, we reformulate the NER
task as a generation problem, which enriches source sentences with
task-specific instructions and answer options, then infers the entities and
types in natural language. We further propose two auxiliary tasks, including
entity extraction and entity typing, which enable the model to capture more
boundary information of entities and deepen the understanding of entity type
semantics, respectively. Experimental results show that our method consistently
outperforms other baselines on five datasets in few-shot settings.
"
Prototypical Verbalizer for Prompt-based Few-shot Tuning,Ganqu Cui,http://arxiv.org/pdf/2203.09770v1.pdf,2022-03-18,"['cs.cl', 'cs.lg']",2203.09770v1.pdf,"  Prompt-based tuning for pre-trained language models (PLMs) has shown its
effectiveness in few-shot learning. Typically, prompt-based tuning wraps the
input text into a cloze question. To make predictions, the model maps the
output words to labels via a verbalizer, which is either manually designed or
automatically built. However, manual verbalizers heavily depend on
domain-specific prior knowledge and human efforts, while finding appropriate
label words automatically still remains challenging. In this work, we propose
the prototypical verbalizer (ProtoVerb) which is built directly from training
data. Specifically, ProtoVerb learns prototype vectors as verbalizers by
contrastive learning. In this way, the prototypes summarize training instances
and are able to enclose rich class-level semantics. We conduct experiments on
both topic classification and entity typing tasks, and the results demonstrate
that ProtoVerb significantly outperforms current automatic verbalizers,
especially when training data is extremely scarce. More surprisingly, ProtoVerb
consistently boosts prompt-based tuning even on untuned PLMs, indicating an
elegant non-tuning way to utilize PLMs. Our code is available at
https://github.com/thunlp/OpenPrompt.
"
Few-Shot Learning with Siamese Networks and Label Tuning,Thomas Müller,http://arxiv.org/pdf/2203.14655v2.pdf,2022-03-28,"['cs.cl', 'cs.lg']",2203.14655v2.pdf,"  We study the problem of building text classifiers with little or no training
data, commonly known as zero and few-shot text classification. In recent years,
an approach based on neural textual entailment models has been found to give
strong results on a diverse range of tasks. In this work, we show that with
proper pre-training, Siamese Networks that embed texts and labels offer a
competitive alternative. These models allow for a large reduction in inference
cost: constant in the number of labels rather than linear. Furthermore, we
introduce label tuning, a simple and computationally efficient approach that
allows adapting the models in a few-shot setup by only changing the label
embeddings. While giving lower performance than model fine-tuning, this
approach has the architectural advantage that a single encoder can be shared by
many different tasks.
"
Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging,Yutai Hou,http://arxiv.org/pdf/2204.00885v1.pdf,2022-04-02,"['cs.cl', 'cs.ai']",2204.00885v1.pdf,"  Prompting methods recently achieve impressive success in few-shot learning.
These methods modify input samples with prompt sentence pieces, and decode
label tokens to map samples to corresponding labels. However, such a paradigm
is very inefficient for the task of slot tagging. Since slot tagging samples
are multiple consecutive words in a sentence, the prompting methods have to
enumerate all n-grams token spans to find all the possible slots, which greatly
slows down the prediction. To tackle this, we introduce an inverse paradigm for
prompting. Different from the classic prompts mapping tokens to labels, we
reversely predict slot values given slot types. Such inverse prompting only
requires a one-turn prediction for each slot type and greatly speeds up the
prediction. Besides, we propose a novel Iterative Prediction Strategy, from
which the model learns to refine predictions by considering the relations
between different slot types. We find, somewhat surprisingly, the proposed
method not only predicts faster but also significantly improves performance
(by over 6.1 F1 points in the 10-shot setting) and achieves new
state-of-the-art performance.
"
Leveraging pre-trained language models for conversational information  seeking from text,Patrizio Bellan,http://arxiv.org/pdf/2204.03542v1.pdf,2022-03-31,"['cs.cl', 'cs.ai']",2204.03542v1.pdf,"  Recent advances in Natural Language Processing, and in particular on the
construction of very large pre-trained language representation models, are
opening up new perspectives on the construction of conversational information
seeking (CIS) systems. In this paper we investigate the usage of in-context
learning and pre-trained language representation models to address the problem
of information extraction from process description documents, in an incremental
question and answering oriented fashion. In particular we investigate the usage
of the native GPT-3 (Generative Pre-trained Transformer 3) model, together with
two in-context learning customizations that inject conceptual definitions and a
limited number of samples in a few shot-learning fashion. The results highlight
the potential of the approach and the usefulness of the in-context learning
customizations, which can substantially contribute to addressing the ""training
data challenge"" of deep learning based NLP techniques in the BPM field. They also
highlight the challenge posed by control flow relations, for which further
training needs to be devised.
"
MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text  Classification,Jianhai Zhang,http://arxiv.org/pdf/2204.04952v3.pdf,2022-04-11,['cs.cl'],2204.04952v3.pdf,"  Text classification struggles to generalize to unseen classes with very few
labeled text instances per class. In such a few-shot learning (FSL) setting,
metric-based meta-learning approaches have shown promising results. Previous
studies mainly aim to derive a prototype representation for each class.
However, they neglect that it is challenging-yet-unnecessary to construct a
compact representation which expresses the entire meaning for each class. They
also ignore the importance of capturing the inter-dependency between the query and
the support set for few-shot text classification. To deal with these issues, we
propose a meta-learning based method MGIMN which performs instance-wise
comparison followed by aggregation to generate class-wise matching vectors
instead of prototype learning. The key of instance-wise comparison is the
interactive matching within the class-specific context and episode-specific
context. Extensive experiments demonstrate that the proposed method
significantly outperforms the existing state-of-the-art approaches, under both
the standard FSL and generalized FSL settings.
"
Zero and Few-shot Learning for Author Profiling,Mara Chinea-Rios,http://arxiv.org/pdf/2204.10543v2.pdf,2022-04-22,['cs.cl'],2204.10543v2.pdf,"  Author profiling classifies author characteristics by analyzing how language
is shared among people. In this work, we study that task from a low-resource
viewpoint: using little or no training data. We explore different zero and
few-shot models based on entailment and evaluate our systems on several
profiling tasks in Spanish and English. In addition, we study the effect of
both the entailment hypothesis and the size of the few-shot training sample. We
find that entailment-based models outperform supervised text classifiers based
on roberta-XLM and that we can reach 80% of the accuracy of previous approaches
using less than 50% of the training data on average.
"
Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce  Data Annotation Required in Visual Commonsense Tasks,Navid Rezaei,http://arxiv.org/pdf/2204.11922v1.pdf,2022-04-25,"['cs.cl', 'cs.ai']",2204.11922v1.pdf,"  Pre-trained language models have shown excellent results in few-shot learning
scenarios using in-context learning. Although it is impressive, the size of
language models can be prohibitive to make them usable in on-device
applications, such as sensors or smartphones. With smaller language models,
task-specific data annotation is needed to fine-tune the language model for a
specific purpose. However, data annotation can have a substantial financial and
time burden for small research groups, startups, and even companies. In this
paper, we analyze different prompt-based fine-tuning techniques to improve
results on both language and multimodal causal transformer models. To evaluate
our results, we use a dataset focusing on visual commonsense reasoning in time.
Our results show that by simple model-agnostic prompt-based fine-tuning,
comparable results can be reached by only using 35%-40% of the fine-tuning
training dataset. The proposed approaches result in significant time and
financial savings. As the proposed methods make minimal architectural
assumptions, other researchers can use the results in their transformer models
with minimal adaptations. We plan to release the source code freely to make it
easier for the community to use and contribute to our work.
"
Building a Role Specified Open-Domain Dialogue System Leveraging  Large-Scale Language Models,Sanghwan Bae,http://arxiv.org/pdf/2205.00176v1.pdf,2022-04-30,['cs.cl'],2205.00176v1.pdf,"  Recent open-domain dialogue models have brought numerous breakthroughs.
However, building a chat system is not scalable since it often requires a
considerable volume of human-human dialogue data, especially when enforcing
features such as persona, style, or safety. In this work, we study the
challenge of imposing roles on open-domain dialogue systems, with the goal of
making the systems maintain consistent roles while conversing naturally with
humans. To accomplish this, the system must satisfy a role specification that
includes certain conditions on the stated features as well as a system policy
on whether or not certain types of utterances are allowed. For this, we propose
an efficient data collection framework leveraging in-context few-shot learning
of large-scale language models for building a role-satisfying dialogue dataset
from scratch. We then compare various architectures for open-domain dialogue
systems in terms of meeting role specifications while maintaining
conversational abilities. Automatic and human evaluations show that our models
return few out-of-bounds utterances, keeping competitive performance on general
metrics. We release a Korean dialogue dataset we built for further research.
"
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language  Processing,Chengyu Wang,http://arxiv.org/pdf/2205.00258v2.pdf,2022-04-30,['cs.cl'],2205.00258v2.pdf,"  The success of Pre-Trained Models (PTMs) has reshaped the development of
Natural Language Processing (NLP). Yet, it is not easy to obtain
high-performing models and deploy them online for industrial practitioners. To
bridge this gap, EasyNLP is designed to make it easy to build NLP applications,
which supports a comprehensive suite of NLP algorithms. It further features
knowledge-enhanced pre-training, knowledge distillation and few-shot learning
functionalities for large-scale PTMs, and provides a unified framework of model
training, inference and deployment for real-world applications. Currently,
EasyNLP has powered over ten business units within Alibaba Group and is
seamlessly integrated into the Platform of AI (PAI) products on Alibaba Cloud.
The source code of our EasyNLP toolkit is released at GitHub
(https://github.com/alibaba/EasyNLP).
"
POLITICS: Pretraining with Same-story Article Comparison for Ideology  Prediction and Stance Detection,Yujian Liu,http://arxiv.org/pdf/2205.00619v1.pdf,2022-05-02,['cs.cl'],2205.00619v1.pdf,"  Ideology is at the core of political science research. Yet, there still do
not exist general-purpose tools to characterize and predict ideology across
different genres of text. To this end, we study Pretrained Language Models
using novel ideology-driven pretraining objectives that rely on the comparison
of articles on the same story written by media of different ideologies. We
further collect a large-scale dataset, consisting of more than 3.6M political
news articles, for pretraining. Our model POLITICS outperforms strong baselines
and the previous state-of-the-art models on ideology prediction and stance
detection tasks. Further analyses show that POLITICS is especially good at
understanding long or formally written texts, and is also robust in few-shot
learning scenarios.
"
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive  Question Answering,Jianing Wang,http://arxiv.org/pdf/2205.03071v1.pdf,2022-05-06,"['cs.cl', 'cs.ai']",2205.03071v1.pdf,"  Extractive Question Answering (EQA) is one of the most important tasks in
Machine Reading Comprehension (MRC), which can be solved by fine-tuning the
span selecting heads of Pre-trained Language Models (PLMs). However, most
existing approaches for MRC may perform poorly in the few-shot learning
scenario. To solve this issue, we propose a novel framework named Knowledge
Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to
PLMs, we introduce a new paradigm for EQA that transforms the task into a
non-autoregressive Masked Language Modeling (MLM) generation problem.
Simultaneously, rich semantics from the external knowledge base (KB) and the
passage context are used to enhance the representations of the query. In
addition, to boost the performance of PLMs, we jointly train the model by the
MLM and contrastive learning objectives. Experiments on multiple benchmarks
demonstrate that our method consistently outperforms state-of-the-art
approaches in few-shot settings by a large margin.
"
ProQA: Structural Prompt-based Pre-training for Unified Question  Answering,Wanjun Zhong,http://arxiv.org/pdf/2205.04040v2.pdf,2022-05-09,['cs.cl'],2205.04040v2.pdf,"  Question Answering (QA) is a longstanding challenge in natural language
processing. Existing QA works mostly focus on specific question types,
knowledge domains, or reasoning skills. The specialty in QA research hinders
systems from modeling commonalities between tasks and generalization for wider
applications. To address this issue, we present ProQA, a unified QA paradigm
that solves various tasks through a single model. ProQA takes a unified
structural prompt as the bridge and improves the QA-centric ability by
structural prompt-based pre-training. Through a structurally designed
prompt-based input schema, ProQA concurrently models the knowledge
generalization for all QA tasks while keeping the knowledge customization for
every specific QA task. Furthermore, ProQA is pre-trained with structural
prompt-formatted large-scale synthesized corpus, which empowers the model with
the commonly-required QA ability. Experimental results on 11 QA benchmarks
demonstrate that ProQA consistently boosts performance in full-data
fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore,
ProQA exhibits strong ability in both continual learning and transfer learning
by taking advantage of the structural prompt.
"
ALLSH: Active Learning Guided by Local Sensitivity and Hardness,Shujian Zhang,http://arxiv.org/pdf/2205.04980v2.pdf,2022-05-10,"['cs.cl', 'cs.ai', 'cs.lg']",2205.04980v2.pdf,"  Active learning, which effectively collects informative unlabeled data for
annotation, reduces the demand for labeled data. In this work, we propose to
retrieve unlabeled samples with a local sensitivity and hardness-aware
acquisition function. The proposed method generates data copies through local
perturbations and selects data points whose predictive likelihoods diverge the
most from their copies. We further empower our acquisition function by
injecting the select-worst case perturbation. Our method achieves consistent
gains over the commonly used active learning strategies in various
classification tasks. Furthermore, we observe consistent improvements over the
baselines on the study of prompt selection in prompt-based few-shot learning.
These experiments demonstrate that our acquisition guided by local sensitivity
and hardness can be effective and beneficial for many NLP tasks.
"
Prototypical Calibration for Few-shot Learning of Language Models,Zhixiong Han,http://arxiv.org/pdf/2205.10183v2.pdf,2022-05-20,['cs.cl'],2205.10183v2.pdf,"  In-context learning of GPT-like models has been recognized as fragile across
different hand-crafted templates, and demonstration permutations. In this work,
we propose prototypical calibration to adaptively learn a more robust decision
boundary for zero- and few-shot classification, instead of greedy decoding.
Concretely, our method first adopts Gaussian mixture distribution to estimate
the prototypical clusters for all categories. Then we assign each cluster to
the corresponding label by solving a weighted bipartite matching problem. Given
an example, its prediction is calibrated by the likelihood of prototypical
clusters. Experimental results show that prototypical calibration yields a
substantial improvement on a diverse set of tasks. Extensive analysis across
different scales also indicates that our method calibrates the decision
boundary as expected, greatly improving the robustness of GPT to templates,
permutations, and class imbalance.
"
BBTv2: Towards a Gradient-Free Future with Large Language Models,Tianxiang Sun,http://arxiv.org/pdf/2205.11200v2.pdf,2022-05-23,"['cs.cl', 'cs.ai']",2205.11200v2.pdf,"  Most downstream adaptation methods tune all or part of the parameters of
pre-trained models (PTMs) through gradient descent, where the tuning cost
increases linearly with the growth of the model size. By contrast,
gradient-free methods only require the forward computation of the PTM to tune
the prompt, retaining the benefits of efficient tuning and deployment. However,
past work on gradient-free tuning often introduces gradient descent to seek a
good initialization of prompt and lacks versatility across tasks and PTMs. In
this paper, we present BBTv2, an improved version of Black-Box Tuning, to drive
PTMs for few-shot learning. We prepend continuous prompts to every layer of the
PTM and propose a divide-and-conquer gradient-free algorithm to optimize the
prompts at different layers alternately. Extensive experiments across various
tasks and PTMs show that BBTv2 can achieve comparable performance to full model
tuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA,
BitFit, etc.) under few-shot settings while maintaining much fewer tunable
parameters.
"
Zero-Shot and Few-Shot Learning for Lung Cancer Multi-Label  Classification using Vision Transformer,Fu-Ming Guo,http://arxiv.org/pdf/2205.15290v2.pdf,2022-05-30,"['cs.cv', 'cs.ai', 'cs.lg']",2205.15290v2.pdf,"  Lung cancer is the leading cause of cancer-related death worldwide. Lung
adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most
common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is
an essential tool for lung cancer diagnosis. Pathologists make classifications
according to the dominant subtypes. Although morphology remains the standard
for diagnosis, significant tools need to be developed to elucidate the
diagnosis. In our study, we utilize the pre-trained Vision Transformer (ViT)
model to classify multi-label lung cancer on histologic slices (from the
LC25000 dataset), in both Zero-Shot and Few-Shot settings. Then we compare the
performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall,
sensitivity and specificity. Our study shows that the pre-trained ViT model
performs well in the Zero-Shot setting, achieves competitive accuracy ($99.87\%$)
in the Few-Shot setting (epoch = 1), and reaches an optimal result ($100.00\%$ on
both the validation set and test set) in the Few-Shot setting (epoch = 5).
"
Neural Prompt Search,Yuanhan Zhang,http://arxiv.org/pdf/2206.04673v2.pdf,2022-06-09,"['cs.cv', 'cs.ai', 'cs.lg']",2206.04673v2.pdf,"  The size of vision models has grown exponentially over the last few years,
especially after the emergence of Vision Transformer. This has motivated the
development of parameter-efficient tuning methods, such as learning adapter
layers or visual prompt tokens, which allow a tiny portion of model parameters
to be trained whereas the vast majority obtained from pre-training are frozen.
However, designing a proper tuning method is non-trivial: one might need to try
out a lengthy list of design choices, not to mention that each downstream
dataset often requires custom designs. In this paper, we view the existing
parameter-efficient tuning methods as ""prompt modules"" and propose Neural
prOmpt seArcH (NOAH), a novel approach that learns, for large vision models,
the optimal design of prompt modules through a neural architecture search
algorithm, specifically for each downstream dataset. By conducting extensive
experiments on over 20 vision datasets, we demonstrate that NOAH (i) is
superior to individual prompt modules, (ii) has a good few-shot learning
ability, and (iii) is domain-generalizable. The code and models are available
at https://github.com/Davidzhangyuanhan/NOAH.
"
Low Resource Pipeline for Spoken Language Understanding via Weak  Supervision,Ayush Kumar,http://arxiv.org/pdf/2206.10559v1.pdf,2022-06-21,['cs.cl'],2206.10559v1.pdf,"  In Weak Supervised Learning (WSL), a model is trained over noisy labels
obtained from semantic rules and task-specific pre-trained models. Rules offer
limited generalization over tasks and require significant manual efforts while
pre-trained models are available only for limited tasks. In this work, we
propose to utilize prompt-based methods as weak sources to obtain the noisy
labels on unannotated data. We show that task-agnostic prompts are
generalizable and can be used to obtain noisy labels for different Spoken
Language Understanding (SLU) tasks such as sentiment classification, disfluency
detection and emotion classification. These prompts could additionally be
updated to add task-specific contexts, thus providing flexibility to design
task-specific prompts. We demonstrate that prompt-based methods generate
reliable labels for the above SLU tasks and thus can be used as a universal
weak source to train a weak-supervised model (WSM) in absence of labeled data.
Our proposed WSL pipeline trained over prompt-based weak source outperforms
other competitive low-resource benchmarks on zero and few-shot learning by more
than 4% on Macro-F1 on all of the three benchmark SLU datasets. The proposed
method also outperforms a conventional rule based WSL pipeline by more than 5%
on Macro-F1.
"
Prompting Decision Transformer for Few-Shot Policy Generalization,Mengdi Xu,http://arxiv.org/pdf/2206.13499v1.pdf,2022-06-27,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ro']",2206.13499v1.pdf,"  Humans can leverage prior experience and learn novel tasks from a handful of
demonstrations. In contrast to offline meta-reinforcement learning, which aims
to achieve quick adaptation through better algorithm design, we investigate the
effect of architecture inductive bias on the few-shot learning capability. We
propose a Prompt-based Decision Transformer (Prompt-DT), which leverages the
sequential modeling ability of the Transformer architecture and the prompt
framework to achieve few-shot adaptation in offline RL. We design the
trajectory prompt, which contains segments of the few-shot demonstrations, and
encodes task-specific information to guide policy generation. Our experiments
in five MuJoCo control benchmarks show that Prompt-DT is a strong few-shot
learner without any extra finetuning on unseen target tasks. Prompt-DT
outperforms its variants and strong meta offline RL baselines by a large margin
with a trajectory prompt containing only a few timesteps. Prompt-DT is also
robust to prompt length changes and can generalize to out-of-distribution (OOD)
environments.
"
Few-shot training LLMs for project-specific code-summarization,Toufique Ahmed,http://arxiv.org/pdf/2207.04237v2.pdf,2022-07-09,"['cs.se', 'cs.lg']",2207.04237v2.pdf,"  Very large language models (LLMs), such as GPT-3 and Codex have achieved
state-of-the-art performance on several natural-language tasks, and show great
promise also for code. A particularly exciting aspect of LLMs is their knack
for few-shot and zero-shot learning: they can learn to perform a task with very
few examples. Few-shotting has particular synergies in software engineering,
where there are a lot of phenomena (identifier names, APIs, terminology, coding
patterns) that are known to be highly project-specific. However,
project-specific data can be quite limited, especially early in the history of
a project; thus the few-shot learning capacity of LLMs might be very relevant.
In this paper, we investigate the use of few-shot training with the very large GPT
(Generative Pre-trained Transformer) Codex model, and find evidence suggesting
that one can significantly surpass state-of-the-art models for
code-summarization, leveraging project-specific training.
"
Convolutional Bypasses Are Better Vision Transformer Adapters,Shibo Jie,http://arxiv.org/pdf/2207.07039v3.pdf,2022-07-14,['cs.cv'],2207.07039v3.pdf,"  The pretrain-then-finetune paradigm has been widely adopted in computer
vision. But as the size of Vision Transformer (ViT) grows exponentially, the
full finetuning becomes prohibitive in view of the heavier storage overhead.
Motivated by parameter-efficient transfer learning (PETL) on language
transformers, recent studies attempt to insert lightweight adaptation modules
(e.g., adapter layers or prompt tokens) to pretrained ViT and only finetune
these modules while the pretrained weights are frozen. However, these modules
were originally proposed to finetune language models and did not take into
account the prior knowledge specifically for visual tasks. In this paper, we
propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation
modules, introducing only a small amount (less than 0.5% of model parameters)
of trainable parameters to adapt the large ViT. Different from other PETL
methods, Convpass benefits from the hard-coded inductive bias of convolutional
layers and thus is more suitable for visual tasks, especially in the low-data
regime. Experimental results on VTAB-1K benchmark and few-shot learning
datasets show that Convpass outperforms current language-oriented adaptation
modules, demonstrating the necessity to tailor vision-oriented adaptation
modules for adapting vision models.
"
STT: Soft Template Tuning for Few-Shot Adaptation,Ping Yu,http://arxiv.org/pdf/2207.08408v1.pdf,2022-07-18,"['cs.cl', 'cs.ai']",2207.08408v1.pdf,"  Prompt tuning has been an extremely effective tool to adapt a pre-trained
model to downstream tasks. However, standard prompt-based methods mainly
consider the case of sufficient data of downstream tasks. It is still unclear
whether the advantage can be transferred to the few-shot regime, where only
limited data are available for each downstream task. Although some works have
demonstrated the potential of prompt-tuning under the few-shot setting, the
mainstream methods via searching discrete prompts or tuning soft prompts with
limited data are still very challenging. Through extensive empirical studies,
we find that there is still a gap between prompt tuning and fully fine-tuning
for few-shot learning. To bridge the gap, we propose a new prompt-tuning
framework, called Soft Template Tuning (STT). STT combines manual and auto
prompts, and treats downstream classification tasks as a masked language
modeling task. Comprehensive evaluation on different settings suggests STT can
close the gap between fine-tuning and prompt-based methods without introducing
additional parameters. Significantly, it can even outperform the time- and
resource-consuming fine-tuning method on sentiment classification tasks.
"
Self-Supervision Can Be a Good Few-Shot Learner,Yuning Lu,http://arxiv.org/pdf/2207.09176v1.pdf,2022-07-19,['cs.cv'],2207.09176v1.pdf,"  Existing few-shot learning (FSL) methods rely on training with a large
labeled dataset, which prevents them from leveraging abundant unlabeled data.
From an information-theoretic perspective, we propose an effective unsupervised
FSL method, learning representations with self-supervision. Following the
InfoMax principle, our method learns comprehensive representations by capturing
the intrinsic structure of the data. Specifically, we maximize the mutual
information (MI) of instances and their representations with a low-bias MI
estimator to perform self-supervised pre-training. Rather than supervised
pre-training focusing on the discriminable features of the seen classes, our
self-supervised model has less bias toward the seen classes, resulting in
better generalization for unseen classes. We explain that supervised
pre-training and self-supervised pre-training are actually maximizing different
MI objectives. Extensive experiments are further conducted to analyze their FSL
performance with various training settings. Surprisingly, the results show that
self-supervised pre-training can outperform supervised pre-training under the
appropriate conditions. Compared with state-of-the-art FSL methods, our
approach achieves comparable performance on widely used FSL benchmarks without
any labels of the base classes.
"
Language Model Cascades,David Dohan,http://arxiv.org/pdf/2207.10342v2.pdf,2022-07-21,"['cs.cl', 'cs.ai']",2207.10342v2.pdf,"  Prompted models have demonstrated impressive few-shot learning abilities.
Repeated interactions at test-time with a single model, or the composition of
multiple models together, further expands capabilities. These compositions are
probabilistic models, and may be expressed in the language of graphical models
with random variables whose values are complex data types such as strings.
Cases with control flow and dynamic structure require techniques from
probabilistic programming, which allow implementing disparate model structures
and inference strategies in a unified language. We formalize several existing
techniques from this perspective, including scratchpads / chain of thought,
verifiers, STaR, selection-inference, and tool use. We refer to the resulting
programs as language model cascades.
"
Few-shot Adaptation Works with UnpredicTable Data,Jun Shern Chan,http://arxiv.org/pdf/2208.01009v2.pdf,2022-08-01,"['cs.cl', 'cs.ai', 'cs.lg']",2208.01009v2.pdf,"  Prior work on language models (LMs) shows that training on a large number of
diverse tasks improves few-shot learning (FSL) performance on new tasks. We
take this to the extreme, automatically extracting 413,299 tasks from internet
tables - orders of magnitude more than the next-largest public datasets.
Finetuning on the resulting dataset leads to improved FSL performance on
Natural Language Processing (NLP) tasks, but not proportionally to dataset
scale. In fact, we find that narrow subsets of our dataset sometimes outperform
more diverse datasets. For example, finetuning on software documentation from
support.google.com raises FSL performance by a mean of +7.5% on 52 downstream
tasks, which beats training on 40 human-curated NLP datasets (+6.7%).
Finetuning on various narrow datasets leads to similar broad improvements
across test tasks, suggesting that the gains are not from domain adaptation but
adapting to FSL in general. We do not observe clear patterns between the
datasets that lead to FSL gains, leaving open questions about why certain data
helps with FSL.
"
Robotic Interestingness via Human-Informed Few-Shot Object Detection,Seungchan Kim,http://arxiv.org/pdf/2208.01084v1.pdf,2022-08-01,['cs.ro'],2208.01084v1.pdf,"  Interestingness recognition is crucial for decision making in autonomous
exploration for mobile robots. Previous methods proposed an unsupervised online
learning approach that can adapt to environments and detect interesting scenes
quickly, but lack the ability to adapt to human-informed interesting objects.
To solve this problem, we introduce a human-interactive framework,
AirInteraction, that can detect human-informed objects via few-shot online
learning. To reduce the communication bandwidth, we first apply an online
unsupervised learning algorithm on the unmanned vehicle for interestingness
recognition and then only send the potential interesting scenes to a
base-station for human inspection. The human operator is able to draw and
provide bounding box annotations for particular interesting objects, which are
sent back to the robot to detect similar objects via few-shot learning. Using
only a few human-labeled examples, the robot can learn novel interesting object
categories during the mission and detect interesting scenes that contain the
objects. We evaluate our method on various interesting scene recognition
datasets. To the best of our knowledge, it is the first human-informed few-shot
object detection framework for autonomous exploration.
"
Atlas: Few-shot Learning with Retrieval Augmented Language Models,Gautier Izacard,http://arxiv.org/pdf/2208.03299v3.pdf,2022-08-05,['cs.cl'],2208.03299v3.pdf,"  Large language models have shown impressive few-shot results on a wide range
of tasks. However, when knowledge is key for such results, as is the case for
tasks such as question answering and fact checking, massive parameter counts to
store knowledge seem to be needed. Retrieval augmented models are known to
excel at knowledge intensive tasks without the need for as many parameters, but
it is unclear whether they work in few-shot settings. In this work we present
Atlas, a carefully designed and pre-trained retrieval augmented language model
able to learn knowledge intensive tasks with very few training examples. We
perform evaluations on a wide range of tasks, including MMLU, KILT and
NaturalQuestions, and study the impact of the content of the document index,
showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy
on Natural Questions using only 64 examples, outperforming a 540B parameters
model by 3% despite having 50x fewer parameters.
"
Limits of an AI program for solving college math problems,Ernest Davis,http://arxiv.org/pdf/2208.06906v1.pdf,2022-08-14,['cs.ai'],2208.06906v1.pdf,"  Drori et al. (2022) report that ""A neural network solves, explains, and
generates university math problems by program synthesis and few-shot learning
at human level ... [It] automatically answers 81\% of university-level
mathematics problems."" The system they describe is indeed impressive; however,
the above description is very much overstated. The work of solving the problems
is done, not by a neural network, but by the symbolic algebra package Sympy.
Problems of various formats are excluded from consideration. The so-called
""explanations"" are just rewordings of lines of code. Answers are marked as
correct that are not in the form specified in the problem. Most seriously, it
seems that in many cases the system uses the correct answer given in the test
corpus to guide its path to solving the problem.
"
Efficient Few-Shot Learning Without Prompts,Lewis Tunstall,http://arxiv.org/pdf/2209.11055v1.pdf,2022-09-22,['cs.cl'],2209.11055v1.pdf,"  Recent few-shot methods, such as parameter-efficient fine-tuning (PEFT) and
pattern exploiting training (PET), have achieved impressive results in
label-scarce settings. However, they are difficult to employ since they are
subject to high variability from manually crafted prompts, and typically
require billion-parameter language models to achieve high accuracy. To address
these shortcomings, we propose SetFit (Sentence Transformer Fine-tuning), an
efficient and prompt-free framework for few-shot fine-tuning of Sentence
Transformers (ST). SetFit works by first fine-tuning a pretrained ST on a small
number of text pairs, in a contrastive Siamese manner. The resulting model is
then used to generate rich text embeddings, which are used to train a
classification head. This simple framework requires no prompts or verbalizers,
and achieves high accuracy with orders of magnitude fewer parameters than
existing techniques. Our experiments show that SetFit obtains comparable
results with PEFT and PET techniques, while being an order of magnitude faster
to train. We also show that SetFit can be applied in multilingual settings by
simply switching the ST body. Our code is available at
https://github.com/huggingface/setfit and our datasets at
https://huggingface.co/setfit .
"
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation,Tanay Dixit,http://arxiv.org/pdf/2210.04873v2.pdf,2022-10-10,['cs.cl'],2210.04873v2.pdf,"  Counterfactual data augmentation (CDA) -- i.e., adding minimally perturbed
inputs during training -- helps reduce model reliance on spurious correlations
and improves generalization to out-of-distribution (OOD) data. Prior work on
generating counterfactuals only considered restricted classes of perturbations,
limiting their effectiveness. We present COunterfactual Generation via
Retrieval and Editing (CORE), a retrieval-augmented generation framework for
creating diverse counterfactual perturbations for CDA. For each training
example, CORE first performs a dense retrieval over a task-related unlabeled
text corpus using a learned bi-encoder and extracts relevant counterfactual
excerpts. CORE then incorporates these into prompts to a large language model
with few-shot learning capabilities, for counterfactual editing. Conditioning
language model edits on naturally occurring data results in diverse
perturbations. Experiments on natural language inference and sentiment analysis
benchmarks show that CORE counterfactuals are more effective at improving
generalization to OOD data compared to other DA approaches. We also show that
the CORE retrieval framework can be used to encourage diversity in manually
authored perturbations.
"
Continual Training of Language Models for Few-Shot Learning,Zixuan Ke,http://arxiv.org/pdf/2210.05549v1.pdf,2022-10-11,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.ne']",2210.05549v1.pdf,"  Recent work on applying large language models (LMs) achieves impressive
performance in many NLP applications. Adapting or posttraining an LM using an
unlabeled domain corpus can produce even better performance for end-tasks in
the domain. This paper proposes the problem of continually extending an LM by
incrementally post-training the LM with a sequence of unlabeled domain corpora to
expand its knowledge without forgetting its previous skills. The goal is to
improve the few-shot end-task learning in these domains. The resulting system
is called CPT (Continual PostTraining), which to our knowledge, is the first
continual post-training system. Experimental results verify its effectiveness.
"
Knowledge-grounded Dialog State Tracking,Dian Yu,http://arxiv.org/pdf/2210.06656v1.pdf,2022-10-13,['cs.cl'],2210.06656v1.pdf,"  Knowledge (including structured knowledge such as schema and ontology, and
unstructured knowledge such as web corpus) is a critical part of dialog
understanding, especially for unseen tasks and domains. Traditionally, such
domain-specific knowledge is encoded implicitly into model parameters for the
execution of downstream tasks, which makes training inefficient. In addition,
such models are not easily transferable to new tasks with different schemas. In
this work, we propose to perform dialog state tracking grounded on knowledge
encoded externally. We query relevant knowledge of various forms based on the
dialog context where such information can ground the prediction of dialog
states. We demonstrate superior performance of our proposed method over strong
baselines, especially in the few-shot learning setting.
"
Unified Vision and Language Prompt Learning,Yuhang Zang,http://arxiv.org/pdf/2210.07225v1.pdf,2022-10-13,"['cs.cv', 'cs.ai']",2210.07225v1.pdf,"  Prompt tuning, a parameter- and data-efficient transfer learning paradigm
that tunes only a small number of parameters in a model's input space, has
become a trend in the vision community since the emergence of large
vision-language models like CLIP. We present a systematic study on two
representative prompt tuning methods, namely text prompt tuning and visual
prompt tuning. A major finding is that none of the unimodal prompt tuning
methods performs consistently well: text prompt tuning fails on data with high
intra-class visual variances while visual prompt tuning cannot handle low
inter-class variances. To combine the best from both worlds, we propose a
simple approach called Unified Prompt Tuning (UPT), which essentially learns a
tiny neural network to jointly optimize prompts across different modalities.
Extensive experiments on over 11 vision datasets show that UPT achieves a
better trade-off than the unimodal counterparts on few-shot learning
benchmarks, as well as on domain generalization benchmarks. Code and models
will be released to facilitate future research.
"
"Vision-Language Pre-training: Basics, Recent Advances, and Future Trends",Zhe Gan,http://arxiv.org/pdf/2210.09263v1.pdf,2022-10-17,"['cs.cv', 'cs.cl']",2210.09263v1.pdf,"  This paper surveys vision-language pre-training (VLP) methods for multimodal
intelligence that have been developed in the last few years. We group these
approaches into three categories: ($i$) VLP for image-text tasks, such as image
captioning, image-text retrieval, visual question answering, and visual
grounding; ($ii$) VLP for core computer vision tasks, such as (open-set) image
classification, object detection, and segmentation; and ($iii$) VLP for
video-text tasks, such as video captioning, video-text retrieval, and video
question answering. For each category, we present a comprehensive review of
state-of-the-art methods, and discuss the progress that has been made and
challenges still being faced, using specific systems and models as case
studies. In addition, for each category, we discuss advanced topics being
actively explored in the research community, such as big foundation models,
unified modeling, in-context few-shot learning, knowledge, robustness, and
computer vision in the wild, to name a few.
"
Better Few-Shot Relation Extraction with Label Prompt Dropout,Peiyuan Zhang,http://arxiv.org/pdf/2210.13733v1.pdf,2022-10-25,['cs.cl'],2210.13733v1.pdf,"  Few-shot relation extraction aims to learn to identify the relation between
two entities based on very limited training examples. Recent efforts found that
textual labels (i.e., relation names and relation descriptions) could be
extremely useful for learning class representations, which will benefit the
few-shot learning task. However, what is the best way to leverage such label
information in the learning process is an important research question. Existing
works largely assume such textual labels are always present during both
learning and prediction. In this work, we argue that such approaches may not
always lead to optimal results. Instead, we present a novel approach called
label prompt dropout, which randomly removes label descriptions in the learning
process. Our experiments show that our approach is able to lead to improved
class representations, yielding significantly better results on the few-shot
relation extraction task.
"
STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot  Classification,Jinta Weng,http://arxiv.org/pdf/2210.16489v1.pdf,2022-10-29,"['cs.cl', 'cs.ai']",2210.16489v1.pdf,"  The effectiveness of prompt learning has been demonstrated in different
pre-trained language models. By formulating a suitable template and choosing a
representative label mapping, prompt learning can be used as an efficient
knowledge probe. However, finding a suitable prompt with existing methods requires
multiple experimental attempts or careful vector initialization when
formulating the template and choosing the label mapping, which is
especially common in few-shot learning tasks. Motivated by how PLMs work,
we construct prompts from a task-semantics perspective and thus propose
STPrompt, a Semantic-guided and Task-driven Prompt model.
Specifically, two novel prompts generated from the semantic dependency tree
(Dep-prompt) and from task-specific metadata descriptions (Meta-prompt) are first
constructed in a prompt-augmented pool, and the proposed model then
automatically selects a suitable semantic prompt to drive the prompt
learning process. Our results show that the proposed model achieves
state-of-the-art performance on five different few-shot text
classification datasets, which indicates that more semantically informative
prompts can serve as better knowledge probes.
"
ConsPrompt: Easily Exploiting Contrastive Samples for Few-shot Prompt  Learning,Jinta Weng,http://arxiv.org/pdf/2211.04118v1.pdf,2022-11-08,"['cs.cl', 'cs.ai']",2211.04118v1.pdf,"  Prompt learning has recently become an effective linguistic tool for eliciting
PLMs' knowledge in few-shot tasks. However, studies have shown that a lack
of robustness still exists in prompt learning, since suitable initialization of
continuous prompts and expert-crafted manual prompts are essential to the
fine-tuning process. Moreover, humans also use their comparative ability to
activate their existing knowledge when distinguishing different examples.
Motivated by this, we explore how to use contrastive samples to strengthen
prompt learning. In detail, we first propose our model ConsPrompt, which
combines a prompt encoding network, a contrastive sampling module, and a
contrastive scoring module. Subsequently, two sampling strategies,
similarity-based and label-based, are introduced to realize differential
contrastive learning. The effectiveness of the proposed ConsPrompt is
demonstrated on five different few-shot learning tasks, and the
similarity-based sampling strategy is shown to be more effective than the
label-based one when combined with contrastive learning. Our results also
exhibit state-of-the-art performance and robustness in different few-shot
settings, which suggests that ConsPrompt can serve as a better knowledge
probe for eliciting PLMs' knowledge.
"
Retrieval-Augmented Generative Question Answering for Event Argument  Extraction,Xinya Du,http://arxiv.org/pdf/2211.07067v1.pdf,2022-11-14,['cs.cl'],2211.07067v1.pdf,"  Event argument extraction has long been studied as a sequential prediction
problem with extractive-based methods, tackling each argument in isolation.
Although recent work proposes generation-based methods to capture
cross-argument dependency, they require generating and post-processing a
complicated target sequence (template). Motivated by these observations and by
recent pretrained language models' capabilities of learning from
demonstrations, we propose a retrieval-augmented generative QA model (R-GQA)
for event argument extraction. It retrieves the most similar QA pair and
augments it as a prompt to the current example's context, then decodes the
arguments as answers. Our approach substantially outperforms prior methods
across various settings (i.e., fully supervised, domain transfer, and few-shot
learning). Finally, we propose a clustering-based sampling strategy (JointEnc)
and conduct a thorough analysis of how different strategies influence
few-shot learning performance. The implementations are available at
https://github.com/xinyadu/RGQA
"
ProtSi: Prototypical Siamese Network with Data Augmentation for Few-Shot  Subjective Answer Evaluation,Yining Lu,http://arxiv.org/pdf/2211.09855v1.pdf,2022-11-17,['cs.cl'],2211.09855v1.pdf,"  Subjective answer evaluation is a time-consuming and tedious task, and the
quality of the evaluation is heavily influenced by a variety of subjective
personal characteristics. Instead, machine evaluation can effectively assist
educators in saving time while also ensuring that evaluations are fair and
realistic. However, most existing methods using regular machine learning and
natural language processing techniques are generally hampered by a lack of
annotated answers and poor model interpretability, making them unsuitable for
real-world use. To solve these challenges, we propose ProtSi Network, a unique
semi-supervised architecture that, for the first time, applies few-shot learning to
subjective answer evaluation. To evaluate students' answers by similarity
prototypes, ProtSi Network simulates the natural process of evaluator scoring
answers by combining Siamese Network which consists of BERT and encoder layers
with Prototypical Network. We employed an unsupervised diverse paraphrasing
model ProtAugment, in order to prevent overfitting for effective few-shot text
classification. By integrating contrastive learning, the discriminative text
issue can be mitigated. Experiments on the Kaggle Short Scoring Dataset
demonstrate that the ProtSi Network outperforms the most recent baseline models
in terms of accuracy and quadratic weighted kappa.
"
TEMPERA: Test-Time Prompting via Reinforcement Learning,Tianjun Zhang,http://arxiv.org/pdf/2211.11890v1.pdf,2022-11-21,"['cs.cl', 'cs.ai']",2211.11890v1.pdf,"  Careful prompt design is critical to the use of large language models in
zero-shot or few-shot learning. As a consequence, there is a growing interest
in automated methods to design optimal prompts. In this work, we propose
Test-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast to
prior prompt generation methods, TEMPERA can efficiently leverage prior
knowledge, is adaptive to different queries and provides an interpretable
prompt for every query. To achieve this, we design a novel action space that
allows flexible editing of the initial prompts covering a wide set of
commonly-used components like instructions, few-shot exemplars, and
verbalizers. The proposed method achieves significant gains compared with
recent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across a
variety of tasks including sentiment analysis, topic classification, natural
language inference, and reading comprehension. Our method achieves a 5.33x
average improvement in sample efficiency when compared to traditional
fine-tuning methods.
"
Towards Practical Few-shot Federated NLP,Dongqi Cai,http://arxiv.org/pdf/2212.00192v2.pdf,2022-12-01,"['cs.cl', 'cs.lg']",2212.00192v2.pdf,"  Transformer-based pre-trained models have emerged as the predominant solution
for natural language processing (NLP). Fine-tuning such pre-trained models for
downstream tasks often requires a considerable amount of labeled private data.
In practice, private data is often distributed across heterogeneous mobile
devices and may be prohibited from being uploaded. Moreover, well-curated
labeled data is often scarce, presenting an additional challenge. To address
these challenges, we first introduce a data generator for federated few-shot
learning tasks, which encompasses the quantity and skewness of scarce labeled
data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a
prompt-based federated learning system that exploits abundant unlabeled data
for data augmentation. Our experiments indicate that AUG-FedPrompt can perform
on par with full-set fine-tuning with a limited amount of labeled data.
However, such competitive performance comes at a significant system cost.
"
Few-Shot Nested Named Entity Recognition,Hong Ming,http://arxiv.org/pdf/2212.00953v1.pdf,2022-12-02,"['cs.cl', 'cs.ai']",2212.00953v1.pdf,"  While Named Entity Recognition (NER) is a widely studied task, making
inferences of entities with only a few labeled data has been challenging,
especially for entities with nested structures. Unlike flat entities, entities
and their nested entities are more likely to have similar semantic feature
representations, drastically increasing difficulties in classifying different
entity categories in the few-shot setting. Although prior work has briefly
discussed nested structures in the context of few-shot learning, to our best
knowledge, this paper is the first one specifically dedicated to studying the
few-shot nested NER task. Leveraging contextual dependency to distinguish
nested entities, we propose a Biaffine-based Contrastive Learning (BCL)
framework. We first design a Biaffine span representation module for learning
the contextual span dependency representation for each entity span rather than
only learning its semantic representation. We then merge these two
representations by the residual connection to distinguish nested entities.
Finally, we build a contrastive learning framework to adjust the representation
distribution for larger margin boundaries and more generalized domain transfer
learning ability. We conducted experimental studies on three English, German,
and Russian nested NER datasets. The results show that the BCL outperformed
three baseline models on the 1-shot and 5-shot tasks in terms of F1 score.
"
Improving Few-Shot Performance of Language Models via Nearest Neighbor  Calibration,Feng Nie,http://arxiv.org/pdf/2212.02216v1.pdf,2022-12-05,['cs.cl'],2212.02216v1.pdf,"  Pre-trained language models (PLMs) have exhibited remarkable few-shot
learning capabilities when provided a few examples in a natural language prompt
as demonstrations of test instances, i.e., in-context learning. However, the
performance of in-context learning is susceptible to the choice of prompt
format, training examples and the ordering of the training examples. In this
paper, we propose a novel nearest-neighbor calibration framework for in-context
learning to ease this issue. It is inspired by a phenomenon that the in-context
learning paradigm produces incorrect labels when inferring training instances,
which provides a useful supervised signal to calibrate predictions. Thus, our
method directly augments the predictions with a $k$-nearest-neighbor ($k$NN)
classifier over a datastore of cached few-shot instance representations
obtained by PLMs and their corresponding labels. Then adaptive neighbor
selection and feature regularization modules are introduced to make full use of
a few support instances to reduce the $k$NN retrieval noise. Experiments on
various few-shot text classification tasks demonstrate that our method
significantly improves in-context learning, while even achieving comparable
performance with state-of-the-art tuning-based approaches in some sentiment
analysis tasks.
"
JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset,Ruth-Ann Armstrong,http://arxiv.org/pdf/2212.03419v1.pdf,2022-12-07,"['cs.cl', 'cs.lg', 'i.2.7']",2212.03419v1.pdf,"  JamPatoisNLI provides the first dataset for natural language inference in a
creole language, Jamaican Patois. Many of the most-spoken low-resource
languages are creoles. These languages commonly have a lexicon derived from a
major world language and a distinctive grammar reflecting the languages of the
original speakers and the process of language birth by creolization. This gives
them a distinctive place in exploring the effectiveness of transfer from large
monolingual or multilingual pretrained models. While our work, along with
previous work, shows that transfer from these models to low-resource languages
that are unrelated to languages in their training set is not very effective, we
would expect stronger results from transfer to creoles. Indeed, our experiments
show considerably better results from few-shot learning of JamPatoisNLI than
for such unrelated languages, and help us begin to understand how the unique
relationship between creoles and their high-resource base languages affects
cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring
premises and expert-written hypotheses, is a step towards steering research
into a traditionally underserved language and a useful benchmark for
understanding cross-lingual NLP.
"
Learn to Explore: on Bootstrapping Interactive Data Exploration with  Meta-learning,Yukun Cao,http://arxiv.org/pdf/2212.03423v4.pdf,2022-12-07,"['cs.db', 'cs.ai']",2212.03423v4.pdf,"  Interactive data exploration (IDE) is an effective way of comprehending big
data, whose volume and complexity are beyond human abilities. The main goal of
IDE is to discover user interest regions from a database through multi-rounds
of user labelling. Existing IDEs adopt active-learning framework, where users
iteratively discriminate or label the interestingness of selected tuples. The
process of data exploration can be viewed as the process of training a
classifier, which determines whether a database tuple is interesting to a user.
An efficient exploration thus takes very few iterations of user labelling to
reach the data region of interest. In this work, we consider the data
exploration as the process of few-shot learning, where the classifier is
learned with only a few training examples, or exploration iterations. To this
end, we propose a learning-to-explore framework, based on meta-learning, which
learns how to learn a classifier with automatically generated meta-tasks, so
that the exploration process can be much shortened. Extensive experiments on
real datasets show that our proposal outperforms existing explore-by-example
solutions in terms of accuracy and efficiency.
"
Demystifying Prompts in Language Models via Perplexity Estimation,Hila Gonen,http://arxiv.org/pdf/2212.04037v1.pdf,2022-12-08,['cs.cl'],2212.04037v1.pdf,"  Language models can be prompted to perform a wide variety of zero- and
few-shot learning problems. However, performance varies significantly with the
choice of prompt, and we do not yet understand why this happens or how to pick
the best prompts. In this work, we analyze the factors that contribute to this
variance and establish a new empirical hypothesis: the performance of a prompt
is coupled with the extent to which the model is familiar with the language it
contains. Over a wide range of tasks, we show that the lower the perplexity of
the prompt is, the better the prompt is able to perform the task. As a result,
we devise a method for creating prompts: (1) automatically extend a small seed
set of manually written prompts by paraphrasing using GPT3 and backtranslation
and (2) choose the lowest perplexity prompts to get significant gains in
performance.
"
Technical Report -- Competition Solution for Prompt Tuning using  Pretrained Language Model,Jiang-Long Song,http://arxiv.org/pdf/2212.06369v3.pdf,2022-12-13,['cs.cl'],2212.06369v3.pdf,"  Prompt tuning has recently become a hot spot in the application of large
pretrained language models to specific downstream tasks. Regarding the Language
Model as a Service (LMaaS), black-box tuning using derivative-free optimization
(DFO) provides a novel approach to expand the practical scenarios of pretrained
models and enrich research on few-shot learning. In this report, we
present our solution in this competition that is based on the LMaaS scenario.
Our solution consists of several modifications to BBTv2, including multiple
label words, selection of P0, rolling update strategy, multi-task loss from MLP
classifier, and finally using the ensemble method to further improve
generalization ability. We also shared some strategies that we tried but didn't
use in the final submission for further discussion. In the end we raised a
question about the SNLI dataset and the impact on the results, as well as our
concerns about the competition.
"
Localized Latent Updates for Fine-Tuning Vision-Language Models,Moritz Ibing,http://arxiv.org/pdf/2212.06556v1.pdf,2022-12-13,"['cs.cv', 'cs.cl', 'cs.lg']",2212.06556v1.pdf,"  Although massive pre-trained vision-language models like CLIP show impressive
generalization capabilities for many tasks, it often remains necessary to
fine-tune them for improved performance on specific datasets. When doing so, it
is desirable that updating the model is fast and that the model does not lose
its capabilities on data outside of the dataset, as is often the case with
classical fine-tuning approaches. In this work, we suggest a lightweight
adapter that only updates the model's predictions close to seen data points. We
demonstrate the effectiveness and speed of this relatively simple approach in
the context of few-shot learning, where our results both on classes seen and
unseen during training are comparable with or improve on the state of the art.
"
ALERT: Adapting Language Models to Reasoning Tasks,Ping Yu,http://arxiv.org/pdf/2212.08286v2.pdf,2022-12-16,['cs.cl'],2212.08286v2.pdf,"  Current large language models can perform reasonably well on complex tasks
that require step-by-step reasoning with few-shot learning. Are these models
applying reasoning skills they have learnt during pre-training and reason
outside of their training context, or are they simply memorizing their training
corpus at finer granularity and have learnt to better understand their context?
To tease apart these possibilities, we introduce ALERT, a benchmark and suite
of analyses for assessing language models' reasoning ability comparing
pre-trained and finetuned models on complex tasks that require reasoning skills
to solve. ALERT provides a test bed to assess any language model on fine-grained
reasoning skills, which spans over 20 datasets and covers 10 different
reasoning skills. We leverage ALERT to further investigate the role of
finetuning. With extensive empirical analysis we find that language models
learn more reasoning skills such as textual entailment, abductive reasoning,
and analogical reasoning during the finetuning stage compared to the pretraining
stage. We also find that when language models are finetuned, they tend to
overfit to the prompt template, which hurts model robustness and causes
generalization problems.
"
Learning from Taxonomy: Multi-label Few-Shot Classification for Everyday  Sound Recognition,Jinhua Liang,http://arxiv.org/pdf/2212.08952v1.pdf,2022-12-17,"['cs.sd', 'eess.as']",2212.08952v1.pdf,"  Everyday sound recognition aims to infer types of sound events in audio
streams. While many works succeeded in training models with high performance in
a fully-supervised manner, they are still restricted by the demand for large
quantities of labelled data and the range of predefined classes. To overcome
these drawbacks, this work first curates a new database named FSD-FS for
multi-label few-shot audio classification. It then explores how to incorporate
audio taxonomy in few-shot learning. Specifically, this work proposes
label-dependent prototypical networks (LaD-protonet) to exploit parent-children
relationships between labels. In addition, it applies taxonomy-aware label smoothing
techniques to boost model performance. Experiments demonstrate that
LaD-protonet outperforms original prototypical networks as well as other
state-of-the-art methods. Moreover, its performance can be further boosted when
combined with taxonomy-aware label smoothing.
"
Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations,Xinxi Lyu,http://arxiv.org/pdf/2212.09865v2.pdf,2022-12-19,"['cs.cl', 'cs.ai']",2212.09865v2.pdf,"  Although large language models can be prompted for both zero- and few-shot
learning, performance drops significantly when no demonstrations are available.
In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap
by constructing pseudo-demonstrations for a given test input using a raw text
corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the
nearest neighbors to the test input from the corpus and pairing them with
random task labels, and (2) applying a set of techniques to reduce the amount
of direct copying the model does from the resulting demonstrations. Evaluation
on nine classification datasets shows that Z-ICL outperforms previous zero-shot
methods by a significant margin, and is on par with in-context learning with
labeled training data in the few-shot setting. Overall, Z-ICL provides a
significantly higher estimate of the zero-shot performance levels of a model,
and supports future efforts to develop better pseudo-demonstrations that
further improve zero-shot results.
"
A Survey On Few-shot Knowledge Graph Completion with Structural and  Commonsense Knowledge,Haodi Ma,http://arxiv.org/pdf/2301.01172v1.pdf,2023-01-03,"['cs.cl', 'cs.ai', 'cs.lg']",2301.01172v1.pdf,"  Knowledge graphs (KG) have served as the key component of various natural
language processing applications. Commonsense knowledge graphs (CKG) are a
special type of KG, where entities and relations are composed of free-form
text. However, previous works in KG completion and CKG completion suffer from
long-tail relations and newly-added relations which do not have many known
triples for training. In light of this, few-shot KG completion (FKGC), which
combines the strengths of graph representation learning and few-shot learning,
has been proposed to address the problem of limited annotated data. In this
paper, we comprehensively survey previous attempts on such tasks in the form of
a series of methods and applications. Specifically, we first introduce FKGC
challenges, commonly used KGs, and CKGs. Then we systematically categorize and
summarize existing works in terms of the type of KGs and the methods. Finally,
we present applications of FKGC models on prediction tasks in different areas
and share our thoughts on future research directions of FKGC.
"
Distillation of encoder-decoder transformers for sequence labelling,Marco Farina,http://arxiv.org/pdf/2302.05454v1.pdf,2023-02-10,"['cs.cl', 'cs.ir']",2302.05454v1.pdf,"  Driven by encouraging results on a wide range of tasks, the field of NLP is
experiencing an accelerated race to develop bigger language models. This race
for bigger models has also underscored the need to continue the pursuit of
practical distillation approaches that can leverage the knowledge acquired by
these big models in a compute-efficient manner. Having this goal in mind, we
build on recent work to propose a hallucination-free framework for sequence
tagging that is especially suited for distillation. We show empirical results
of new state-of-the-art performance across multiple sequence labelling datasets
and validate the usefulness of this framework for distilling a large model in a
few-shot learning scenario.
"
Learning to Initialize: Can Meta Learning Improve Cross-task  Generalization in Prompt Tuning?,Chengwei Qin,http://arxiv.org/pdf/2302.08143v2.pdf,2023-02-16,"['cs.cl', 'cs.ai']",2302.08143v2.pdf,"  Prompt tuning (PT) which only tunes the embeddings of an additional sequence
of tokens per task, keeping the pre-trained language model (PLM) frozen, has
shown remarkable performance in few-shot learning. Despite this, PT has been
shown to rely heavily on good initialization of the prompt embeddings. In this
work, we study meta prompt tuning (MPT) to systematically explore how
meta-learning can help improve (if it can) cross-task generalization in PT
through learning to initialize the prompt embeddings from other relevant tasks.
We empirically analyze a representative set of meta learning algorithms in a
wide range of adaptation settings with different source/target task
configurations on a large set of few-shot tasks. With extensive experiments and
analysis, we demonstrate the effectiveness of MPT. We find the improvement to
be significant particularly on classification tasks. For other kinds of tasks
such as question answering, we observe that while MPT can outperform PT in most
cases, it does not always outperform multi-task learning. We further provide an
in-depth analysis from the perspective of task similarity.
"
Scalable Prompt Generation for Semi-supervised Learning with Language  Models,Yuhang Zhou,http://arxiv.org/pdf/2302.09236v1.pdf,2023-02-18,"['cs.cl', 'cs.ai']",2302.09236v1.pdf,"  Prompt-based learning methods in semi-supervised learning (SSL) settings have
been shown to be effective on multiple natural language understanding (NLU)
datasets and tasks in the literature. However, manually designing multiple
prompts and verbalizers requires domain knowledge and human effort, making it
difficult and expensive to scale across different datasets. In this paper, we
propose two methods to automatically design multiple prompts and integrate an
automatic verbalizer into SSL settings without sacrificing performance. The first
method uses various demonstration examples with learnable continuous prompt
tokens to create diverse prompt models. The second method uses a varying number
of soft prompt tokens to encourage language models to learn different prompts.
For the verbalizer, we use the prototypical verbalizer to replace the manual
one. In summary, we obtained the best average accuracy of 73.2% (a relative
improvement of 2.52% even over the previous state-of-the-art SSL method with
manual prompts and verbalizers) across different few-shot learning settings.
"
Language Models are Few-shot Learners for Prognostic Prediction,Zekai Chen,http://arxiv.org/pdf/2302.12692v4.pdf,2023-02-24,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",2302.12692v4.pdf,"  Clinical prediction is an essential task in the healthcare industry. However,
the recent success of transformers, on which large language models are built,
has not been extended to this domain. In this research, we explore the use of
transformers and language models in prognostic prediction for immunotherapy
using real-world patients' clinical data and molecular profiles. This paper
investigates the potential of transformers to improve clinical prediction
compared to conventional machine learning approaches and addresses the
challenge of few-shot learning in predicting rare disease areas. The study
benchmarks the efficacy of baselines and language models on prognostic
prediction across multiple cancer types and investigates the impact of
different pretrained language models under few-shot regimes. The results
demonstrate significant improvements in accuracy and highlight the potential of
NLP in clinical research to improve early detection and intervention for
different diseases.
"
Pre-Finetuning for Few-Shot Emotional Speech Recognition,Maximillian Chen,http://arxiv.org/pdf/2302.12921v2.pdf,2023-02-24,"['cs.cl', 'cs.lg', 'cs.sd', 'eess.as']",2302.12921v2.pdf,"  Speech models have long been known to overfit individual speakers for many
classification tasks. This leads to poor generalization in settings where the
speakers are out-of-domain or out-of-distribution, as is common in production
environments. We view speaker adaptation as a few-shot learning problem and
propose investigating transfer learning approaches inspired by recent success
with pre-trained models in natural language tasks. We propose pre-finetuning
speech models on difficult tasks to distill knowledge into few-shot downstream
classification objectives. We pre-finetune Wav2Vec2.0 on every permutation of
four multiclass emotional speech recognition corpora and evaluate our
pre-finetuned models through 33,600 few-shot fine-tuning trials on the
Emotional Speech Dataset.
"
Mixture of Soft Prompts for Controllable Data Generation,Derek Chen,http://arxiv.org/pdf/2303.01580v2.pdf,2023-03-02,['cs.cl'],2303.01580v2.pdf,"  Large language models (LLMs) effectively generate fluent text when the target
output follows natural language patterns. However, structured prediction tasks
confine the output format to a limited ontology, causing even very large models
to struggle since they were never trained with such restrictions in mind. The
difficulty of using LLMs for direct prediction is exacerbated in few-shot
learning scenarios, which commonly arise due to domain shift and resource
limitations. We flip the problem on its head by leveraging the LLM as a tool
for data augmentation rather than direct prediction. Our proposed Mixture of
Soft Prompts (MSP) serves as a parameter-efficient procedure for generating
data in a controlled manner. Denoising mechanisms are further applied to
improve the quality of synthesized data. Automatic metrics show our method is
capable of producing diverse and natural text, while preserving label
semantics. Moreover, MSP achieves state-of-the-art results on three benchmarks
when compared against strong baselines. Our method offers an alternate
data-centric approach for applying LLMs to complex prediction tasks.
"
Prismer: A Vision-Language Model with An Ensemble of Experts,Shikun Liu,http://arxiv.org/pdf/2303.02506v2.pdf,2023-03-04,"['cs.lg', 'cs.ai', 'cs.cv']",2303.02506v2.pdf,"  Recent vision-language models have shown impressive multi-modal generation
capabilities. However, typically they require training huge models on massive
datasets. As a more scalable alternative, we introduce Prismer, a data- and
parameter-efficient vision-language model that leverages an ensemble of domain
experts. Prismer only requires training of a small number of components, with
the majority of network weights inherited from readily-available, pre-trained
domain experts, and kept frozen during training. By leveraging experts from a
wide range of domains, we show that Prismer can efficiently pool this expert
knowledge and adapt it to various vision-language reasoning tasks. In our
experiments, we show that Prismer achieves fine-tuned and few-shot learning
performance which is competitive with current state-of-the-art models, whilst
requiring up to two orders of magnitude less training data. Code is available
at https://github.com/NVlabs/prismer.
"
Enhancing Activity Prediction Models in Drug Discovery with the Ability  to Understand Human Language,Philipp Seidl,http://arxiv.org/pdf/2303.03363v2.pdf,2023-03-06,"['q-bio.bm', 'cs.cl', 'cs.lg', 'stat.ml']",2303.03363v2.pdf,"  Activity and property prediction models are the central workhorses in drug
discovery and materials sciences, but currently they have to be trained or
fine-tuned for new tasks. Without training or fine-tuning, scientific language
models could be used for such low-data tasks through their announced zero- and
few-shot capabilities. However, their predictive quality at activity prediction
is lacking. In this work, we envision a novel type of activity prediction model
that is able to adapt to new prediction tasks at inference time, via
understanding textual information describing the task. To this end, we propose
a new architecture with separate modules for chemical and natural language
inputs, and a contrastive pre-training objective on data from large biochemical
databases. In extensive experiments, we show that our method CLAMP yields
improved predictive performance on few-shot learning benchmarks and zero-shot
problems in drug discovery. We attribute the advances of our method to the
modularized architecture and to our pre-training objective.
"
MenuCraft: Interactive Menu System Design with Large Language Models,Amir Hossein Kargaran,http://arxiv.org/pdf/2303.04496v2.pdf,2023-03-08,"['cs.cl', 'cs.ai', 'cs.hc']",2303.04496v2.pdf,"  Menu system design is a challenging task involving many design options and
various human factors. For example, one crucial factor that designers need to
consider is the semantic and systematic relation of menu commands. However,
capturing these relations can be challenging due to limited available
resources. With the advancement of neural language models, large language
models can utilize their vast pre-existing knowledge in designing and refining
menu systems. In this paper, we propose MenuCraft, an AI-assisted designer for
menu design that enables collaboration between the designer and a dialogue
system to design menus. MenuCraft offers an interactive language-based menu
design tool that simplifies the menu design process and enables easy
customization of design options. MenuCraft supports a variety of interactions
through dialog that allows performing zero/few-shot learning.
"
Consistency Analysis of ChatGPT,Myeongjun Erik Jang,http://arxiv.org/pdf/2303.06273v2.pdf,2023-03-11,"['cs.cl', 'cs.ai']",2303.06273v2.pdf,"  ChatGPT has gained a huge popularity since its introduction. Its positive
aspects have been reported through many media platforms, and some analyses even
showed that ChatGPT achieved a decent grade in professional exams, adding extra
support to the claim that AI can now assist and even replace humans in
industrial fields. Others, however, doubt its reliability and trustworthiness.
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding
logically consistent behaviour, focusing specifically on semantic consistency
and the properties of negation, symmetric, and transitive consistency. Our
findings suggest that while both models appear to show an enhanced language
understanding and reasoning ability, they still frequently fall short of
generating logically consistent predictions. We also ascertain via experiments
that prompt design, few-shot learning, and employing larger large language
models (LLMs) are unlikely to be the ultimate solution to resolve the
inconsistency issue of LLMs.
"
Learning Expressive Prompting With Residuals for Vision Transformers,Rajshekhar Das,http://arxiv.org/pdf/2303.15591v1.pdf,2023-03-27,['cs.cv'],2303.15591v1.pdf,"  Prompt learning is an efficient approach to adapt transformers by inserting
a learnable set of parameters into the input and intermediate representations of
a pre-trained model. In this work, we present Expressive Prompts with Residuals
(EXPRES) which modifies the prompt learning paradigm specifically for effective
adaptation of vision transformers (ViT). Our method constructs downstream
representations via learnable ``output'' tokens that are akin to the learned
class tokens of the ViT. Further, for better steering of the downstream
representation processed by the frozen transformer, we introduce residual
learnable tokens that are added to the output of various computations. We apply
EXPRES for image classification, few-shot learning, and semantic segmentation,
and show our method is capable of achieving state of the art prompt tuning on
3/3 categories of the VTAB benchmark. In addition to strong performance, we
observe that our approach is an order of magnitude more prompt efficient than
existing visual prompting baselines. We analytically show the computational
benefits of our approach over weight space adaptation techniques like
finetuning. Lastly we systematically corroborate the architectural design of
our method via a series of ablation experiments.
"
Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior  Refinement,Xiangyang Zhu,http://arxiv.org/pdf/2304.01195v1.pdf,2023-04-03,"['cs.cv', 'cs.ai', 'cs.mm']",2304.01195v1.pdf,"  The popularity of Contrastive Language-Image Pre-training (CLIP) has
propelled its application to diverse downstream vision tasks. To improve its
capacity on downstream tasks, few-shot learning has become a widely-adopted
technique. However, existing methods either exhibit limited performance or
suffer from excessive learnable parameters. In this paper, we propose APE, an
Adaptive Prior rEfinement method for CLIP's pre-trained knowledge, which
achieves superior accuracy with high computational efficiency. Via a prior
refinement module, we analyze the inter-class disparity in the downstream data
and decouple the domain-specific knowledge from the CLIP-extracted cache model.
On top of that, we introduce two model variants, a training-free APE and a
training-required APE-T. We explore the trilateral affinities between the test
image, prior cache model, and textual representations, and only enable a
lightweight category-residual module to be trained. For the average accuracy
over 11 benchmarks, both APE and APE-T attain state-of-the-art and respectively
outperform the second-best by +1.59% and +1.99% under 16 shots with 30x fewer
learnable parameters.
"
Sociocultural knowledge is needed for selection of shots in hate speech  detection tasks,Antonis Maronikolakis,http://arxiv.org/pdf/2304.01890v4.pdf,2023-04-04,"['cs.cl', 'cs.ai', 'cs.lg']",2304.01890v4.pdf,"  We introduce HATELEXICON, a lexicon of slurs and targets of hate speech for
the countries of Brazil, Germany, India and Kenya, to aid training and
interpretability of models. We demonstrate how our lexicon can be used to
interpret model predictions, showing that models developed to classify extreme
speech rely heavily on target words when making predictions. Further, we
propose a method to aid shot selection for training in low-resource settings
via HATELEXICON. In few-shot learning, the selection of shots is of paramount
importance to model performance. In our work, we simulate a few-shot setting
for German and Hindi, using HASOC data for training and the Multilingual
HateCheck (MHC) as a benchmark. We show that selecting shots based on our
lexicon leads to models performing better on MHC than models trained on shots
sampled randomly. Thus, when given only a few training examples, using our
lexicon to select shots containing more sociocultural information leads to
better few-shot performance.
"
Revisiting Automated Prompting: Are We Actually Doing Better?,Yulin Zhou,http://arxiv.org/pdf/2304.03609v2.pdf,2023-04-07,"['cs.cl', 'cs.lg']",2304.03609v2.pdf,"  Current literature demonstrates that Large Language Models (LLMs) are great
few-shot learners, and prompting significantly increases their performance on a
range of downstream tasks in a few-shot learning setting. An attempt to
automate human-led prompting followed, with some progress achieved. In
particular, subsequent work demonstrates automation can outperform fine-tuning
in certain K-shot learning scenarios.
  In this paper, we revisit techniques for automated prompting on six different
downstream tasks and a larger range of K-shot learning settings. We find that
automated prompting does not consistently outperform simple manual prompts. Our
work suggests that, in addition to fine-tuning, manual prompts should be used
as a baseline in this line of research.
"
MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning,Bohan Li,http://arxiv.org/pdf/2304.09402v1.pdf,2023-04-19,"['cs.cl', 'cs.lg']",2304.09402v1.pdf,"  Prompt-based learning reformulates downstream tasks as cloze problems by
combining the original input with a template. This technique is particularly
useful in few-shot learning, where a model is trained on a limited amount of
data. However, the limited templates and text used in few-shot prompt-based
learning still leave significant room for performance improvement.
Additionally, existing methods using model ensembles can constrain the model
efficiency. To address these issues, we propose an augmentation method called
MixPro, which augments both the vanilla input text and the templates through
token-level, sentence-level, and epoch-level Mixup strategies. We conduct
experiments on five few-shot datasets, and the results show that MixPro
outperforms other augmentation baselines, improving model performance by an
average of 5.08% compared to before augmentation.
"
Information Extraction from Documents: Question Answering vs Token  Classification in real-world setups,Laurent Lam,http://arxiv.org/pdf/2304.10994v1.pdf,2023-04-21,['cs.cl'],2304.10994v1.pdf,"  Research in Document Intelligence and especially in Document Key Information
Extraction (DocKIE) has been mainly solved as Token Classification problem.
Recent breakthroughs in both natural language processing (NLP) and computer
vision helped build document-focused pre-training methods, leveraging a
multimodal understanding of the document text, layout and image modalities.
However, these breakthroughs also led to the emergence of a new DocKIE subtask
of extractive document Question Answering (DocQA), as part of the Machine
Reading Comprehension (MRC) research field. In this work, we compare the
Question Answering approach with the classical token classification approach
for document key information extraction. We designed experiments to benchmark
five different experimental setups: raw performance, robustness to noisy
environments, capacity to extract long entities, fine-tuning speed on Few-Shot
Learning and finally Zero-Shot Learning. Our research showed that when dealing
with clean and relatively short entities, it is still best to use a token
classification-based approach, while the QA approach could be a good
alternative for noisy environments or long-entity use cases.
"
Discern and Answer: Mitigating the Impact of Misinformation in  Retrieval-Augmented Models with Discriminators,Giwon Hong,http://arxiv.org/pdf/2305.01579v1.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.01579v1.pdf,"  Most existing retrieval-augmented language models (LMs) for question
answering assume all retrieved information is factually correct. In this work,
we study a more realistic scenario in which retrieved documents may contain
misinformation, causing conflicts among them. We observe that the existing
models are highly brittle to such information in both fine-tuning and
in-context few-shot learning settings. We propose approaches to make
retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a
discriminator or prompting to elicit discrimination capability in GPT-3. Our
empirical results on open-domain question answering show that these approaches
significantly improve LMs' robustness to knowledge conflicts. We also provide
our findings on interleaving the fine-tuned model's decision with the
in-context learning process, paving a new path to leverage the best of both
worlds.
"
Causal Interventions-based Few-Shot Named Entity Recognition,Zhen Yang,http://arxiv.org/pdf/2305.01914v1.pdf,2023-05-03,['cs.cl'],2305.01914v1.pdf,"  Few-shot named entity recognition (NER) systems aim at recognizing new
classes of entities based on a few labeled samples. A significant challenge in
the few-shot regime is that models are more prone to overfitting than in tasks
with abundant samples. The heavy overfitting in few-shot learning is mainly
caused by spurious correlations arising from sample selection bias. To alleviate
the problem of spurious correlations in few-shot NER, in this paper we propose a
causal intervention-based few-shot NER method. Based on the prototypical
network, the method intervenes in the context and prototype via backdoor
adjustment during training. In particular, intervening in the context of the
one-shot scenario is very difficult, so we intervene in the prototype via
incremental learning, which can also avoid catastrophic forgetting. Our
experiments on different benchmarks show that our approach achieves new
state-of-the-art results (achieving up to 29% absolute improvement and 12% on
average for all tasks).
"
Plug-and-Play Multilingual Few-shot Spoken Words Recognition,Aaqib Saeed,http://arxiv.org/pdf/2305.03058v1.pdf,2023-05-03,"['eess.as', 'cs.lg', 'cs.sd']",2305.03058v1.pdf,"  As technology advances and digital devices become prevalent, seamless
human-machine communication is increasingly gaining significance. The growing
adoption of mobile, wearable, and other Internet of Things (IoT) devices has
changed how we interact with these smart devices, making accurate spoken word
recognition a crucial component for effective interaction. However, building a
robust spoken word detection system that can handle novel keywords remains
challenging, especially for low-resource languages with limited training data.
Here, we propose PLiX, a multilingual and plug-and-play keyword spotting system
that leverages few-shot learning to harness massive real-world data and enable
the recognition of unseen spoken words at test-time. Our few-shot deep models
are learned with millions of one-second audio clips across 20 languages,
achieving state-of-the-art performance while being highly efficient. Extensive
evaluations show that PLiX can generalize to novel spoken words given as few as
just one support example and performs well on unseen languages out of the box.
We release models and inference code to serve as a foundation for future
research and voice-enabled user interface development for emerging devices.
"
Data Curation for Image Captioning with Text-to-Image Generative Models,Wenyan Li,http://arxiv.org/pdf/2305.03610v1.pdf,2023-05-05,"['cs.cv', 'cs.ai', 'cs.cl']",2305.03610v1.pdf,"  Recent advances in image captioning are mainly driven by large-scale
vision-language pretraining, relying heavily on computational resources and
increasingly large multimodal datasets. Instead of scaling up pretraining data,
we ask whether it is possible to improve performance by improving the quality
of the samples in existing datasets. We pursue this question through two
approaches to data curation: one that assumes that some examples should be
avoided due to mismatches between the image and caption, and one that assumes
that the mismatch can be addressed by replacing the image, for which we use the
state-of-the-art Stable Diffusion model. These approaches are evaluated using
the BLIP model on MS COCO and Flickr30K in both finetuning and few-shot
learning settings. Our simple yet effective approaches consistently outperform
baselines, indicating that better image captioning models can be trained by
curating existing resources. Finally, we conduct a human study to understand
the errors made by the Stable Diffusion model and highlight directions for
future work in text-to-image generation.
"
Make Prompt-based Black-Box Tuning Colorful: Boosting Model  Generalization from Three Orthogonal Perspectives,Qiushi Sun,http://arxiv.org/pdf/2305.08088v1.pdf,2023-05-14,"['cs.cl', 'cs.ai']",2305.08088v1.pdf,"  Large language models (LLMs) have shown increasing power on various natural
language processing (NLP) tasks. However, tuning these models for downstream
tasks usually needs exorbitant costs or is unavailable due to commercial
considerations. Recently, black-box tuning has been proposed to address this
problem by optimizing task-specific prompts without accessing the gradients and
hidden representations. However, most existing works have yet to fully exploit
the potential of gradient-free optimization under the scenario of few-shot
learning. In this paper, we describe BBT-RGB, a suite of straightforward and
complementary techniques for enhancing the efficiency and performance of
black-box optimization. Specifically, our method includes three plug-and-play
components: (1) Two-stage derivative-free optimization strategy that
facilitates fast convergence and mitigates overfitting; (2) Automatic
verbalizer construction with its novel usage under few-shot settings; (3)
Better prompt initialization policy based on instruction search and
auto-selected demonstration. Extensive experiments across various tasks on
natural language understanding and inference demonstrate the effectiveness of
our method. Our codes are publicly available at
https://github.com/QiushiSun/BBT-RGB.
"
CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation  Detection in Online Communities,Zihao He,http://arxiv.org/pdf/2305.09846v2.pdf,2023-05-16,"['cs.cl', 'cs.si']",2305.09846v2.pdf,"  Detecting norm violations in online communities is critical to maintaining
healthy and safe spaces for online discussions. Existing machine learning
approaches often struggle to adapt to the diverse rules and interpretations
across different communities due to the inherent challenges of fine-tuning
models for such context-specific tasks. In this paper, we introduce
Context-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), a
novel method that employs prompt-based learning to detect norm violations
across various types of rules. CPL-NoViD outperforms the baseline by
incorporating context through natural language prompts and demonstrates
improved performance across different rule types. Significantly, it not only
excels in cross-rule-type and cross-community norm violation detection but also
exhibits adaptability in few-shot learning scenarios. Most notably, it
establishes a new state-of-the-art in norm violation detection, surpassing
existing benchmarks. Our work highlights the potential of prompt-based learning
for context-sensitive norm violation detection and paves the way for future
research on more adaptable, context-aware models to better support online
community moderators.
"
A Weak Supervision Approach for Few-Shot Aspect Based Sentiment,Robert Vacareanu,http://arxiv.org/pdf/2305.11979v1.pdf,2023-05-19,['cs.cl'],2305.11979v1.pdf,"  We explore how weak supervision on abundant unlabeled data can be leveraged
to improve few-shot performance in aspect-based sentiment analysis (ABSA)
tasks. We propose a pipeline approach to construct a noisy ABSA dataset, and we
use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks. We
test the resulting model on three widely used ABSA datasets, before and after
fine-tuning. Our proposed method preserves the full fine-tuning performance
while showing significant improvements (15.84% absolute F1) in the few-shot
learning scenario for the harder tasks. In zero-shot (i.e., without
fine-tuning), our method outperforms the previous state of the art on the
aspect extraction sentiment classification (AESC) task and is, additionally,
capable of performing the harder aspect sentiment triplet extraction (ASTE)
task.
"
Efficient Open Domain Multi-Hop Question Answering with Few-Shot Data  Synthesis,Mingda Chen,http://arxiv.org/pdf/2305.13691v1.pdf,2023-05-23,['cs.cl'],2305.13691v1.pdf,"  Few-shot learning for open domain multi-hop question answering typically
relies on large language models (LLMs). While powerful, LLMs are inefficient at
inference time. We propose a data synthesis framework for multi-hop
question answering that allows for improving smaller language models with less
than 10 human-annotated question answer pairs. The framework is built upon the
data generation functions parameterized by LLMs and prompts, which requires
minimal hand-crafted features. Empirically, we synthesize millions of multi-hop
questions and claims. After finetuning language models on the synthetic data,
we evaluate the models on popular benchmarks on multi-hop question answering
and fact verification. Our experimental results show that finetuning on the
synthetic data improves model performance significantly, allowing our finetuned
models to be competitive with prior models while being almost one-third the
size in terms of parameter counts.
"
Images in Language Space: Exploring the Suitability of Large Language  Models for Vision & Language Tasks,Sherzod Hakimov,http://arxiv.org/pdf/2305.13782v1.pdf,2023-05-23,['cs.cl'],2305.13782v1.pdf,"  Large language models have demonstrated robust performance on various
language tasks using zero-shot or few-shot learning paradigms. While being
actively researched, multimodal models that can additionally handle images as
input have yet to catch up in size and generality with language-only models. In
this work, we ask whether language-only models can be utilised for tasks that
require visual input -- but also, as we argue, often require a strong reasoning
component. Similar to some recent related work, we make visual information
accessible to the language model using separate verbalisation models.
Specifically, we investigate the performance of open-source, open-access
language models against GPT-3 on five vision-language tasks when given
textually-encoded visual information. Our results suggest that language models
are effective for solving vision-language tasks even with limited samples. This
approach also enhances the interpretability of a model's output by providing a
means of tracing the output back through the verbalised image content.
"
Improving Factuality and Reasoning in Language Models through Multiagent  Debate,Yilun Du,http://arxiv.org/pdf/2305.14325v1.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2305.14325v1.pdf,"  Large language models (LLMs) have demonstrated remarkable capabilities in
language generation, understanding, and few-shot learning in recent years. An
extensive body of work has explored how their performance may be further
improved through the tools of prompting, ranging from verification,
self-consistency, or intermediate scratchpads. In this paper, we present a
complementary approach to improve language responses where multiple language
model instances propose and debate their individual responses and reasoning
processes over multiple rounds to arrive at a common final answer. Our findings
indicate that this approach significantly enhances mathematical and strategic
reasoning across a number of tasks. We also demonstrate that our approach
improves the factual validity of generated content, reducing fallacious answers
and hallucinations that contemporary models are prone to. Our approach may be
directly applied to existing black-box models and uses identical procedure and
prompts for all tasks we investigate. Overall, our findings suggest that such
""society of minds"" approach has the potential to significantly advance the
capabilities of LLMs and pave the way for further breakthroughs in language
generation and understanding.
"
Are Large Language Models Robust Zero-shot Coreference Resolvers?,Nghia T. Le,http://arxiv.org/pdf/2305.14489v1.pdf,2023-05-23,['cs.cl'],2305.14489v1.pdf,"  Recent progress in domain adaptation for coreference resolution relies on
continued training using annotated data from target domains. At the same time,
pre-trained large language models (LMs) have exhibited strong zero- and
few-shot learning abilities across a wide range of NLP tasks including pronoun
resolution. While this demonstrates evidence of coreference ability, previous
work has mostly studied this ability using simple sentence-level datasets such
as the Winograd Schema Challenge. In this work, we assess the feasibility of
zero-shot learning for coreference resolution by evaluating instruction-tuned
language models on more difficult, linguistically-complex coreference
benchmarks (e.g., CoNLL-2012). We demonstrate that zero-shot prompting
outperforms current unsupervised coreference systems. Further investigations
reveal the robust zero-shot generalization ability of instruction-tuned LMs
across a wide range of domains, languages, and time periods, as well as a
strong reliance on high-quality mention detection systems.
"
Training on Thin Air: Improve Image Classification with Generated Data,Yongchao Zhou,http://arxiv.org/pdf/2305.15316v1.pdf,2023-05-24,"['cs.cv', 'cs.lg']",2305.15316v1.pdf,"  Acquiring high-quality data for training discriminative models is a crucial
yet challenging aspect of building effective predictive systems. In this paper,
we present Diffusion Inversion, a simple yet effective method that leverages
the pre-trained generative model, Stable Diffusion, to generate diverse,
high-quality training data for image classification. Our approach captures the
original data distribution and ensures data coverage by inverting images to the
latent space of Stable Diffusion, and generates diverse novel training images
by conditioning the generative model on noisy versions of these vectors. We
identify three key components that allow our generated images to successfully
supplant the original dataset, leading to a 2-3x enhancement in sample
complexity and a 6.5x decrease in sampling time. Moreover, our approach
consistently outperforms generic prompt-based steering methods and KNN
retrieval baseline across a wide range of datasets. Additionally, we
demonstrate the compatibility of our approach with widely-used data
augmentation techniques, as well as the reliability of the generated data in
supporting various neural architectures and enhancing few-shot learning.
"
ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR  Back-Translation,Kuan-Hao Huang,http://arxiv.org/pdf/2305.16585v1.pdf,2023-05-26,['cs.cl'],2305.16585v1.pdf,"  Paraphrase generation is a long-standing task in natural language processing
(NLP). Supervised paraphrase generation models, which rely on human-annotated
paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand,
automatically annotated paraphrase pairs (e.g., by machine back-translation),
usually suffer from the lack of syntactic diversity -- the generated paraphrase
sentences are very similar to the source sentences in terms of syntax. In this
work, we present ParaAMR, a large-scale syntactically diverse paraphrase
dataset created by abstract meaning representation back-translation. Our
quantitative analysis, qualitative examples, and human evaluation demonstrate
that the paraphrases of ParaAMR are syntactically more diverse compared to
existing large-scale paraphrase datasets while preserving good semantic
similarity. In addition, we show that ParaAMR can be used to improve on three
NLP tasks: learning sentence embeddings, syntactically controlled paraphrase
generation, and data augmentation for few-shot learning. Our results thus
showcase the potential of ParaAMR for improving various NLP applications.
"
Adapting Language-Audio Models as Few-Shot Audio Learners,Jinhua Liang,http://arxiv.org/pdf/2305.17719v1.pdf,2023-05-28,"['eess.as', 'cs.sd']",2305.17719v1.pdf,"  We presented the Treff adapter, a training-efficient adapter for CLAP, to
boost zero-shot classification performance by making use of a small set of
labelled data. Specifically, we designed CALM to retrieve the probability
distribution of text-audio clips over classes using a set of audio-label pairs
and combined it with CLAP's zero-shot classification results. Furthermore, we
designed a training-free version of the Treff adapter by using CALM as a cosine
similarity measure. Experiments showed that the proposed Treff adapter is
comparable to, and even better than, fully-supervised methods and adaptation methods
in low-shot and data-abundant scenarios. While the Treff adapter shows that
combining large-scale pretraining and rapid learning of domain-specific
knowledge is non-trivial for obtaining generic representations for few-shot
learning, it is still limited to audio classification tasks. In the future, we
will explore how to use audio-language models in diverse audio domains.
"
Transfer Learning for Power Outage Detection Task with Limited Training  Data,Olukunle Owolabi,http://arxiv.org/pdf/2305.17817v1.pdf,2023-05-28,"['cs.cl', 'stat.ap']",2305.17817v1.pdf,"  Early detection of power outages is crucial for maintaining a reliable power
distribution system. This research investigates the use of transfer learning
and language models in detecting outages with limited labeled data. By
leveraging pretraining and transfer learning, models can generalize to unseen
classes.
  Using a curated balanced dataset of social media tweets related to power
outages, we conducted experiments using zero-shot and few-shot learning. Our
hypothesis is that Language Models pretrained with limited data could achieve
high performance in outage detection tasks over baseline models. Results show
that while classical models outperform zero-shot Language Models, few-shot
fine-tuning significantly improves their performance. For example, with 10%
fine-tuning, BERT achieves 81.3% accuracy (+15.3%), and GPT achieves 74.5%
accuracy (+8.5%). This has practical implications for analyzing and localizing
outages in scenarios with limited data availability.
  Our evaluation provides insights into the potential of few-shot fine-tuning
with Language Models for power outage detection, highlighting their strengths
and limitations. This research contributes to the knowledge base of leveraging
advanced natural language processing techniques for managing critical
infrastructure.
"
Deeply Coupled Cross-Modal Prompt Learning,Xuejing Liu,http://arxiv.org/pdf/2305.17903v2.pdf,2023-05-29,['cs.cv'],2305.17903v2.pdf,"  Recent advancements in multimodal foundation models (e.g., CLIP) have
excelled in zero-shot generalization. Prompt tuning involved in the knowledge
transfer from foundation models to downstream tasks has gained significant
attention recently. Existing prompt-tuning methods in cross-modal learning,
however, either solely focus on the language branch or learn vision-language
interaction in a shallow mechanism. In this context, we propose a Deeply
coupled Cross-modal Prompt learning (DCP) method based on CLIP. DCP flexibly
accommodates the interplay between vision and language with a Cross-Modal
Prompt Attention (CMPA) mechanism, which enables the mutual exchange of
respective representation through a well-connected multi-head attention module
progressively and strongly. We then conduct comprehensive few-shot learning
experiments on 11 image classification datasets and analyze the robustness to
domain shift as well. Thorough experimental analysis evidently demonstrates the
superb few-shot generalization and compelling domain adaptation capacity of a
well-executed DCP. The code can be found at https://github.com/GingL/CMPA.
"
"What does the Failure to Reason with ""Respectively"" in Zero/Few-Shot  Settings Tell Us about Language Models?",Ruixiang Cui,http://arxiv.org/pdf/2305.19597v1.pdf,2023-05-31,"['cs.cl', 'cs.ai']",2305.19597v1.pdf,"  Humans can effortlessly understand the coordinate structure of sentences such
as ""Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle,
respectively"". In the context of natural language inference (NLI), we examine
how language models (LMs) reason with respective readings (Gawron and Kehler,
2004) from two perspectives: syntactic-semantic and commonsense-world
knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally
occurring dataset NatResNLI to encompass various explicit and implicit
realizations of ""respectively"". We show that fine-tuned NLI models struggle
with understanding such readings without explicit supervision. While few-shot
learning is easy in the presence of explicit cues, longer training is required
when the reading is evoked implicitly, leaving models to rely on common sense
inferences. Furthermore, our fine-grained analysis indicates models fail to
generalize across different constructions. To conclude, we demonstrate that LMs
still lag behind humans in generalizing to the long tail of linguistic
constructions.
"
Measuring the Robustness of Natural Language Processing Models to Domain  Shifts,Nitay Calderon,http://arxiv.org/pdf/2306.00168v2.pdf,2023-05-31,['cs.cl'],2306.00168v2.pdf,"  Existing research on Domain Robustness (DR) suffers from disparate setups,
lack of evaluation task variety, and reliance on challenge sets. In this paper,
we pose a fundamental question: What is the state of affairs of the DR
challenge in the era of Large Language Models (LLMs)? To this end, we construct
a DR benchmark comprising diverse NLP tasks, including sentence and token-level
classification, QA, and generation, with each task consisting of several domains. We
explore the DR challenge of fine-tuned and few-shot learning models in natural
domain shift settings and devise two diagnostic metrics of Out-of-Distribution
(OOD) performance degradation: The commonly used Source Drop (SD) and the
overlooked Target Drop (TD). Our findings reveal important insights: First,
despite their capabilities, zero-to-few shot LLMs and fine-tuning approaches
still fail to meet satisfactory performance in the OOD context; Second, TD
approximates better than SD the average OOD degradation; Third, in a
significant proportion of domain shifts, either SD or TD is positive, but not
both, and therefore disregarding one can lead to incorrect DR conclusions.
"
Human-like Few-Shot Learning via Bayesian Reasoning over Natural  Language,Kevin Ellis,http://arxiv.org/pdf/2306.02797v3.pdf,2023-06-05,"['cs.cl', 'cs.ai', 'cs.lg']",2306.02797v3.pdf,"  A core tension in models of concept learning is that the model must carefully
balance the tractability of inference against the expressivity of the
hypothesis class. Humans, however, can efficiently learn a broad range of
concepts. We introduce a model of inductive learning that seeks to be
human-like in that sense. It implements a Bayesian reasoning process where a
language model first proposes candidate hypotheses expressed in natural
language, which are then re-weighed by a prior and a likelihood. By estimating
the prior from human data, we can predict human judgments on learning problems
involving numbers and sets, spanning concepts that are generative,
discriminative, propositional, and higher-order.
"
Few Shot Rationale Generation using Self-Training with Dual Teachers,Aditya Srikanth Veerubhotla,http://arxiv.org/pdf/2306.03315v1.pdf,2023-06-05,"['cs.cl', 'cs.ai']",2306.03315v1.pdf,"  Self-rationalizing models that also generate a free-text explanation for
their predicted labels are an important tool to build trustworthy AI
applications. Since generating explanations for annotated labels is a laborious
and costly process, recent models rely on large pretrained language models
(PLMs) as their backbone and few-shot learning. In this work we explore a
self-training approach leveraging both labeled and unlabeled data to further
improve few-shot models, under the assumption that neither human written
rationales nor annotated task labels are available at scale. We introduce a
novel dual-teacher learning framework, which learns two specialized teacher
models for task prediction and rationalization using self-training and distills
their knowledge into a multi-tasking student model that can jointly generate
the task label and rationale. Furthermore, we formulate a new loss function,
Masked Label Regularization (MLR) which promotes explanations to be strongly
conditioned on predicted labels. Evaluation on three public datasets
demonstrates that the proposed methods are effective in modeling task labels and
generating faithful rationales.
"
A New Dataset and Empirical Study for Sentence Simplification in Chinese,Shiping Yang,http://arxiv.org/pdf/2306.04188v1.pdf,2023-06-07,['cs.cl'],2306.04188v1.pdf,"  Sentence Simplification is a valuable technique that can benefit language
learners and children. However, current research focuses mainly on English
sentence simplification. The development of Chinese sentence simplification is
relatively slow due to the lack of data. To alleviate this limitation, this
paper introduces CSS, a new dataset for assessing sentence simplification in
Chinese. We collect manual simplifications from human annotators and perform
data analysis to show the difference between English and Chinese sentence
simplifications. Furthermore, we test several unsupervised and zero/few-shot
learning methods on CSS and analyze the automatic evaluation and human
evaluation results. In the end, we explore whether Large Language Models can
serve as high-quality Chinese sentence simplification systems by evaluating
them on CSS.
"
Can AI Moderate Online Communities?,Henrik Axelsen,http://arxiv.org/pdf/2306.05122v1.pdf,2023-06-08,['cs.cy'],2306.05122v1.pdf,"  The task of cultivating healthy communication in online communities becomes
increasingly urgent, as gaming and social media experiences become
progressively more immersive and life-like. We approach the challenge of
moderating online communities by training student models using a large language
model (LLM). We use zero-shot learning models to distill and expand datasets
followed by a few-shot learning and a fine-tuning approach, leveraging
open-access generative pre-trained transformer models (GPT) from OpenAI. Our
preliminary findings suggest that, when properly trained, LLMs can excel in
identifying actor intentions, moderating toxic comments, and rewarding positive
contributions. The student models perform above expectation in non-contextual
assignments such as identifying classically toxic behavior and perform
sufficiently on contextual assignments such as identifying positive
contributions to online discourse. Further, using open-access models like
OpenAI's GPT we experience a step-change in the development process for what
has historically been a complex modeling task. We contribute to the information
system (IS) discourse with a rapid development framework on the application of
generative AI in content online moderation and management of culture in
decentralized, pseudonymous communities by providing a sample model suite of
industrial-ready generative AI models based on open-access LLMs.
"
The ADAIO System at the BEA-2023 Shared Task on Generating AI Teacher  Responses in Educational Dialogues,Adaeze Adigwe,http://arxiv.org/pdf/2306.05360v1.pdf,2023-06-08,"['cs.cl', 'cs.ai', 'cs.cy']",2306.05360v1.pdf,"  This paper presents the ADAIO team's system entry in the Building Educational
Applications (BEA) 2023 Shared Task on Generating AI Teacher Responses in
Educational Dialogues. The task aims to assess the performance of
state-of-the-art generative models as AI teachers in producing suitable
responses within a student-teacher dialogue. Our system comprises evaluating
various baseline models using OpenAI GPT-3 and designing diverse prompts to
prompt the OpenAI models for teacher response generation. After the challenge,
our system achieved second place by employing a few-shot prompt-based approach
with the OpenAI text-davinci-003 model. The results highlight the few-shot
learning capabilities of large-language models, particularly OpenAI's GPT-3, in
the role of AI teachers.
"
Prompt-based Extraction of Social Determinants of Health Using Few-shot  Learning,Giridhar Kaushik Ramachandran,http://arxiv.org/pdf/2306.07170v1.pdf,2023-06-12,['cs.cl'],2306.07170v1.pdf,"  Social determinants of health (SDOH) documented in the electronic health
record through unstructured text are increasingly being studied to understand
how SDOH impacts patient health outcomes. In this work, we utilize the Social
History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified
social history sections annotated for SDOH, including substance use,
employment, and living status information. We explore the automatic extraction
of SDOH information with SHAC in both standoff and inline annotation formats
using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction
performance with a high-performing supervised approach and perform thorough
error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on
the SHAC test set, similar to the 7th best-performing system among all teams in
the n2c2 challenge with SHAC.
"
Rethink the Effectiveness of Text Data Augmentation: An Empirical  Analysis,Zhengxiang Shi,http://arxiv.org/pdf/2306.07664v1.pdf,2023-06-13,"['cs.cl', 'cs.ai', 'cs.lg']",2306.07664v1.pdf,"  In recent years, language models (LMs) have made remarkable progress in
advancing the field of natural language processing (NLP). However, the impact
of data augmentation (DA) techniques on the fine-tuning (FT) performance of
these LMs has been a topic of ongoing debate. In this study, we evaluate the
effectiveness of three different FT methods in conjunction with
back-translation across an array of 7 diverse NLP tasks, including
classification and regression types, covering single-sentence and sentence-pair
tasks. Contrary to prior assumptions that DA does not contribute to the
enhancement of LMs' FT performance, our findings reveal that continued
pre-training on augmented data can effectively improve the FT performance of
the downstream tasks. In the most favourable case, continued pre-training
improves the performance of FT by more than 10% in the few-shot learning
setting. Our finding highlights the potential of DA as a powerful tool for
bolstering LMs' performance.
"
Neural Fine-Tuning Search for Few-Shot Learning,Panagiotis Eustratiadis,http://arxiv.org/pdf/2306.09295v1.pdf,2023-06-15,"['cs.cv', 'cs.lg']",2306.09295v1.pdf,"  In few-shot recognition, a classifier that has been trained on one set of
classes is required to rapidly adapt and generalize to a disjoint, novel set of
classes. To that end, recent studies have shown the efficacy of fine-tuning
with carefully crafted adaptation architectures. However, this raises the
question: how can one design the optimal adaptation strategy? In this paper,
we study this question through the lens of neural architecture search (NAS).
Given a pre-trained neural network, our algorithm discovers the optimal
arrangement of adapters, which layers to keep frozen and which to fine-tune. We
demonstrate the generality of our NAS method by applying it to both residual
networks and vision transformers and report state-of-the-art performance on
Meta-Dataset and Meta-Album.
"
Multilingual Few-Shot Learning via Language Model Retrieval,Genta Indra Winata,http://arxiv.org/pdf/2306.10964v1.pdf,2023-06-19,['cs.cl'],2306.10964v1.pdf,"  Transformer-based language models have achieved remarkable success in
few-shot in-context learning and have drawn considerable research interest. However,
these models' performance depends heavily on the choice of example prompts
and varies widely with how the samples are chosen. In this
paper, we conduct a comprehensive study of retrieving semantically similar
few-shot samples and using them as the context, as it helps the model decide
the correct label without any gradient update in the multilingual and
cross-lingual settings. We evaluate the proposed method on five natural
language understanding datasets related to intent detection, question
classification, sentiment analysis, and topic classification. The proposed
method consistently outperforms random sampling in monolingual and
cross-lingual tasks in non-English languages.
"
Language models are weak learners,Hariharan Manikandan,http://arxiv.org/pdf/2306.14101v1.pdf,2023-06-25,"['cs.lg', 'cs.ai']",2306.14101v1.pdf,"  A central notion in practical and theoretical machine learning is that of a
$\textit{weak learner}$, classifiers that achieve better-than-random
performance (on any given distribution over data), even by a small margin. Such
weak learners form the practical basis for canonical machine learning methods
such as boosting. In this work, we illustrate that prompt-based large language
models can operate effectively as said weak learners. Specifically, we
illustrate the use of a large language model (LLM) as a weak learner in a
boosting algorithm applied to tabular data. We show that by providing (properly
sampled according to the distribution of interest) text descriptions of tabular
data samples, LLMs can produce a summary of the samples that serves as a
template for classification and achieves the aim of acting as a weak learner on
this task. We incorporate these models into a boosting approach, which in some
settings can leverage the knowledge within the LLM to outperform traditional
tree-based boosting. The model outperforms both few-shot learning and
occasionally even more involved fine-tuning procedures, particularly for tasks
involving small numbers of data points. The results illustrate the potential
for prompt-based LLMs to function not just as few-shot learners themselves, but
as components of larger machine learning pipelines.
"
RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated  Adversarial Perturbations,Yilun Zhao,http://arxiv.org/pdf/2306.14321v1.pdf,2023-06-25,"['cs.cl', 'cs.ai']",2306.14321v1.pdf,"  Despite significant progress having been made in question answering on
tabular data (Table QA), it remains unclear whether, and to what extent, existing
Table QA models are robust to task-specific perturbations, e.g., replacing key
question entities or shuffling table columns. To systematically study the
robustness of Table QA models, we propose a benchmark called RobuT, which
builds upon existing Table QA datasets (WTQ, WikiSQL-Weak, and SQA) and
includes human-annotated adversarial perturbations in terms of table header,
table content, and question. Our results indicate that both state-of-the-art
Table QA models and large language models (e.g., GPT-3) with few-shot learning
falter in these adversarial sets. We propose to address this problem by using
large language models to generate adversarial examples to enhance training,
which significantly improves the robustness of Table QA models. Our data and
code are publicly available at https://github.com/yilunzhao/RobuT.
"
Benchmarking Large Language Model Capabilities for Conditional  Generation,Joshua Maynez,http://arxiv.org/pdf/2306.16793v1.pdf,2023-06-29,['cs.cl'],2306.16793v1.pdf,"  Pre-trained large language models (PLMs) underlie most new developments in
natural language processing. They have shifted the field from
application-specific model pipelines to a single model that is adapted to a
wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside
techniques like few-shot learning, have additionally shifted the output
modality to generation instead of classification or regression. Despite their
ubiquitous use, the generation quality of language models is rarely evaluated
when these models are introduced. Additionally, it is unclear how existing
generation tasks--while they can be used to compare systems at a high
level--relate to the real-world use cases for which people have been adopting
them. In this work, we discuss how to adapt existing application-specific
generation benchmarks to PLMs and provide an in-depth, empirical study of the
limitations and capabilities of PLMs in natural language generation tasks along
dimensions such as scale, architecture, input and output language. Our results
show that PLMs differ in their applicability to different data regimes and
their generalization to multiple languages and inform which PLMs to use for a
given generation task setup. We share best practices to be taken into
consideration when benchmarking generation capabilities during the development
of upcoming PLMs.
"
On Conditional and Compositional Language Model Differentiable Prompting,Jonathan Pilault,http://arxiv.org/pdf/2307.01446v1.pdf,2023-07-04,"['cs.cl', 'cs.lg']",2307.01446v1.pdf,"  Prompts have been shown to be an effective method to adapt a frozen
Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts
can be represented by a human-engineered word sequence or by a learned
continuous embedding. In this work, we investigate conditional and
compositional differentiable prompting. We propose a new model, Prompt
Production System (PRopS), which learns to transform task instructions or input
metadata, into continuous prompts that elicit task-specific outputs from the
PLM. Our model uses a modular network structure based on our neural formulation
of Production Systems, which allows the model to learn discrete rules -- neural
functions that learn to specialize in transforming particular prompt input
patterns, making it suitable for compositional transfer learning and few-shot
learning. We present extensive empirical and theoretical analysis and show that
PRopS consistently surpasses other PLM adaptation techniques, and often
improves upon fully fine-tuned models, on compositional generalization tasks,
controllable summarization and multilingual translation, while needing fewer
trainable parameters.
"
Diverse Retrieval-Augmented In-Context Learning for Dialogue State  Tracking,Brendan King,http://arxiv.org/pdf/2307.01453v1.pdf,2023-07-04,['cs.cl'],2307.01453v1.pdf,"  There has been significant interest in zero and few-shot learning for
dialogue state tracking (DST) due to the high cost of collecting and annotating
task-oriented dialogues. Recent work has demonstrated that in-context learning
requires very little data and zero parameter updates, and even outperforms
trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST,
which advances the state of the art in in-context learning for DST with three
contributions. First, we formulate DST as a Python programming task,
explicitly modeling language coreference as variable reference in Python.
Second, since in-context learning depends highly on the context examples, we
propose a method to retrieve a diverse set of relevant examples to improve
performance. Finally, we introduce a novel re-weighting method during decoding
that takes into account probabilities of competing surface forms, and produces
a more accurate dialogue state prediction. We evaluate our approach using
MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero
and few-shot settings.
"
Generating Efficient Training Data via LLM-based Attribute Manipulation,Letian Peng,http://arxiv.org/pdf/2307.07099v1.pdf,2023-07-14,['cs.cl'],2307.07099v1.pdf,"  In this paper, we propose a novel method, Chain-of-Thoughts Attribute
Manipulation (CoTAM), to guide few-shot learning by carefully crafted data from
Large Language Models (LLMs). The main idea is to create data with changes only
in the attribute targeted by the task. Inspired by facial attribute
manipulation, our approach generates label-switched data by leveraging LLMs to
manipulate task-specific attributes and reconstruct new sentences in a
controlled manner. Instead of conventional latent representation control,
we implement chain-of-thoughts decomposition and reconstruction to adapt the
procedure to LLMs. Extensive results on text classification and other tasks
verify the advantage of CoTAM over other LLM-based text generation methods with
the same number of training examples. Analysis visualizes the attribute
manipulation effectiveness of CoTAM and presents the potential of LLM-guided
learning with even less supervision.
"
Overthinking the Truth: Understanding how Language Models Process False  Demonstrations,Danny Halawi,http://arxiv.org/pdf/2307.09476v1.pdf,2023-07-18,"['cs.lg', 'cs.ai', 'cs.cl']",2307.09476v1.pdf,"  Modern language models can imitate complex patterns through few-shot
learning, enabling them to complete challenging tasks without fine-tuning.
However, imitation can also lead models to reproduce inaccuracies or harmful
content if present in the context. We study harmful imitation through the lens
of a model's internal representations, and identify two related phenomena:
overthinking and false induction heads. The first phenomenon, overthinking,
appears when we decode predictions from intermediate layers, given correct vs.
incorrect few-shot demonstrations. At early layers, both demonstrations induce
similar model behavior, but the behavior diverges sharply at some ""critical
layer"", after which the accuracy given incorrect demonstrations progressively
decreases. The second phenomenon, false induction heads, are a possible
mechanistic cause of overthinking: these are heads in late layers that attend
to and copy false information from previous demonstrations, and whose ablation
reduces overthinking. Beyond scientific understanding, our results suggest that
studying intermediate model computations could be a promising avenue for
understanding and guarding against harmful model behaviors.
"
Does Correction Remain A Problem For Large Language Models?,Xiaowu Zhang,http://arxiv.org/pdf/2308.01776v2.pdf,2023-08-03,['cs.cl'],2308.01776v2.pdf,"  As large language models, such as GPT, continue to advance the capabilities
of natural language processing (NLP), the question arises: does the problem of
correction still persist? This paper investigates the role of correction in the
context of large language models by conducting two experiments. The first
experiment focuses on correction as a standalone task, employing few-shot
learning techniques with GPT-like models for error correction. The second
experiment explores the notion of correction as a preparatory task for other
NLP tasks, examining whether large language models can tolerate and perform
adequately on texts containing certain levels of noise or errors. By addressing
these experiments, we aim to shed light on the significance of correction in
the era of large language models and its implications for various NLP
applications.
"
Thespian: Multi-Character Text Role-Playing Game Agents,Christopher Cui,http://arxiv.org/pdf/2308.01872v1.pdf,2023-08-03,"['cs.ai', 'cs.cl']",2308.01872v1.pdf,"  Text-adventure games and text role-playing games are grand challenges for
reinforcement learning game playing agents. Text role-playing games are
open-ended environments where an agent must faithfully play a particular
character. We consider the distinction between characters and actors, where an
actor agent has the ability to play multiple characters. We present a framework
we call a thespian agent that can learn to emulate multiple characters along
with a soft prompt that can be used to direct it as to which character to play
at any time. We further describe an attention mechanism that allows the agent
to learn new characters that are based on previously learned characters in a
few-shot fashion. We show that our agent outperforms the state-of-the-art agent
framework in multi-character learning and few-shot learning.
"
Meta-learning in healthcare: A survey,Alireza Rafiei,http://arxiv.org/pdf/2308.02877v1.pdf,2023-08-05,"['cs.lg', 'cs.ai']",2308.02877v1.pdf,"  As a subset of machine learning, meta-learning, or learning to learn, aims at
improving the model's capabilities by employing prior knowledge and experience.
A meta-learning paradigm can appropriately tackle the conventional challenges
of traditional learning approaches, such as an insufficient number of samples,
domain shifts, and generalization. These unique characteristics position
meta-learning as a suitable choice for developing influential solutions in
various healthcare contexts, where the available data is often insufficient,
and the data collection methodologies are different. This survey discusses
the broad applications of meta-learning in the healthcare domain to provide insight
into how and where it can address critical healthcare challenges. We first
describe the theoretical foundations and pivotal methods of meta-learning. We
then divide the employed meta-learning approaches in the healthcare domain into
two main categories of multi/single-task learning and many/few-shot learning
and survey the studies. Finally, we highlight the current challenges in
meta-learning research, discuss the potential solutions and provide future
perspectives on meta-learning in healthcare.
"
AutoConv: Automatically Generating Information-seeking Conversations  with Large Language Models,Siheng Li,http://arxiv.org/pdf/2308.06507v1.pdf,2023-08-12,['cs.cl'],2308.06507v1.pdf,"  Information-seeking conversation, which aims to help users gather information
through conversation, has achieved great progress in recent years. However, the
research is still stymied by the scarcity of training data. To alleviate this
problem, we propose AutoConv for synthetic conversation generation, which takes
advantage of the few-shot learning ability and generation capacity of large
language models (LLMs). Specifically, we formulate the conversation generation
problem as a language modeling task, then finetune an LLM with a few human
conversations to capture the characteristics of the information-seeking process
and use it for generating synthetic conversations with high quality.
Experimental results on two frequently-used datasets verify that AutoConv has
substantial improvements over strong baselines and alleviates the dependence on
human annotation. In addition, we also provide several analysis studies to
promote future research.
"
Few-shot Class-incremental Learning: A Survey,Jinghua Zhang,http://arxiv.org/pdf/2308.06764v1.pdf,2023-08-13,"['cs.lg', 'cs.ai']",2308.06764v1.pdf,"  Few-shot Class-Incremental Learning (FSCIL) presents a unique challenge in
machine learning, as it necessitates the continuous learning of new classes
from sparse labeled training samples without forgetting previous knowledge.
While this field has seen recent progress, it remains an active area of
exploration. This paper aims to provide a comprehensive and systematic review
of FSCIL. In our in-depth examination, we delve into various facets of FSCIL,
encompassing the problem definition, the discussion of primary challenges of
unreliable empirical risk minimization and the stability-plasticity dilemma,
general schemes, and relevant problems of incremental learning and few-shot
learning. Besides, we offer an overview of benchmark datasets and evaluation
metrics. Furthermore, we introduce the classification methods in FSCIL from
data-based, structure-based, and optimization-based approaches and the object
detection methods in FSCIL from anchor-free and anchor-based approaches. Beyond
these, we illuminate several promising research directions within FSCIL that
merit further investigation.
"
Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation,William Shen,http://arxiv.org/pdf/2308.07931v1.pdf,2023-07-27,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.ro']",2308.07931v1.pdf,"  Self-supervised and language-supervised image models contain rich knowledge
of the world that is important for generalization. Many robotic tasks, however,
require a detailed understanding of 3D geometry, which is often lacking in 2D
image features. This work bridges this 2D-to-3D gap for robotic manipulation by
leveraging distilled feature fields to combine accurate 3D geometry with rich
semantics from 2D foundation models. We present a few-shot learning method for
6-DOF grasping and placing that harnesses these strong spatial and semantic
priors to achieve in-the-wild generalization to unseen objects. Using features
distilled from a vision-language model, CLIP, we present a way to designate
novel objects for manipulation via free-text natural language, and demonstrate
its ability to generalize to unseen expressions and novel categories of
objects.
"
Refashioning Emotion Recognition Modelling: The Advent of Generalised  Large Models,Zixing Zhang,http://arxiv.org/pdf/2308.11578v1.pdf,2023-08-21,"['cs.cl', 'cs.ai', 'cs.lg']",2308.11578v1.pdf,"  After the inception of emotion recognition or affective computing, it has
increasingly become an active research topic due to its broad applications.
Over the past couple of decades, emotion recognition models have gradually
migrated from statistically shallow models to neural network-based deep models,
which can significantly boost the performance of emotion recognition models and
consistently achieve the best results on different benchmarks. Therefore, in
recent years, deep models have always been considered the first option for
emotion recognition. However, the debut of large language models (LLMs), such
as ChatGPT, has astonished the world with their emergent capabilities of
zero/few-shot learning, in-context learning, chain-of-thought reasoning,
and other abilities never shown in previous deep models. In the present paper,
we comprehensively investigate how LLMs perform in emotion recognition in
terms of diverse aspects, including in-context learning, few-shot learning,
accuracy, generalisation, and explanation. Moreover, we offer some insights and
pose other potential challenges, hoping to ignite broader discussions about
enhancing emotion recognition in the new era of advanced and generalised large
models.
"
Gpachov at CheckThat! 2023: A Diverse Multi-Approach Ensemble for  Subjectivity Detection in News Articles,Georgi Pachov,http://arxiv.org/pdf/2309.06844v1.pdf,2023-09-13,"['cs.cl', 'cs.ai', 'cs.mm']",2309.06844v1.pdf,"  The wide-spread use of social networks has given rise to subjective,
misleading, and even false information on the Internet. Thus, subjectivity
detection can play an important role in ensuring the objectiveness and the
quality of a piece of information. This paper presents the solution built by
the Gpachov team for the CLEF-2023 CheckThat! lab Task 2 on subjectivity
detection. Three different research directions are explored. The first one is
based on fine-tuning a sentence embeddings encoder model and dimensionality
reduction. The second one explores a sample-efficient few-shot learning model.
The third one evaluates fine-tuning a multilingual transformer on an altered
dataset, using data from multiple languages. Finally, the three approaches are
combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on
the test set and achieving 2nd place on the English subtask.
"
"An Empathy-Based Sandbox Approach to Bridge Attitudes, Goals, Knowledge,  and Behaviors in the Privacy Paradox",Chaoran Chen,http://arxiv.org/pdf/2309.14510v1.pdf,2023-09-25,['cs.hc'],2309.14510v1.pdf,"  The ""privacy paradox"" describes the discrepancy between users' privacy
attitudes and their actual behaviors. Mitigating this discrepancy requires
solutions that account for both system opaqueness and users' hesitations in
testing different privacy settings due to fears of unintended data exposure. We
introduce an empathy-based approach that allows users to experience how privacy
behaviors may alter system outcomes in a risk-free sandbox environment from the
perspective of artificially generated personas. To generate realistic personas,
we introduce a novel pipeline that augments the outputs of large language
models using few-shot learning, contextualization, and chain of thoughts. Our
empirical studies demonstrated the adequate quality of generated personas and
highlighted the changes in privacy-related applications (e.g., online
advertising) caused by different personas. Furthermore, users demonstrated
cognitive and emotional empathy towards the personas when interacting with our
sandbox. We offered design implications for downstream applications in
improving user privacy literacy and promoting behavior changes.
"
Boosting In-Context Learning with Factual Knowledge,Jianing Wang,http://arxiv.org/pdf/2309.14771v1.pdf,2023-09-26,"['cs.cl', 'cs.ai']",2309.14771v1.pdf,"  In-Context Learning (ICL) over Large language models (LLMs) aims at solving
previously unseen tasks by conditioning on a few training examples, eliminating
the need for parameter updates and achieving competitive performance. In this
paper, we demonstrate that factual knowledge is imperative for the performance
of ICL in three core facets, i.e., the inherent knowledge learned in LLMs, the
factual knowledge derived from the selected in-context examples, and the
knowledge biases in LLMs for output generation. To unleash the power of LLMs in
few-shot learning scenarios, we introduce a novel Knowledgeable In-Context
Tuning (KICT) framework to further improve the performance of ICL: 1) injecting
factual knowledge into LLMs during continual self-supervised pre-training, 2)
judiciously selecting the examples with high knowledge relevance, and 3)
calibrating the prediction results based on prior knowledge. We evaluate the
proposed approaches on auto-regressive LLMs (e.g., GPT-style models) over
multiple text classification and question answering tasks. Experimental results
demonstrate that KICT substantially outperforms strong baselines, and improves
accuracy by more than 13% and 7% on text classification and question
answering tasks, respectively.
"
Small Visual Language Models can also be Open-Ended Few-Shot Learners,Mohammad Mahdi Derakhshani,http://arxiv.org/pdf/2310.00500v1.pdf,2023-09-30,['cs.cv'],2310.00500v1.pdf,"  We present Self-Context Adaptation (SeCAt), a self-supervised approach that
unlocks open-ended few-shot abilities of small visual language models. Our
proposed adaptation algorithm explicitly learns from symbolic, yet
self-supervised training tasks. Specifically, our approach imitates image
captions in a self-supervised way based on clustering a large pool of images
followed by assigning semantically-unrelated names to clusters. By doing so, we
construct the `self-context', a training signal consisting of interleaved
sequences of image and pseudo-caption pairs and a query image for which the
model is trained to produce the right pseudo-caption. We demonstrate the
performance and flexibility of SeCAt on several multimodal few-shot datasets,
spanning various granularities. By using models with approximately 1B
parameters we outperform the few-shot abilities of much larger models, such as
Frozen and FROMAGe. SeCAt opens new possibilities for research in open-ended
few-shot learning that otherwise requires access to large or proprietary
models.
"
Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation,Matthias Lindemann,http://arxiv.org/pdf/2310.00796v1.pdf,2023-10-01,['cs.cl'],2310.00796v1.pdf,"  Strong inductive biases enable learning from little data and help
generalization outside of the training distribution. Popular neural
architectures such as Transformers lack strong structural inductive biases for
seq2seq NLP tasks on their own. Consequently, they struggle with systematic
generalization beyond the training distribution, e.g. with extrapolating to
longer inputs, even when pre-trained on large amounts of text. We show how a
structural inductive bias can be injected into a seq2seq model by pre-training
it to simulate structural transformations on synthetic data. Specifically, we
inject an inductive bias towards Finite State Transducers (FSTs) into a
Transformer by pre-training it to simulate FSTs given their descriptions. Our
experiments show that our method imparts the desired inductive bias, resulting
in improved systematic generalization and better few-shot learning for FST-like
tasks.
"
TRAM: Benchmarking Temporal Reasoning for Large Language Models,Yuqing Wang,http://arxiv.org/pdf/2310.00835v2.pdf,2023-10-02,['cs.cl'],2310.00835v2.pdf,"  Reasoning about time is essential for understanding the nuances of events
described in natural language. Previous research on this topic has been limited
in scope, characterized by a lack of standardized benchmarks that would allow
for consistent evaluations across different studies. In this paper, we
introduce TRAM, a temporal reasoning benchmark composed of ten datasets,
encompassing various temporal aspects of events such as order, arithmetic,
frequency, and duration, designed to facilitate a comprehensive evaluation of
the temporal reasoning capabilities of large language models (LLMs). We conduct
an extensive evaluation using popular LLMs, such as GPT-4 and Llama2, in both
zero-shot and few-shot learning scenarios. Additionally, we employ BERT-based
models to establish the baseline evaluations. Our findings indicate that these
models still trail human performance in temporal reasoning tasks. It is our
aspiration that TRAM will spur further progress in enhancing the temporal
reasoning abilities of LLMs.
"
Procedural Text Mining with Large Language Models,Anisa Rula,http://arxiv.org/pdf/2310.03376v1.pdf,2023-10-05,"['cs.cl', 'cs.ai', 'cs.it', 'math.it']",2310.03376v1.pdf,"  Recent advancements in the field of Natural Language Processing, particularly
the development of large-scale language models that are pretrained on vast
amounts of knowledge, are creating novel opportunities within the realm of
Knowledge Engineering. In this paper, we investigate the usage of large
language models (LLMs) in both zero-shot and in-context learning settings to
tackle the problem of extracting procedures from unstructured PDF text in an
incremental question-answering fashion. In particular, we leverage the current
state-of-the-art GPT-4 (Generative Pre-trained Transformer 4) model,
accompanied by two variations of in-context learning that involve an ontology
with definitions of procedures and steps and a limited number of few-shot
samples. The findings highlight both the promise of this approach and
the value of the in-context learning customisations. These modifications have
the potential to significantly address the challenge of obtaining sufficient
training data, a hurdle often encountered in deep learning-based Natural
Language Processing techniques for procedure extraction.
"
PrototypeFormer: Learning to Explore Prototype Relationships for  Few-shot Image Classification,Feihong He,http://arxiv.org/pdf/2310.03517v1.pdf,2023-10-05,['cs.cv'],2310.03517v1.pdf,"  Few-shot image classification has received considerable attention for
addressing the challenge of poor classification performance with limited
samples in novel classes. However, numerous studies have employed sophisticated
learning strategies and diversified feature extraction methods to address this
issue. In this paper, we propose our method called PrototypeFormer, which aims
to significantly advance traditional few-shot image classification approaches
by exploring prototype relationships. Specifically, we utilize a transformer
architecture to build a prototype extraction module, aiming to extract class
representations that are more discriminative for few-shot classification.
Additionally, during the model training process, we propose a contrastive
learning-based optimization approach to optimize prototype features in few-shot
learning scenarios. Despite its simplicity, the method performs remarkably
well, with no bells and whistles. We have experimented with our approach on
several popular few-shot image classification benchmark datasets, which shows
that our method outperforms all current state-of-the-art methods. In
particular, our method achieves 97.07% and 90.88% on 5-way 5-shot and 5-way
1-shot tasks of miniImageNet, surpassing the previous state-of-the-art results
by 7.27% and 8.72% in accuracy, respectively. The code will be released later.
"
A Holistic Evaluation of Piano Sound Quality,Monan Zhou,http://arxiv.org/pdf/2310.04722v1.pdf,2023-10-07,"['cs.sd', 'cs.ai', 'eess.as']",2310.04722v1.pdf,"  This paper aims to develop a holistic evaluation method for piano sound
quality to assist in purchasing decisions. Unlike previous studies that focused
on the effect of piano performance techniques on sound quality, this study
evaluates the inherent sound quality of different pianos. To derive quality
evaluation systems, the study uses subjective questionnaires based on a piano
sound quality dataset. The method selects the optimal piano classification
models by comparing the fine-tuning results of different pre-training models of
Convolutional Neural Networks (CNN). To improve the interpretability of the
models, the study applies Equivalent Rectangular Bandwidth (ERB) analysis. The
results reveal that musically trained individuals are better able to
distinguish between the sound quality differences of different pianos. The best
fine-tuned CNN pre-trained backbone achieves a high accuracy of 98.3% as the
piano classifier. However, the dataset is limited, and the audio is sliced to
increase its quantity, resulting in a lack of diversity and balance, so we use
focal loss to reduce the impact of data imbalance. To optimize the method, the
dataset will be expanded, or few-shot learning techniques will be employed in
future research.
"
Argumentative Stance Prediction: An Exploratory Study on Multimodality  and Few-Shot Learning,Arushi Sharma,http://arxiv.org/pdf/2310.07093v1.pdf,2023-10-11,['cs.cl'],2310.07093v1.pdf,"  To advance argumentative stance prediction as a multimodal problem, the First
Shared Task in Multimodal Argument Mining hosted stance prediction in crucial
social topics of gun control and abortion. Our exploratory study attempts to
evaluate the necessity of images for stance prediction in tweets and compare
out-of-the-box text-based large-language models (LLM) in few-shot settings
against fine-tuned unimodal and multimodal models. Our work suggests an
ensemble of fine-tuned text-based language models (0.817 F1-score) outperforms
both the multimodal (0.677 F1-score) and text-based few-shot prediction using a
recent state-of-the-art LLM (0.550 F1-score). In addition to the differences in
performance, our findings suggest that the multimodal models tend to perform
better when image content is summarized as natural language over their native
pixel structure, and that using in-context examples improves the few-shot
performance of LLMs.
"
LLM-augmented Preference Learning from Natural Language,Inwon Kang,http://arxiv.org/pdf/2310.08523v1.pdf,2023-10-12,['cs.cl'],2310.08523v1.pdf,"  Finding preferences expressed in natural language is an important but
challenging task. State-of-the-art(SotA) methods leverage transformer-based
models such as BERT, RoBERTa, etc. and graph neural architectures such as graph
attention networks. Since Large Language Models (LLMs) are equipped to deal
with larger context lengths and have much larger model sizes than these
transformer-based models, we investigate their ability to classify comparative
text directly. This work aims to serve as a first step towards using LLMs for
the CPC task. We design and conduct a set of experiments that format the
classification task into an input prompt for the LLM and a methodology to get a
fixed-format response that can be automatically evaluated. Comparing
performances with existing methods, we see that pre-trained LLMs are able to
outperform the previous SotA models with no fine-tuning involved. Our results
show that the LLMs can consistently outperform the SotA when the target text is
long -- i.e., composed of multiple sentences -- and are still comparable to
the SotA performance in shorter text. We also find that few-shot learning
yields better performance than zero-shot learning.
"
In-Context Learning for Few-Shot Molecular Property Prediction,Christopher Fifty,http://arxiv.org/pdf/2310.08863v1.pdf,2023-10-13,['cs.lg'],2310.08863v1.pdf,"  In-context learning has become an important approach for few-shot learning in
Large Language Models because of its ability to rapidly adapt to new tasks
without fine-tuning model parameters. However, it is restricted to applications
in natural language and inapplicable to other domains. In this paper, we adapt
the concepts underpinning in-context learning to develop a new algorithm for
few-shot molecular property prediction. Our approach learns to predict
molecular properties from a context of (molecule, property measurement) pairs
and rapidly adapts to new properties without fine-tuning. On the FS-Mol and
BACE molecular property prediction benchmarks, we find this method surpasses
the performance of recent meta-learning algorithms at small support sizes and
is competitive with the best methods at large support sizes.
"
In-Context Few-Shot Relation Extraction via Pre-Trained Language Models,Yilmazcan Ozyurt,http://arxiv.org/pdf/2310.11085v1.pdf,2023-10-17,"['cs.cl', 'cs.ai', 'cs.lg']",2310.11085v1.pdf,"  Relation extraction aims at inferring structured human knowledge from textual
documents. State-of-the-art methods based on language models commonly have two
limitations: (1) they require named entities either to be given as input or to
be inferred, which introduces additional noise, and (2) they require human
annotations of documents. As a remedy, we present a novel framework for
in-context few-shot relation extraction via pre-trained language models. To the
best of our knowledge, we are the first to reformulate the relation extraction
task as a tailored in-context few-shot learning paradigm. Thereby, we achieve
crucial benefits in that we eliminate the need for both named entity
recognition and human annotation of documents. Unlike existing methods based on
fine-tuning, our framework is flexible in that it can be easily updated for a
new set of relations without re-training. We evaluate our framework using
DocRED, the largest publicly available dataset for document-level relation
extraction, and demonstrate that our framework achieves state-of-the-art
performance. Finally, our framework allows us to identify missing annotations,
and we thus show that our framework actually performs much better than the
original labels from the development set of DocRED.
"
Group Preference Optimization: Few-Shot Alignment of Large Language  Models,Siyan Zhao,http://arxiv.org/pdf/2310.11523v1.pdf,2023-10-17,"['cs.lg', 'cs.ai', 'cs.cl']",2310.11523v1.pdf,"  Many applications of large language models (LLMs), ranging from chatbots to
creative writing, require nuanced subjective judgments that can differ
significantly across different groups. Existing alignment algorithms can be
expensive when aligning to each group, requiring prohibitive amounts of
group-specific preference data and computation for real-world use cases. We
introduce Group Preference Optimization (GPO), an alignment framework that
steers language models to preferences of individual groups in a few-shot
manner. In GPO, we augment the base LLM with an independent transformer module
trained to predict the preferences of a group for the LLM generations. For
few-shot learning, we parameterize this module as an in-context autoregressive
transformer and train it via meta-learning on several groups. We empirically
validate the efficacy of GPO through rigorous evaluations using LLMs with
varied sizes on three human opinion adaptation tasks. These tasks involve
adapting to the preferences of US demographic groups, global countries, and
individual users. Our results demonstrate that GPO not only aligns models more
accurately but also requires fewer group-specific preferences and less
training and inference compute, outperforming existing strategies
such as in-context steering and fine-tuning methods.
"
CLARA: Multilingual Contrastive Learning for Audio Representation  Acquisition,Kari A Noriy,http://arxiv.org/pdf/2310.11830v2.pdf,2023-10-18,"['cs.sd', 'cs.lg', 'cs.mm', 'eess.as']",2310.11830v2.pdf,"  Multilingual speech processing requires understanding emotions, a task made
difficult by limited labelled data. CLARA minimizes reliance on labelled data,
enhancing generalization across languages. It excels at fostering shared
representations, aiding cross-lingual transfer of speech and emotions, even
with little data. Our approach adeptly captures emotional nuances in speech,
overcoming subjective assessment issues. Using a large multilingual audio
corpus and self-supervised learning, CLARA develops speech representations
enriched with emotions, advancing emotion-aware multilingual speech processing.
  Our method expands the data range using data augmentation, textual embedding
for visual understanding, and transfers knowledge from high- to low-resource
languages. CLARA demonstrates excellent performance in emotion recognition,
language comprehension, and audio benchmarks, excelling in zero-shot and
few-shot learning. It adapts to low-resource languages, marking progress in
multilingual speech representation learning.
"
A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for  Fairer Instruction-Tuned Machine Translation,Giuseppe Attanasio,http://arxiv.org/pdf/2310.12127v2.pdf,2023-10-18,"['cs.cl', 'cs.lg']",2310.12127v2.pdf,"  Recent instruction fine-tuned models can solve multiple NLP tasks when
prompted to do so, with machine translation (MT) being a prominent use case.
However, current research often focuses on standard performance benchmarks,
leaving compelling fairness and ethical considerations behind. In MT, this
might lead to misgendered translations, resulting, among other harms, in the
perpetuation of stereotypes and prejudices. In this work, we address this gap
by investigating whether and to what extent such models exhibit gender bias in
machine translation and how we can mitigate it. Concretely, we compute
established gender bias metrics on the WinoMT corpus from English to German and
Spanish. We discover that IFT models default to male-inflected translations,
even disregarding female occupational stereotypes. Next, using interpretability
methods, we unveil that models systematically overlook the pronoun indicating
the gender of a target occupation in misgendered translations. Finally, based
on this finding, we propose an easy-to-implement and effective bias mitigation
solution based on few-shot learning that leads to significantly fairer
translations.
"
An Exploration of In-Context Learning for Speech Language Model,Ming-Hao Hsu,http://arxiv.org/pdf/2310.12477v1.pdf,2023-10-19,"['eess.as', 'cs.ai', 'cs.cl']",2310.12477v1.pdf,"  Ever since the development of GPT-3 in the natural language processing (NLP)
field, in-context learning (ICL) has played an important role in utilizing
large language models (LLMs). By presenting the LM utterance-label
demonstrations at the input, the LM can accomplish few-shot learning without
relying on gradient descent or requiring explicit modification of its
parameters. This enables the LM to learn and adapt in a black-box manner.
Despite the success of ICL in NLP, little work has explored the possibility of
ICL in speech processing. This study proposes the first exploration of ICL with
a speech LM without text supervision. We first show that the current speech LM
does not have the ICL capability. With the proposed warmup training, the speech
LM can then perform ICL on unseen tasks. In this work, we verify the
feasibility of ICL for speech LM on speech classification tasks.
"
Large Language Models are biased to overestimate profoundness,Eugenio Herrera-Berg,http://arxiv.org/pdf/2310.14422v1.pdf,2023-10-22,['cs.cl'],2310.14422v1.pdf,"  Recent advancements in natural language processing by large language models
(LLMs), such as GPT-4, have been suggested to approach Artificial General
Intelligence. And yet, it is still under dispute whether LLMs possess similar
reasoning abilities to humans. This study evaluates GPT-4 and various other
LLMs in judging the profoundness of mundane, motivational, and pseudo-profound
statements. We found a significant statement-to-statement correlation between
the LLMs and humans, irrespective of the type of statements and the prompting
technique used. However, LLMs systematically overestimate the profoundness of
nonsensical statements, with the exception of Tk-instruct, which uniquely
underestimates the profoundness of statements. Only few-shot learning prompts,
as opposed to chain-of-thought prompting, draw LLMs' ratings closer to humans'.
Furthermore, this work provides insights into the potential biases induced by
Reinforcement Learning from Human Feedback (RLHF), which increases the bias to
overestimate the profoundness of statements.
"
Improving Few-shot Generalization of Safety Classifiers via Data  Augmented Parameter-Efficient Fine-Tuning,Ananth Balashankar,http://arxiv.org/pdf/2310.16959v1.pdf,2023-10-25,['cs.lg'],2310.16959v1.pdf,"  As large language models (LLMs) are widely adopted, new safety issues and
policies emerge, to which existing safety classifiers do not generalize well.
If we have only observed a few examples of violations of a new safety rule, how
can we build a classifier to detect violations? In this paper, we study the
novel setting of domain-generalized few-shot learning for LLM-based text safety
classifiers. Unlike prior few-shot work, these new safety issues can be hard to
uncover and we do not get to choose the few examples. We demonstrate that
existing few-shot techniques do not perform well in this setting, and rather we
propose to do parameter-efficient fine-tuning (PEFT) combined with augmenting
training data based on similar examples in prior existing rules. We empirically
show that our approach of similarity-based data-augmentation + prompt-tuning
(DAPT) consistently outperforms baselines that either do not rely on data
augmentation or on PEFT by 7-17% F1 score in the Social Chemistry moral
judgement and 9-13% AUC in the Toxicity detection tasks, even when the new rule
is loosely correlated with existing ones.
"
Retrofitting Light-weight Language Models for Emotions using Supervised  Contrastive Learning,Sapan Shah,http://arxiv.org/pdf/2310.18930v1.pdf,2023-10-29,['cs.cl'],2310.18930v1.pdf,"  We present a novel retrofitting method to induce emotion aspects into
pre-trained language models (PLMs) such as BERT and RoBERTa. Our method updates
pre-trained network weights using contrastive learning so that the text
fragments exhibiting similar emotions are encoded nearby in the representation
space, and the fragments with different emotion content are pushed apart. While
doing so, it also ensures that the linguistic knowledge already present in PLMs
is not inadvertently perturbed. The language models retrofitted by our method,
i.e., BERTEmo and RoBERTaEmo, produce emotion-aware text representations, as
evaluated through different clustering and retrieval metrics. For the
downstream tasks on sentiment analysis and sarcasm detection, they perform
better than their pre-trained counterparts (about 1% improvement in F1-score)
and other existing approaches. Additionally, a more significant boost in
performance is observed for the retrofitted models over pre-trained ones in
the few-shot learning setting.
"
Nexus at ArAIEval Shared Task: Fine-Tuning Arabic Language Models for  Propaganda and Disinformation Detection,Yunze Xiao,http://arxiv.org/pdf/2311.03184v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'cs.si', '68t50', 'f.2.2; i.2.7']",2311.03184v1.pdf,"  The spread of disinformation and propagandistic content poses a threat to
societal harmony, undermining informed decision-making and trust in reliable
sources. Online platforms often serve as breeding grounds for such content, and
malicious actors exploit the vulnerabilities of audiences to shape public
opinion. Although there have been research efforts aimed at the automatic
identification of disinformation and propaganda in social media content, there
remain challenges in terms of performance. The ArAIEval shared task aims to
further research on these particular issues within the context of the Arabic
language. In this paper, we discuss our participation in these shared tasks. We
competed in subtasks 1A and 2A, where our submitted system secured positions
9th and 10th, respectively. Our experiments consist of fine-tuning transformer
models and using zero- and few-shot learning with GPT-4.
"
Multilingual Mathematical Autoformalization,Albert Q. Jiang,http://arxiv.org/pdf/2311.03755v1.pdf,2023-11-07,"['cs.cl', 'cs.lg']",2311.03755v1.pdf,"  Autoformalization is the task of translating natural language materials into
machine-verifiable formalisations. Progress in autoformalization research is
hindered by the lack of a sizeable dataset consisting of informal-formal pairs
expressing the same essence. Existing methods tend to circumvent this challenge
by manually curating small corpora or using few-shot learning with large
language models. But these methods suffer from data scarcity and formal
language acquisition difficulty. In this work, we create $\texttt{MMA}$, a
large, flexible, multilingual, and multi-domain dataset of informal-formal
pairs, by using a language model to translate in the reverse direction, that
is, from formal mathematical statements into corresponding informal ones.
Experiments show that language models fine-tuned on $\texttt{MMA}$ produce
$16-18\%$ of statements acceptable with minimal corrections on the
$\texttt{miniF2F}$ and $\texttt{ProofNet}$ benchmarks, up from $0\%$ with the
base model. We demonstrate that fine-tuning on multilingual formal data results
in more capable autoformalization models even when deployed on monolingual
tasks.
"
Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge  Transfer Networks,Igor Shalyminov,http://arxiv.org/pdf/1910.01302v1.pdf,2019-10-03,"['cs.cl', 'i.2.7']",1910.01302v1.pdf,"  Goal-oriented dialogue systems are now being widely adopted in industry where
it is of key importance to maintain a rapid prototyping cycle for new products
and domains. Data-driven dialogue system development has to be adapted to meet
this requirement --- therefore, reducing the amount of data and annotations
necessary for training such systems is a central research problem.
  In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet),
a state-of-the-art approach to goal-oriented dialogue generation which only
uses a few example dialogues (i.e. few-shot learning), none of which has to be
annotated. We achieve this by performing a 2-stage training. Firstly, we
perform unsupervised dialogue representation pre-training on a large source of
goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at
the transfer stage, we train DiKTNet using this representation together with 2
other textual knowledge sources with different levels of generality: ELMo
encoder and the main dataset's source domains.
  Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate
our model on it in terms of BLEU and Entity F1 scores, and show that our
approach significantly and consistently improves upon a series of baseline
models as well as over the previous state-of-the-art dialogue generation model,
ZSDG. The improvement upon the latter --- up to 10% in Entity F1 and the
average of 3% in BLEU score --- is achieved using only the equivalent of 10% of
ZSDG's in-domain training data.
"
Meta-Learning with Dynamic-Memory-Based Prototypical Network for  Few-Shot Event Detection,Shumin Deng,http://arxiv.org/pdf/1910.11621v2.pdf,2019-10-25,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",1910.11621v2.pdf,"  Event detection (ED), a sub-task of event extraction, involves identifying
triggers and categorizing event mentions. Existing methods primarily rely upon
supervised learning and require large-scale labeled event datasets which are
unfortunately not readily available in many real-life applications. In this
paper, we consider and reformulate the ED task with limited labeled data as a
Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical
Network (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learn
better prototypes for event types, but also produce more robust sentence
encodings for event mentions. Unlike vanilla prototypical networks, which
simply compute event prototypes by averaging and consume event mentions only
once, our model is more robust and can distill contextual information from
event mentions multiple times thanks to the multi-hop mechanism of DMNs. The
experiments show that DMB-PN not only deals with sample
scarcity better than a series of baseline models but also performs more
robustly when the variety of event types is relatively large and the instance
quantity is extremely small.
"
Spirit Distillation: Precise Real-time Semantic Segmentation of Road  Scenes with Insufficient Data,Zhiyuan Wu,http://arxiv.org/pdf/2103.13733v2.pdf,2021-03-25,"['cs.cv', 'cs.ai', 'cs.lg']",2103.13733v2.pdf,"  Semantic segmentation of road scenes is one of the key technologies for
realizing autonomous driving scene perception, and the effectiveness of deep
Convolutional Neural Networks(CNNs) for this task has been demonstrated.
State-of-the-art CNNs for semantic segmentation suffer from excessive computations
as well as large-scale training data requirement. Inspired by the ideas of
Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge
distillation, we propose a new knowledge distillation method for cross-domain
knowledge transference and efficient data-insufficient network training, named
Spirit Distillation (SD), which allows the student network to mimic the teacher
network to extract general features, so that a compact and accurate student
network can be trained for real-time semantic segmentation of road scenes.
Then, in order to further alleviate the trouble of insufficient data and
improve the robustness of the student, an Enhanced Spirit Distillation (ESD)
method is proposed, which exploits a more comprehensive general feature
extraction capability by considering images from both the target and
the proximity domains as input. To our knowledge, this paper is a pioneering
work on the application of knowledge distillation to few-shot learning.
Persuasive experiments conducted on Cityscapes semantic segmentation with the
prior knowledge transferred from COCO2017 and KITTI demonstrate that our
methods can train a better student network (mIOU and high-precision accuracy
boost by 1.4% and 8.2% respectively, with 78.2% segmentation variance) with
only 41.8% FLOPs (see Fig. 1).
"
AMP0: Species-Specific Prediction of Anti-microbial Peptides using Zero  and Few Shot Learning,Sadaf Gull,http://arxiv.org/pdf/1911.06106v1.pdf,2019-10-28,"['q-bio.bm', 'cs.lg', 'stat.ml']",1911.06106v1.pdf,"  The evolution of drug-resistant microbial species is one of the major
challenges to global health. The development of new antimicrobial treatments
such as antimicrobial peptides needs to be accelerated to combat this threat.
However, the discovery of novel antimicrobial peptides is hampered by
low-throughput biochemical assays. Computational techniques can be used for
rapid screening of promising antimicrobial peptide candidates prior to testing
in the wet lab. The vast majority of existing antimicrobial peptide predictors
are non-targeted in nature, i.e., they can predict whether a given peptide
sequence is antimicrobial, but they are unable to predict whether the sequence
can target a particular microbial species. In this work, we have developed a
targeted antimicrobial peptide activity predictor that can predict whether a
peptide is effective against a given microbial species or not. This has been
made possible through zero-shot and few-shot machine learning. The proposed
predictor called AMP0 takes in the peptide amino acid sequence and any
N/C-termini modifications together with the genomic sequence of a target
microbial species to generate targeted predictions. It is important to note
that the proposed method can generate predictions for species that are not part
of its training set. The accuracy of predictions for novel test species can be
further improved by providing a few example peptides for that species. Our
computational cross-validation results show that the proposed scheme is
particularly effective for targeted antimicrobial prediction in comparison to
existing approaches and can be used for screening potential antimicrobial
peptides in a targeted manner especially for cases in which the number of
training examples is small. The webserver of the method is available at
http://ampzero.pythonanywhere.com.
"
Brain-inspired global-local learning incorporated with neuromorphic  computing,Yujie Wu,http://arxiv.org/pdf/2006.03226v3.pdf,2020-06-05,"['cs.ne', 'cs.ai', 'q-bio.nc']",2006.03226v3.pdf,"  Two main routes of learning methods exist at present including error-driven
global learning and neuroscience-oriented local learning. Integrating them into
one network may provide complementary learning capabilities for versatile
learning scenarios. At the same time, neuromorphic computing holds great
promise, but still needs plenty of useful algorithms and algorithm-hardware
co-designs for exploiting the advantages. Here, we report a neuromorphic hybrid
learning model by introducing a brain-inspired meta-learning paradigm and a
differentiable spiking model incorporating neuronal dynamics and synaptic
plasticity. It can meta-learn local plasticity and receive top-down supervision
information for multiscale synergic learning. We demonstrate the advantages of
this model in multiple different tasks, including few-shot learning, continual
learning, and fault-tolerance learning in neuromorphic vision sensors. It
achieves significantly higher performance than single-learning methods, and
shows promise in empowering a revolution in neuromorphic applications. We further
implemented the hybrid model in the Tianjic neuromorphic platform by exploiting
algorithm-hardware co-designs and proved that the model can fully utilize
the neuromorphic many-core architecture to develop a hybrid computation paradigm.
"
Direct multimodal few-shot learning of speech and images,Leanne Nortje,http://arxiv.org/pdf/2012.05680v2.pdf,2020-12-10,"['cs.cl', 'cs.sd', 'eess.as']",2012.05680v2.pdf,"  We propose direct multimodal few-shot models that learn a shared embedding
space of spoken words and images from only a few paired examples. Imagine an
agent is shown an image along with a spoken word describing the object in the
picture, e.g. pen, book and eraser. After observing a few paired examples of
each class, the model is asked to identify the ""book"" in a set of unseen
pictures. Previous work used a two-step indirect approach relying on learned
unimodal representations: speech-speech and image-image comparisons are
performed across the support set of given speech-image pairs. We propose two
direct models which instead learn a single multimodal space where inputs from
different modalities are directly comparable: a multimodal triplet network
(MTriplet) and a multimodal correspondence autoencoder (MCAE). To train these
direct models, we mine speech-image pairs: the support set is used to pair up
unlabelled in-domain speech and images. In a speech-to-image digit matching
task, direct models outperform indirect models, with the MTriplet achieving the
best multimodal five-shot accuracy. We show that the improvements are due to
the combination of unsupervised and transfer learning in the direct models, and
the absence of two-step compounding errors.
"
What Makes Good In-Context Examples for GPT-$3$?,Jiachang Liu,http://arxiv.org/pdf/2101.06804v1.pdf,2021-01-17,['cs.cl'],2101.06804v1.pdf,"  GPT-$3$ has attracted lots of attention due to its superior performance
across a wide range of NLP tasks, especially with its powerful and versatile
in-context few-shot learning ability. Despite its success, we found that the
empirical results of GPT-$3$ depend heavily on the choice of in-context
examples. In this work, we investigate whether there are more effective
strategies for judiciously selecting in-context examples (relative to random
sampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the
recent success of leveraging a retrieval module to augment large-scale neural
network models, we propose to retrieve examples that are semantically-similar
to a test sample to formulate its corresponding prompt. Intuitively, the
in-context examples selected with such a strategy may serve as more informative
inputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposed
approach on several natural language understanding and generation benchmarks,
where the retrieval-based prompt selection approach consistently outperforms
the random baseline. Moreover, it is observed that the sentence encoders
fine-tuned on task-related datasets yield even more helpful retrieval results.
Notably, significant gains are observed on tasks such as table-to-text
generation (41.9% on the ToTTo dataset) and open-domain question answering
(45.5% on the NQ dataset). We hope our investigation could help understand the
behaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhance
their few-shot capabilities.
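As a rough illustration of the retrieval-based selection strategy described above, the sketch below picks the k training pairs closest to a test input under cosine similarity and concatenates them into a prompt. The embed() function is a hypothetical stand-in for the (possibly task-fine-tuned) sentence encoder, and the prompt template is an illustrative choice rather than the paper's format.

```python
import numpy as np

def embed(texts):
    """Hypothetical sentence encoder: one L2-normalised vector per text.
    Random vectors keyed on the text are used only so the example runs."""
    vecs = np.stack([np.random.default_rng(abs(hash(t)) % 2**32).normal(size=384)
                     for t in texts])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def build_prompt(test_input, train_pairs, k=4):
    """Retrieve the k most similar training examples (cosine similarity)
    and concatenate them in front of the test input as in-context examples."""
    sims = embed([x for x, _ in train_pairs]) @ embed([test_input])[0]
    top = np.argsort(-sims)[:k]
    demos = "\n".join(f"Input: {train_pairs[i][0]}\nOutput: {train_pairs[i][1]}"
                      for i in top)
    return f"{demos}\nInput: {test_input}\nOutput:"

train = [("the film was wonderful", "positive"),
         ("a dull, tedious plot", "negative"),
         ("great acting and pacing", "positive"),
         ("i walked out halfway through", "negative")]
print(build_prompt("an absolutely delightful movie", train, k=2))
```

A real system would send the returned string to the language model; with the random stand-in encoder the retrieved neighbours are of course arbitrary.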
"
Modelling Latent Translations for Cross-Lingual Transfer,Edoardo Maria Ponti,http://arxiv.org/pdf/2107.11353v1.pdf,2021-07-23,['cs.cl'],2107.11353v1.pdf,"  While achieving state-of-the-art results in multiple tasks and languages,
translation-based cross-lingual transfer is often overlooked in favour of
massively multilingual pre-trained encoders. Arguably, this is due to its main
limitations: 1) translation errors percolating to the classification phase and
2) the insufficient expressiveness of the maximum-likelihood translation. To
remedy this, we propose a new technique that integrates both steps of the
traditional pipeline (translation and classification) into a single model, by
treating the intermediate translations as a latent random variable. As a
result, 1) the neural machine translation system can be fine-tuned with a
variant of Minimum Risk Training where the reward is the accuracy of the
downstream task classifier. Moreover, 2) multiple samples can be drawn to
approximate the expected loss across all possible translations during
inference. We evaluate our novel latent translation-based model on a series of
multilingual NLU tasks, including commonsense reasoning, paraphrase
identification, and natural language inference. We report gains for both
zero-shot and few-shot learning setups, up to 2.7 accuracy points on average,
which are even more prominent for low-resource languages (e.g., Haitian
Creole). Finally, we carry out in-depth analyses comparing different underlying
NMT models and assessing the impact of alternative translations on the
downstream performance.
"
ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback,Mike Wu,http://arxiv.org/pdf/2107.14035v2.pdf,2021-07-23,"['cs.cy', 'cs.lg']",2107.14035v2.pdf,"  High-quality computer science education is limited by the difficulty of
providing instructor feedback to students at scale. While this feedback could
in principle be automated, supervised approaches to predicting the correct
feedback are bottlenecked by the intractability of annotating large quantities
of student code. In this paper, we instead frame the problem of providing
feedback as few-shot classification, where a meta-learner adapts to give
feedback to student code on a new programming question from just a few examples
annotated by instructors. Because data for meta-training is limited, we propose
a number of amendments to the typical few-shot learning framework, including
task augmentation to create synthetic tasks, and additional side information to
build stronger priors about each task. These additions are combined with a
transformer architecture to embed discrete sequences (e.g. code) to a
prototypical representation of a feedback class label. On a suite of few-shot
natural language processing tasks, we match or exceed state-of-the-art
performance. Then, on a collection of student solutions to exam questions from
an introductory university course, we show that our approach reaches an average
precision of 88% on unseen questions, surpassing the 82% precision of teaching
assistants. Our approach was successfully deployed to deliver feedback to
16,000 student exam-solutions in a programming course offered by a tier 1
university. This is, to the best of our knowledge, the first successful
deployment of machine learning-based feedback for open-ended student code.
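The prototypical classification step at the core of this kind of few-shot approach can be sketched as follows (a minimal illustration under stated assumptions, not the paper's system): embeddings of the few annotated examples are averaged per feedback class, and a query is assigned to the nearest prototype. The random tensors stand in for transformer embeddings of student code; task augmentation and side information are omitted.

```python
import torch

def prototype_classify(support_emb, support_labels, query_emb):
    """support_emb: (N, d) embeddings of annotated examples,
    support_labels: (N,) integer class ids, query_emb: (Q, d).
    Returns one predicted class id per query via the nearest class prototype."""
    classes = support_labels.unique()
    protos = torch.stack([support_emb[support_labels == c].mean(0) for c in classes])
    dists = torch.cdist(query_emb, protos)          # (Q, C) Euclidean distances
    return classes[dists.argmin(dim=1)]

# Toy example with random "embeddings" standing in for transformer outputs.
torch.manual_seed(0)
support = torch.randn(6, 8)
labels = torch.tensor([0, 0, 1, 1, 2, 2])
queries = torch.randn(3, 8)
print(prototype_classify(support, labels, queries))
```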
"
Robust Retrieval Augmented Generation for Zero-shot Slot Filling,Michael Glass,http://arxiv.org/pdf/2108.13934v2.pdf,2021-08-31,"['cs.cl', 'cs.ai', 'cs.ir']",2108.13934v2.pdf,"  Automatically inducing high quality knowledge graphs from a given collection
of documents still remains a challenging problem in AI. One way to make headway
for this problem is through advancements in a related task known as slot
filling. In this task, given an entity query in the form of [Entity, Slot, ?], a
system is asked to fill the slot by generating or extracting the missing value
exploiting evidence extracted from relevant passage(s) in the given document
collection. The recent works in the field try to solve this task in an
end-to-end fashion using retrieval-based language models. In this paper, we
present a novel approach to zero-shot slot filling that extends dense passage
retrieval with hard negatives and robust training procedures for retrieval
augmented generation models. Our model reports large improvements on both T-REx
and zsRE slot filling datasets, improving both passage retrieval and slot value
generation, and ranking at the top-1 position in the KILT leaderboard.
Moreover, we demonstrate the robustness of our system showing its domain
adaptation capability on a new variant of the TACRED dataset for slot filling,
through a combination of zero/few-shot learning. We release the source code and
pre-trained models.
"
Template-free Prompt Tuning for Few-shot NER,Ruotian Ma,http://arxiv.org/pdf/2109.13532v3.pdf,2021-09-28,"['cs.cl', 'cs.ai']",2109.13532v3.pdf,"  Prompt-based methods have been successfully applied in sentence-level
few-shot learning tasks, mostly owing to the sophisticated design of templates
and label words. However, when applied to token-level labeling tasks such as
NER, it would be time-consuming to enumerate the template queries over all
potential entity spans. In this work, we propose a more elegant method to
reformulate NER tasks as LM problems without any templates. Specifically, we
discard the template construction process while maintaining the word prediction
paradigm of pre-training models to predict a class-related pivot word (or label
word) at the entity position. Meanwhile, we also explore principled ways to
automatically search for appropriate label words that the pre-trained models
can easily adapt to. While avoiding the complicated template-based process, the
proposed LM objective also reduces the gap between the objectives used in
pre-training and fine-tuning, and thus better benefits few-shot performance.
Experimental results demonstrate the effectiveness of the proposed method over
bert-tagger and the template-based method under the few-shot setting.
Moreover, the decoding speed of the proposed method is up to 1930.12 times
faster than the template-based method.
"
RAFT: A Real-World Few-Shot Text Classification Benchmark,Neel Alex,http://arxiv.org/pdf/2109.14076v3.pdf,2021-09-28,"['cs.cl', 'cs.ai', 'cs.lg']",2109.14076v3.pdf,"  Large pre-trained language models have shown promise for few-shot learning,
completing text-based tasks given only a few task-specific examples. Will
models soon solve classification tasks that have so far been reserved for human
research assistants? Existing benchmarks are not designed to measure progress
in applied settings, and so don't directly answer this question. The RAFT
benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring
tasks and uses an evaluation setup that mirrors deployment. Baseline
evaluations on RAFT reveal areas current techniques struggle with: reasoning
over long texts and tasks with many classes. Human baselines show that some
classification tasks are difficult for non-expert humans, reflecting that
real-world value sometimes depends on domain expertise. Yet even non-expert
human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets
and leaderboard will track which model improvements translate into real-world
benefits at https://raft.elicit.org .
"
LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based  on Prompt Tuning of T5,Chengwei Qin,http://arxiv.org/pdf/2110.07298v3.pdf,2021-10-14,['cs.cl'],2110.07298v3.pdf,"  Existing approaches to lifelong language learning rely on plenty of labeled
data for learning a new task, which is hard to obtain in most real scenarios.
Considering that humans can continually learn new tasks from a handful of
examples, we expect the models also to be able to generalize well on new
few-shot tasks without forgetting the previous ones. In this work, we define
this more challenging yet practical problem as Lifelong Few-shot Language
Learning (LFLL) and propose a unified framework for it based on prompt tuning
of T5. Our framework called LFPT5 takes full advantage of PT's strong few-shot
learning ability, and simultaneously trains the model as a task solver and a
data generator. Before learning a new domain of the same task type, LFPT5
generates pseudo (labeled) samples of previously learned domains, and later
gets trained on those samples to alleviate forgetting of previous knowledge as
it learns the new domain. In addition, a KL divergence loss is minimized to
achieve label consistency between the previous and the current model. While
adapting to a new task type, LFPT5 includes and tunes additional prompt
embeddings for the new task. With extensive experiments, we demonstrate that
LFPT5 can be applied to various different types of tasks and significantly
outperform previous methods in different LFLL settings.
"
MetaICL: Learning to Learn In Context,Sewon Min,http://arxiv.org/pdf/2110.15943v2.pdf,2021-10-29,"['cs.cl', 'cs.ai']",2110.15943v2.pdf,"  We introduce MetaICL (Meta-training for In-Context Learning), a new
meta-training framework for few-shot learning where a pretrained language model
is tuned to do in-context learning on a large set of training tasks. This
meta-training enables the model to more effectively learn a new task in context
at test time, by simply conditioning on a few training examples with no
parameter updates or task-specific templates. We experiment on a large, diverse
collection of tasks consisting of 142 NLP datasets including classification,
question answering, natural language inference, paraphrase detection and more,
across seven different meta-training/target splits. MetaICL outperforms a range
of baselines including in-context learning without meta-training and multi-task
learning followed by zero-shot transfer. We find that the gains are
particularly significant for target tasks that have domain shifts from the
meta-training tasks, and that using a diverse set of the meta-training tasks is
key to improvements. We also show that MetaICL approaches (and sometimes beats)
the performance of models fully finetuned on the target task, and outperforms
much bigger models with nearly 8x parameters. Finally, we show that MetaICL is
complementary to human-written instructions, and the best performance can be
achieved by combining both approaches.
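One meta-training update in the spirit described above might look like the sketch below: a few demonstrations from a training task are concatenated with a new input, and the language-modelling loss is computed only on the target tokens, mimicking in-context learning at test time. A small GPT-2 from Hugging Face is used purely as a stand-in; the actual model sizes, task formatting, and hyperparameters differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; the real setup uses a different model and many training tasks.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

def meta_train_step(demos, x, y):
    """demos: list of (input, output) pairs from one training task;
    x, y: a held-out example from the same task. The loss is computed
    only on the target tokens (context tokens are masked with -100)."""
    context = "".join(f"{dx}\n{dy}\n\n" for dx, dy in demos) + f"{x}\n"
    ctx_ids = tok(context, return_tensors="pt").input_ids
    tgt_ids = tok(y + tok.eos_token, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.size(1)] = -100      # ignore context tokens in the loss
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optim.step()
    optim.zero_grad()
    return loss.item()

demos = [("Review: great film. Sentiment?", "positive"),
         ("Review: terrible pacing. Sentiment?", "negative")]
print(meta_train_step(demos, "Review: loved every minute. Sentiment?", "positive"))
```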
"
Scaling ASR Improves Zero and Few Shot Learning,Alex Xiao,http://arxiv.org/pdf/2111.05948v3.pdf,2021-11-10,"['cs.cl', 'cs.sd', 'eess.as']",2111.05948v3.pdf,"  With 4.5 million hours of English speech from 10 different sources across 120
countries and models of up to 10 billion parameters, we explore the frontiers
of scale for automatic speech recognition. We propose data selection techniques
to efficiently scale training data to find the most valuable samples in massive
datasets. To efficiently scale model sizes, we leverage various optimizations
such as sparse transducer loss and model sharding. By training 1-10B parameter
universal English ASR models, we push the limits of speech recognition
performance across many domains. Furthermore, our models learn powerful speech
representations with zero and few-shot capabilities on novel domains and styles
of speech, exceeding previous results across multiple in-house and public
benchmarks. For speakers with disorders due to brain damage, our best zero-shot
and few-shot models achieve 22% and 60% relative improvement on the AphasiaBank
test set, respectively, while realizing the best performance on public social
media videos. Furthermore, the same universal model reaches equivalent
performance with 500x less in-domain data on the SPGISpeech financial-domain
dataset.
"
PointCLIP: Point Cloud Understanding by CLIP,Renrui Zhang,http://arxiv.org/pdf/2112.02413v1.pdf,2021-12-04,"['cs.cv', 'cs.ai', 'cs.ro']",2112.02413v1.pdf,"  Recently, zero-shot and few-shot learning via Contrastive Vision-Language
Pre-training (CLIP) have shown inspirational performance on 2D visual
recognition, which learns to match images with their corresponding texts in
open-vocabulary settings. However, it remains underexplored whether CLIP,
pre-trained on large-scale image-text pairs in 2D, can be generalized to 3D
recognition. In this paper, we show that such a setting is feasible by proposing
PointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D
category texts. Specifically, we encode a point cloud by projecting it into
multi-view depth maps without rendering, and aggregate the view-wise zero-shot
prediction to achieve knowledge transfer from 2D to 3D. On top of that, we
design an inter-view adapter to better extract the global feature and
adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in
2D. By just fine-tuning the lightweight adapter in the few-shot settings, the
performance of PointCLIP could be largely improved. In addition, we observe the
complementary property between PointCLIP and classical 3D-supervised networks.
By simple ensembling, PointCLIP boosts baseline's performance and even
surpasses state-of-the-art models. Therefore, PointCLIP is a promising
alternative for effective 3D point cloud understanding via CLIP under low
resource cost and data regime. We conduct thorough experiments on
widely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to
demonstrate the effectiveness of PointCLIP. The code is released at
https://github.com/ZrrSkywalker/PointCLIP.
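A simplified sketch of the multi-view zero-shot aggregation idea follows: the point cloud is projected into several depth maps, each view is scored against class-name text features, and the view-wise logits are averaged. The depth projection and the encode_image / encode_text placeholders are crude stand-ins, not the paper's projection scheme or CLIP itself.

```python
import numpy as np

def depth_maps(points, n_views=6, res=32):
    """Very rough multi-view depth projection: rotate the cloud about the
    vertical axis and keep the nearest depth per pixel for each view."""
    maps = []
    for v in range(n_views):
        a = 2 * np.pi * v / n_views
        rot = np.array([[np.cos(a), 0, np.sin(a)],
                        [0, 1, 0],
                        [-np.sin(a), 0, np.cos(a)]])
        p = points @ rot.T
        img = np.full((res, res), np.inf)
        xy = ((p[:, :2] + 1) / 2 * (res - 1)).astype(int).clip(0, res - 1)
        for (x, y), z in zip(xy, p[:, 2]):
            img[y, x] = min(img[y, x], z)
        maps.append(np.where(np.isinf(img), 0.0, img))
    return maps

def encode_image(depth):   # placeholder for a CLIP visual encoder
    v = depth.flatten()
    return v / (np.linalg.norm(v) + 1e-8)

def encode_text(prompts):  # placeholder for a CLIP text encoder
    rng = np.random.default_rng(0)
    t = rng.normal(size=(len(prompts), 32 * 32))
    return t / np.linalg.norm(t, axis=1, keepdims=True)

points = np.random.default_rng(1).uniform(-1, 1, size=(512, 3))
text_feats = encode_text([f"a depth map of a {c}" for c in ["chair", "table", "lamp"]])
logits = np.mean([text_feats @ encode_image(d) for d in depth_maps(points)], axis=0)
print("predicted class index:", int(np.argmax(logits)))
```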
"
A Survey of Deep Learning for Low-Shot Object Detection,Qihan Huang,http://arxiv.org/pdf/2112.02814v4.pdf,2021-12-06,"['cs.cv', 'cs.ai']",2112.02814v4.pdf,"  Object detection has achieved a huge breakthrough with deep neural networks
and massive annotated data. However, current detection methods cannot be
directly transferred to the scenario where the annotated data is scarce due to
the severe overfitting problem. Although few-shot learning and zero-shot
learning have been extensively explored in the field of image classification,
it is indispensable to design new methods for object detection in the
data-scarce scenario since object detection has an additional challenging
localization task. Low-Shot Object Detection (LSOD) is an emerging research
topic of detecting objects from a few or even no annotated samples, consisting
of One-Shot Object Detection (OSOD), Few-Shot Object Detection (FSOD) and
Zero-Shot Object Detection (ZSD). This survey provides a comprehensive review
of LSOD methods. First, we propose a thorough taxonomy of LSOD methods and
analyze them systematically, comprising some extensional topics of LSOD
(semi-supervised LSOD, weakly-supervised LSOD, and incremental LSOD). Then, we
indicate the pros and cons of current LSOD methods with a comparison of their
performance. Finally, we discuss the challenges and promising directions of
LSOD to provide guidance for future works.
"
"Vision-Language Intelligence: Tasks, Representation Learning, and Large  Models",Feng Li,http://arxiv.org/pdf/2203.01922v1.pdf,2022-03-03,"['cs.cv', 'cs.ai', 'cs.cl']",2203.01922v1.pdf,"  This paper presents a comprehensive survey of vision-language (VL)
intelligence from the perspective of time. This survey is inspired by the
remarkable progress in both computer vision and natural language processing,
and recent trends shifting from single modality processing to multiple modality
comprehension. We summarize the development in this field into three time
periods, namely task-specific methods, vision-language pre-training (VLP)
methods, and larger models empowered by large-scale weakly-labeled data. We
first take some common VL tasks as examples to introduce the development of
task-specific methods. Then we focus on VLP methods and comprehensively review
key components of the model structures and training methods. After that, we
show how recent work utilizes large-scale raw image-text data to learn
language-aligned visual representations that generalize better on zero or few
shot learning tasks. Finally, we discuss some potential future trends towards
modality cooperation, unified representation, and knowledge incorporation. We
believe that this review will be of help for researchers and practitioners of
AI and ML, especially those interested in computer vision and natural language
processing.
"
Rethinking Task Sampling for Few-shot Vision-Language Transfer Learning,Zhenhailong Wang,http://arxiv.org/pdf/2203.04904v3.pdf,2022-03-09,"['cs.mm', 'cs.cl', 'cs.cv']",2203.04904v3.pdf,"  Despite achieving state-of-the-art zero-shot performance, existing
vision-language models still fall short of few-shot transfer ability on
domain-specific problems. Classical fine-tuning often fails to prevent highly
expressive models from exploiting spurious correlations. Although
model-agnostic meta-learning (MAML) presents as a natural alternative for
few-shot transfer learning, the expensive computation due to implicit
second-order optimization limits its use on large-scale vision-language models
such as CLIP. While much literature has been devoted to exploring alternative
optimization strategies, we identify another aspect essential to effective
few-shot transfer learning, task sampling, which has previously been viewed only
as part of data pre-processing in MAML. To show the impact of task sampling, we
propose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), which
differs from classical fine-tuning only in uniformly sampling multiple tasks.
Despite its simplicity, we show that MAMF consistently outperforms classical
fine-tuning on five few-shot vision-language classification tasks. We further
show that the effectiveness of the bi-level optimization in MAML is highly
sensitive to the zero-shot performance of a task in the context of few-shot
vision-language classification. The goal of this paper is to provide new
insights on what makes few-shot learning work, and encourage more research into
investigating better task sampling strategies.
"
mGPT: Few-Shot Learners Go Multilingual,Oleh Shliazhko,http://arxiv.org/pdf/2204.07580v2.pdf,2022-04-15,"['cs.cl', 'cs.ai', '68-06, 68-04, 68t50, 68t01', 'i.2; i.2.7']",2204.07580v2.pdf,"  Recent studies report that autoregressive language models can successfully
solve many NLP tasks via zero- and few-shot learning paradigms, which opens up
new possibilities for using the pre-trained language models. This paper
introduces two autoregressive GPT-like models with 1.3 billion and 13 billion
parameters trained on 60 languages from 25 language families using Wikipedia
and Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using
GPT-2 sources and the sparse attention mechanism; Deepspeed and Megatron
frameworks allow us to parallelize the training and inference steps
effectively. The resulting models show performance on par with the recently
released XGLM models by Facebook, covering more languages and enhancing NLP
possibilities for low resource languages of CIS countries and Russian small
nations. We detail the motivation for the choices of the architecture design,
thoroughly describe the data preparation pipeline, and train five small
versions of the model to choose the optimal multilingual tokenization
strategy. We measure the model perplexity in all covered languages and evaluate
it on a wide spectrum of multilingual tasks, including classification,
generation, sequence labeling, and knowledge probing. The models were evaluated
in zero-shot and few-shot settings. Furthermore, on the classification tasks we
compared our models with the state-of-the-art multilingual model XGLM. The
source code and the mGPT XL model are publicly released.
"
In-BoXBART: Get Instructions into Biomedical Multi-Task Learning,Mihir Parmar,http://arxiv.org/pdf/2204.07600v1.pdf,2022-04-15,['cs.cl'],2204.07600v1.pdf,"  Single-task models have proven pivotal in solving specific tasks; however,
they have limitations in real-world applications where multi-tasking is
necessary and domain shifts are exhibited. Recently, instructional prompts have
shown significant improvement towards multi-task generalization; however, the
effect of instructional prompts and Multi-Task Learning (MTL) has not been
systematically studied in the biomedical domain. Motivated by this, this paper
explores the impact of instructional prompts for biomedical MTL. We introduce
the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X)
various categories. Using this meta-dataset, we propose a unified model termed
In-BoXBART, that can jointly learn all tasks of the BoX without any
task-specific modules. To the best of our knowledge, this is the first attempt
to propose a unified model in the biomedical domain and use instructions to
achieve generalization across several biomedical tasks. Experimental results
indicate that the proposed model: 1) outperforms the single-task baseline by
~3% and the multi-task (without instruction) baseline by ~18% on average, and 2)
shows a ~23% improvement compared to the single-task baseline in few-shot
learning (i.e., 32 instances per task) on average. Our analysis indicates
that there is significant room for improvement across tasks in the BoX,
implying the scope for future research direction.
"
OPT: Open Pre-trained Transformer Language Models,Susan Zhang,http://arxiv.org/pdf/2205.01068v4.pdf,2022-05-02,"['cs.cl', 'cs.lg']",2205.01068v4.pdf,"  Large language models, which are often trained for hundreds of thousands of
compute days, have shown remarkable capabilities for zero- and few-shot
learning. Given their computational cost, these models are difficult to
replicate without significant capital. For the few that are available through
APIs, no access is granted to the full model weights, making them difficult to
study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only
pre-trained transformers ranging from 125M to 175B parameters, which we aim to
fully and responsibly share with interested researchers. We show that OPT-175B
is comparable to GPT-3, while requiring only 1/7th the carbon footprint to
develop. We are also releasing our logbook detailing the infrastructure
challenges we faced, along with code for experimenting with all of the released
models.
"
Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt  Tuning,Xiang Chen,http://arxiv.org/pdf/2205.02355v2.pdf,2022-05-04,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2205.02355v2.pdf,"  Pre-trained language models have contributed significantly to relation
extraction by demonstrating remarkable few-shot learning abilities. However,
prompt tuning methods for relation extraction may still fail to generalize to
those rare or hard patterns. Note that the previous parametric learning
paradigm can be viewed as memorization, treating the training data as a book and
inference as a closed-book test. Those long-tailed or hard patterns can hardly
be memorized in parameters given few-shot instances. To this end, we regard RE
as an open-book examination and propose a new semiparametric paradigm of
retrieval-enhanced prompt tuning for relation extraction. We construct an
open-book datastore for retrieval regarding prompt-based instance
representations and corresponding relation labels as memorized key-value pairs.
During inference, the model can infer relations by linearly interpolating the
base output of PLM with the non-parametric nearest neighbor distribution over
the datastore. In this way, our model not only infers relation through
knowledge stored in the weights during training but also assists
decision-making by unwinding and querying examples in the open-book datastore.
Extensive experiments on benchmark datasets show that our method can achieve
state-of-the-art in both standard supervised and few-shot settings. Code is
available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.
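The inference-time interpolation described above can be pictured with the following sketch. The kNN weighting (a softmax over negative L2 distance) and the fixed interpolation weight are illustrative assumptions; the stand-in PLM distribution and random datastore exist only so the example runs.

```python
import numpy as np

def knn_relation_probs(query, keys, labels, n_rel, k=8, temp=1.0):
    """Distribution over relations from the k nearest datastore entries,
    weighted by a softmax of negative L2 distance to the query."""
    d = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] / temp)
    w /= w.sum()
    probs = np.zeros(n_rel)
    for i, wi in zip(idx, w):
        probs[labels[i]] += wi
    return probs

def interpolate(plm_probs, knn_probs, lam=0.5):
    """Final prediction: lam * kNN distribution + (1 - lam) * PLM distribution."""
    return lam * knn_probs + (1 - lam) * plm_probs

rng = np.random.default_rng(0)
keys, labels = rng.normal(size=(100, 16)), rng.integers(0, 4, size=100)
query = rng.normal(size=16)
plm_probs = np.array([0.1, 0.6, 0.2, 0.1])          # stand-in PLM output
final = interpolate(plm_probs, knn_relation_probs(query, keys, labels, n_rel=4))
print("predicted relation:", int(final.argmax()))
```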
"
Towards Unified Prompt Tuning for Few-shot Text Classification,Jianing Wang,http://arxiv.org/pdf/2205.05313v1.pdf,2022-05-11,"['cs.cl', 'cs.ai']",2205.05313v1.pdf,"  Prompt-based fine-tuning has boosted the performance of Pre-trained Language
Models (PLMs) on few-shot text classification by employing task-specific
prompts. Yet, PLMs are unfamiliar with prompt-style expressions during
pre-training, which limits the few-shot learning performance on downstream
tasks. It would be desirable if the models can acquire some prompting knowledge
before adaptation to specific NLP tasks. We present the Unified Prompt Tuning
(UPT) framework, leading to better few-shot text classification for BERT-style
models by explicitly capturing prompting semantics from non-target NLP
datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for
joint prompt learning across different NLP tasks, forcing PLMs to capture
task-invariant prompting knowledge. We further design a self-supervised task
named Knowledge-enhanced Selective Masked Language Modeling to improve the
PLM's generalization abilities for accurate adaptation to previously unseen
tasks. After multi-task learning across multiple tasks, the PLM can be better
prompt-tuned towards any dissimilar target tasks in low-resourced settings.
Experiments over a variety of NLP tasks show that UPT consistently outperforms
state-of-the-art approaches for prompt-based fine-tuning.
"
Towards Answering Open-ended Ethical Quandary Questions,Yejin Bang,http://arxiv.org/pdf/2205.05989v3.pdf,2022-05-12,"['cs.cl', 'cs.ai', 'cs.lg']",2205.05989v3.pdf,"  Considerable advancements have been made in various NLP tasks based on the
impressive power of large language models (LLMs) and many NLP applications are
deployed in our daily lives. In this work, we challenge the capability of LLMs
with the new task of Ethical Quandary Generative Question Answering. Ethical
quandary questions are more challenging to address because multiple conflicting
answers may exist to a single quandary. We explore the current capability of
LLMs in providing an answer with a deliberative exchange of different
perspectives to an ethical quandary, in the approach of Socratic philosophy,
instead of providing a closed answer like an oracle. We propose a model that
searches for different ethical principles applicable to the ethical quandary
and generates an answer conditioned on the chosen principles through
prompt-based few-shot learning. We also discuss the remaining challenges and
ethical issues involved in this task and suggest the direction toward
developing responsible NLP systems by incorporating human values explicitly.
"
PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot  Learners,Canyu Chen,http://arxiv.org/pdf/2205.09229v3.pdf,2022-05-18,"['cs.cl', 'cs.ai']",2205.09229v3.pdf,"  Recent advances in large pre-trained language models (PLMs) lead to
impressive gains in natural language understanding (NLU) tasks with
task-specific fine-tuning. However, directly fine-tuning PLMs heavily relies on
sufficient labeled training instances, which are usually hard to obtain.
Prompt-based tuning on PLMs has shown to be powerful for various downstream
few-shot tasks. Existing works studying prompt-based tuning for few-shot NLU
tasks mainly focus on deriving proper label words with a verbalizer or
generating prompt templates to elicit semantics from PLMs. In addition,
conventional data augmentation strategies such as synonym substitution, though
widely adopted in low-resource scenarios, only bring marginal improvements for
prompt-based few-shot learning. Thus, an important research question arises:
how to design effective data augmentation methods for prompt-based few-shot
tuning? To this end, considering the label semantics are essential in
prompt-based tuning, we propose a novel label-guided data augmentation
framework PromptDA, which exploits the enriched label semantic information for
data augmentation. Extensive experiment results on few-shot text classification
tasks demonstrate the superior performance of the proposed framework by
effectively leveraging label semantics and data augmentation for natural
language understanding. Our code is available at
https://github.com/canyuchen/PromptDA.
"
What Makes Data-to-Text Generation Hard for Pretrained Language Models?,Moniba Keymanesh,http://arxiv.org/pdf/2205.11505v1.pdf,2022-05-23,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2205.11505v1.pdf,"  Expressing natural language descriptions of structured facts or relations --
data-to-text generation (D2T) -- increases the accessibility of structured
knowledge repositories. Previous work shows that pre-trained language
models(PLMs) perform remarkably well on this task after fine-tuning on a
significant amount of task-specific training data. On the other hand, while
auto-regressive PLMs can generalize from a few task examples, their efficacy at
D2T is largely unexplored. Furthermore, we have an incomplete understanding of
the limits of PLMs on D2T.
  In this work, we conduct an empirical study of both fine-tuned and
auto-regressive PLMs on the DART multi-domain D2T dataset. We consider their
performance as a function of the amount of task-specific data and how these
data are incorporated into the models: zero and few-shot learning, and
fine-tuning of model weights. In addition, we probe the limits of PLMs by
measuring performance on subsets of the evaluation data: novel predicates and
abstractive test examples. To improve the performance on these subsets, we
investigate two techniques: providing predicate descriptions in the context and
re-ranking generated candidates by information reflected in the source.
Finally, we conduct a human evaluation of model errors and show that D2T
generation tasks would benefit from datasets with more careful manual curation.
"
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures  of Soft Prompts,Akari Asai,http://arxiv.org/pdf/2205.11961v2.pdf,2022-05-24,['cs.cl'],2205.11961v2.pdf,"  This work introduces a new multi-task, parameter-efficient language model
(LM) tuning method that learns to transfer knowledge across different tasks via
a mixture of soft prompts, i.e., small prefix embedding vectors pre-trained for
different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt
Tuning), obtains source prompts as encodings of large-scale source tasks into a
small number of parameters and trains an attention module to interpolate the
source prompts and a newly initialized target prompt for every instance in the
target task. During training, only the target task prompt and the attention
weights, which are shared between tasks in multi-task training, are updated,
while the original LM and source prompts are intact. ATTEMPT is highly
parameter-efficient (e.g., updates 2,300 times fewer parameters than full
fine-tuning) while achieving high task performance using knowledge from
high-resource tasks. Moreover, it is modular using pre-trained soft prompts,
and can flexibly add or remove source prompts for effective knowledge transfer.
Our experimental results across 21 diverse NLP datasets show that ATTEMPT
significantly outperforms prompt tuning and outperforms or matches fully
fine-tuned or other parameter-efficient tuning approaches that use over ten
times more parameters. Finally, ATTEMPT outperforms previous work in few-shot
learning settings.
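A minimal sketch of the attentional prompt mixture described above is given below: an attention module scores frozen source prompts and a newly initialised target prompt against an instance representation and returns their weighted combination, with only the target prompt and the attention parameters trainable. The class name, mean-pooled keys, and linear scoring are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

class PromptMixer(torch.nn.Module):
    def __init__(self, source_prompts, prompt_len=10, dim=64):
        super().__init__()
        # Frozen source prompts: (num_sources, prompt_len, dim)
        self.register_buffer("sources", source_prompts)
        self.target = torch.nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.attn_proj = torch.nn.Linear(dim, dim)   # trained attention module

    def forward(self, instance_repr):
        """instance_repr: (dim,) pooled representation of the input instance.
        Returns an instance-specific prompt of shape (prompt_len, dim)."""
        candidates = torch.cat([self.sources, self.target.unsqueeze(0)], dim=0)
        keys = candidates.mean(dim=1)                        # (S+1, dim)
        scores = keys @ self.attn_proj(instance_repr)        # (S+1,)
        weights = F.softmax(scores, dim=0)
        return (weights[:, None, None] * candidates).sum(dim=0)

mixer = PromptMixer(torch.randn(3, 10, 64))
prompt = mixer(torch.randn(64))
print(prompt.shape)  # torch.Size([10, 64])
```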
"
Making Large Language Models Better Reasoners with Step-Aware Verifier,Yifei Li,http://arxiv.org/pdf/2206.02336v3.pdf,2022-06-06,"['cs.cl', 'cs.ai']",2206.02336v3.pdf,"  Few-shot learning is a challenging task that requires language models to
generalize from limited examples. Large language models like GPT-3 and PaLM
have made impressive progress in this area, but they still face difficulties in
reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve
their reasoning skills, previous work has proposed to guide the language model
with prompts that elicit a series of reasoning steps before giving the final
answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in
problem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on
Reasoning Step), a novel approach that further enhances the reasoning
capability of language models. DIVERSE has three main components: first, it
generates diverse prompts to explore different reasoning paths for the same
question; second, it uses a verifier to filter out incorrect answers based on a
weighted voting scheme; and third, it verifies each reasoning step individually
instead of the whole chain. We evaluate DIVERSE on the latest language model
code-davinci-002 and show that it achieves new state-of-the-art results on six
of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).
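The voting stage described above can be pictured with the toy sketch below: several reasoning paths are sampled from diverse prompts, each is scored by a verifier, and the answer with the largest total verifier weight is returned. The verifier itself and the step-level checking are abstracted away, and the scores are placeholders.

```python
from collections import defaultdict

def weighted_vote(candidates):
    """candidates: list of (final_answer, verifier_score) pairs obtained from
    diverse prompts / sampled reasoning paths. Returns the answer whose
    summed verifier score is largest."""
    totals = defaultdict(float)
    for answer, score in candidates:
        totals[answer] += score
    return max(totals, key=totals.get)

# Five sampled reasoning chains for one GSM8K-style question (toy scores).
samples = [("42", 0.91), ("42", 0.85), ("36", 0.40), ("42", 0.77), ("36", 0.55)]
print(weighted_vote(samples))  # -> "42"
```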
"
Language Models are General-Purpose Interfaces,Yaru Hao,http://arxiv.org/pdf/2206.06336v1.pdf,2022-06-13,['cs.cl'],2206.06336v1.pdf,"  Foundation models have received much attention due to their effectiveness
across a broad range of downstream applications. Though architectures have
largely converged, most pretrained models are still typically developed for
specific tasks or modalities. In this work, we propose to
use language models as a general-purpose interface to various foundation
models. A collection of pretrained encoders perceive diverse modalities (such
as vision, and language), and they dock with a language model that plays the
role of a universal task layer. We propose a semi-causal language modeling
objective to jointly pretrain the interface and the modular encoders. We
subsume the advantages and capabilities from both causal and non-causal
modeling, thereby combining the best of two worlds. Specifically, the proposed
method not only inherits the capabilities of in-context learning and open-ended
generation from causal language modeling, but also is conducive to finetuning
because of the bidirectional encoders. More importantly, our approach
seamlessly unlocks the combinations of the above capabilities, e.g., enabling
in-context learning or instruction following with finetuned encoders.
Experimental results across various language-only and vision-language
benchmarks show that our model outperforms or is competitive with specialized
models on finetuning, zero-shot generalization, and few-shot learning.
"
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and  Federated Image Classification,Aliaksandra Shysheya,http://arxiv.org/pdf/2206.08671v2.pdf,2022-06-17,"['stat.ml', 'cs.cv', 'cs.lg']",2206.08671v2.pdf,"  Modern deep learning systems are increasingly deployed in situations such as
personalization and federated learning where it is necessary to support i)
learning on small amounts of data, and ii) communication efficient distributed
training protocols. In this work, we develop FiLM Transfer (FiT) which fulfills
these requirements in the image classification setting by combining ideas from
transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter
layers) and meta-learning (automatically configured Naive Bayes classifiers and
episodic training) to yield parameter efficient models with superior
classification accuracy at low-shot. The resulting parameter efficiency is key
for enabling few-shot learning, inexpensive model updates for personalization,
and communication efficient federated learning. We experiment with FiT on a
wide range of downstream datasets and show that it achieves better
classification accuracy than the leading Big Transfer (BiT) algorithm at
low-shot and achieves state-of-the art accuracy on the challenging VTAB-1k
benchmark, with fewer than 1% of the updateable parameters. Finally, we
demonstrate the parameter efficiency and superior accuracy of FiT in
distributed low-shot applications including model personalization and federated
learning where model update size is an important performance metric.
"
A Reinforcement Learning-based Offensive semantics Censorship System for  Chatbots,Shaokang Cai,http://arxiv.org/pdf/2207.10569v1.pdf,2022-07-13,['cs.cl'],2207.10569v1.pdf,"  The rapid development of artificial intelligence (AI) technology has enabled
large-scale AI applications to land in the market and practice. However, while
AI technology has brought many conveniences to people in the productization
process, it has also exposed many security issues. Especially, attacks against
online learning vulnerabilities of chatbots occur frequently. Therefore, this
paper proposes a semantics censorship chatbot system based on reinforcement
learning, which is mainly composed of two parts: the Offensive semantics
censorship model and the semantics purification model. The offensive semantics
censorship model combines the context of user input sentences to detect rapidly
evolving offensive semantics and to flag offensive responses. The semantics
purification model addresses the case in which the chatbot model has already
been contaminated by large amounts of offensive semantics, by correcting the
offensive replies learned by the learning algorithm rather than rolling back to
earlier versions. In addition, by integrating a once-through learning approach,
the speed of semantics purification is accelerated while reducing the
impact on the quality of replies. The experimental results show that our
proposed approach reduces the probability of the chat model generating
offensive replies and that the integration of the few-shot learning algorithm
improves the training speed rapidly while effectively slowing down the decline
in BLEU values.
"
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq  Model,Saleh Soltan,http://arxiv.org/pdf/2208.01448v2.pdf,2022-08-02,"['cs.cl', 'cs.lg']",2208.01448v2.pdf,"  In this work, we demonstrate that multilingual large-scale
sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising
and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners
than decoder-only models on various tasks. In particular, we train a 20 billion
parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B)
and show that it achieves state-of-the-art (SOTA) performance on 1-shot
summarization tasks, outperforming a much larger 540B PaLM decoder model.
AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for
low-resource languages, across almost all language pairs supported by the model
(Arabic, English, French, German, Hindi, Italian, Japanese, Marathi,
Portuguese, Spanish, Tamil, and Telugu) on the Flores-101 dataset. We also show
that, in the zero-shot setting, AlexaTM 20B outperforms GPT-3 (175B) on SuperGLUE and SQuADv2
datasets and provides SOTA performance on multilingual tasks such as XNLI,
XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case
for seq2seq models as a powerful alternative to decoder-only models for
Large-scale Language Model (LLM) training.
"
Unsupervisedly Prompting AlphaFold2 for Few-Shot Learning of Accurate  Folding Landscape and Protein Structure Prediction,Jun Zhang,http://arxiv.org/pdf/2208.09652v2.pdf,2022-08-20,"['cs.lg', 'cs.ai', 'physics.bio-ph']",2208.09652v2.pdf,"  Data-driven predictive methods which can efficiently and accurately transform
protein sequences into biologically active structures are highly valuable for
scientific research and medical development. Determining accurate folding
landscape using co-evolutionary information is fundamental to the success of
modern protein structure prediction methods. As the state of the art,
AlphaFold2 has dramatically raised the accuracy without performing explicit
co-evolutionary analysis. Nevertheless, its performance still shows strong
dependence on available sequence homologs. Based on an investigation of the
cause of such dependence, we present EvoGen, a meta generative model, to
remedy the underperformance of AlphaFold2 for poor-MSA targets. By prompting
the model with calibrated or virtually generated homologue sequences, EvoGen
helps AlphaFold2 fold accurately in the low-data regime and even achieve
encouraging performance with single-sequence predictions. Being able to make
accurate predictions with few-shot MSA not only generalizes AlphaFold2 better
for orphan sequences, but also democratizes its use for high-throughput
applications. Besides, EvoGen combined with AlphaFold2 yields a probabilistic
structure generation method which could explore alternative conformations of
protein sequences, and the task-aware differentiable algorithm for sequence
generation will benefit other related tasks including protein design.
"
Disentangle and Remerge: Interventional Knowledge Distillation for  Few-Shot Object Detection from A Conditional Causal Perspective,Jiangmeng Li,http://arxiv.org/pdf/2208.12681v2.pdf,2022-08-26,['cs.cv'],2208.12681v2.pdf,"  Few-shot learning models learn representations with limited human
annotations, and such a learning paradigm demonstrates practicability in
various tasks, e.g., image classification, object detection, etc. However,
few-shot object detection methods suffer from an intrinsic defect: the
limited training data prevents the model from sufficiently exploring semantic
information. To tackle this, we introduce knowledge distillation to the
few-shot object detection learning paradigm. We further run a motivating
experiment, which demonstrates that in the process of knowledge distillation,
the empirical error of the teacher model degenerates the prediction performance
of the few-shot object detection model as the student. To understand the
reasons behind this phenomenon, we revisit the learning paradigm of knowledge
distillation on the few-shot object detection task from the causal theoretic
standpoint, and accordingly, develop a Structural Causal Model. Following the
theoretical guidance, we propose a backdoor adjustment-based knowledge
distillation method for the few-shot object detection task, namely Disentangle
and Remerge (D&R), to perform conditional causal intervention toward the
corresponding Structural Causal Model. Empirically, the experiments on
benchmarks demonstrate that D&R can yield significant performance boosts in
few-shot object detection. Code is available at
https://github.com/ZYN-1101/DandR.git.
"
NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results,Dustin CarriĂłn-Ojeda,http://arxiv.org/pdf/2208.14686v1.pdf,2022-08-31,"['cs.lg', 'cs.ai', 'cs.cv', 'cs.ne']",2208.14686v1.pdf,"  We present the design and baseline results for a new challenge in the
ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on
""cross-domain"" meta-learning. Meta-learning aims to leverage experience gained
from previous tasks to solve new tasks efficiently (i.e., with better
performance, little training data, and/or modest computational resources).
While previous challenges in the series focused on within-domain few-shot
learning problems, with the aim of learning efficiently N-way k-shot tasks
(i.e., N class classification problems with k training examples), this
competition challenges the participants to solve ""any-way"" and ""any-shot""
problems drawn from various domains (healthcare, ecology, biology,
manufacturing, and others), chosen for their humanitarian and societal impact.
To that end, we created Meta-Album, a meta-dataset of 40 image classification
datasets from 10 domains, from which we carve out tasks with any number of
""ways"" (within the range 2-20) and any number of ""shots"" (within the range
1-20). The competition is with code submission, fully blind-tested on the
CodaLab challenge platform. The code of the winners will be open-sourced,
enabling the deployment of automated machine learning solutions for few-shot
image classification across several domains.
"
Automatic Label Sequence Generation for Prompting Sequence-to-sequence  Models,Zichun Yu,http://arxiv.org/pdf/2209.09401v1.pdf,2022-09-20,"['cs.cl', 'cs.lg']",2209.09401v1.pdf,"  Prompting, which casts downstream applications as language modeling tasks,
has shown to be sample efficient compared to standard fine-tuning with
pre-trained models. However, one pitfall of prompting is the need of
manually-designed patterns, whose outcome can be unintuitive and requires large
validation sets to tune. To tackle the challenge, we propose AutoSeq, a fully
automatic prompting method: (1) We adopt natural language prompts on
sequence-to-sequence models, enabling free-form generation and larger label
search space; (2) We propose label sequences -- phrases with indefinite lengths
to verbalize the labels -- which eliminate the need of manual templates and are
more expressive than single label words; (3) We use beam search to
automatically generate a large amount of label sequence candidates and propose
contrastive re-ranking to get the best combinations. AutoSeq significantly
outperforms other no-manual-design methods, such as soft prompt tuning, adapter
tuning, and automatic search on single label words; the generated label
sequences are even better than curated manual ones on a variety of tasks. Our
method reveals the potential of sequence-to-sequence models in few-shot
learning and sheds light on a path to generic and automatic prompting. The
source code of this paper can be obtained from
https://github.com/thunlp/Seq2Seq-Prompt.
"
Collaboration of Pre-trained Models Makes Better Few-shot Learner,Renrui Zhang,http://arxiv.org/pdf/2209.12255v2.pdf,2022-09-25,['cs.cv'],2209.12255v2.pdf,"  Few-shot classification requires deep neural networks to learn generalized
representations only from limited training images, which is challenging but
significant in low-data regimes. Recently, CLIP-based methods have shown
promising few-shot performance, benefiting from contrastive language-image
pre-training. Building on this observation, we ask whether large-scale pre-training
can alleviate the few-shot data deficiency and also assist representation
learning through pre-learned knowledge. In this paper, we propose CoMo, a
Collaboration of pre-trained Models that incorporates diverse prior knowledge
from various pre-training paradigms for better few-shot learning. Our CoMo
includes: CLIP's language-contrastive knowledge, DINO's vision-contrastive
knowledge, and DALL-E's language-generative knowledge. Specifically, CoMo works
in two aspects: few-shot data expansion and diverse knowledge ensemble. For
one, we generate synthetic images via zero-shot DALL-E to enrich the few-shot
training data without any manpower. For the other, we introduce a learnable
Multi-Knowledge Adapter (MK-Adapter) to adaptively blend the predictions from
CLIP and DINO. By such collaboration, CoMo can fully unleash the potential of
different pre-training methods and unify them to perform state-of-the-art for
few-shot classification. We conduct extensive experiments on 11 datasets to
demonstrate the superiority and generalization ability of our approach.
"
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth  Pre-training,Tianyu Huang,http://arxiv.org/pdf/2210.01055v3.pdf,2022-10-03,['cs.cv'],2210.01055v3.pdf,"  Pre-training across 3D vision and language remains under development because
of limited training data. Recent works attempt to transfer vision-language
pre-training models to 3D vision. PointCLIP converts point cloud data to
multi-view depth maps, adopting CLIP for shape classification. However, its
performance is restricted by the domain gap between rendered depth maps and
images, as well as the diversity of depth distributions. To address this issue,
we propose CLIP2Point, an image-depth pre-training method by contrastive
learning to transfer CLIP to the 3D domain, and adapt it to point cloud
classification. We introduce a new depth rendering setting that forms a better
visual effect, and then render 52,460 pairs of images and depth maps from
ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines
cross-modality learning, which encourages the depth features to capture expressive
visual and textual features, with intra-modality learning, which enhances the
invariance of depth aggregation. Additionally, we propose a novel Dual-Path
Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for
few-shot learning. The dual-path structure allows the joint use of CLIP and
CLIP2Point, and the simplified adapter can well fit few-shot tasks without
post-search. Experimental results show that CLIP2Point is effective in
transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP
and other self-supervised 3D networks, achieving state-of-the-art results on
zero-shot and few-shot classification.
"
Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis,Siddharth Varia,http://arxiv.org/pdf/2210.06629v2.pdf,2022-10-12,['cs.cl'],2210.06629v2.pdf,"  Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis
task which involves four elements from user-generated texts: aspect term,
aspect category, opinion term, and sentiment polarity. Most computational
approaches focus on some of the ABSA sub-tasks such as tuple (aspect term,
sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity)
extraction using either pipeline or joint modeling approaches. Recently,
generative approaches have been proposed to extract all four elements as (one
or more) quadruplets from text as a single task. In this work, we take a step
further and propose a unified framework for solving ABSA, and the associated
sub-tasks to improve the performance in few-shot scenarios. To this end, we
fine-tune a T5 model with instructional prompts in a multi-task learning
fashion covering all the sub-tasks, as well as the entire quadruple prediction
task. In experiments with multiple benchmark datasets, we show that the
proposed multi-task prompting approach brings performance boost (by absolute
8.29 F1) in the few-shot learning setting.
"
"RARR: Researching and Revising What Language Models Say, Using Language  Models",Luyu Gao,http://arxiv.org/pdf/2210.08726v3.pdf,2022-10-17,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2210.08726v3.pdf,"  Language models (LMs) now excel at many tasks such as few-shot learning,
question answering, reasoning, and dialog. However, they sometimes generate
unsupported or misleading content. A user cannot easily determine whether their
outputs are trustworthy or not, because most LMs do not have any built-in
mechanism for attribution to external evidence. To enable attribution while
still preserving all the powerful advantages of recent generation models, we
propose RARR (Retrofit Attribution using Research and Revision), a system that
1) automatically finds attribution for the output of any text generation model
and 2) post-edits the output to fix unsupported content while preserving the
original output as much as possible. When applied to the output of several
state-of-the-art LMs on a diverse set of generation tasks, we find that RARR
significantly improves attribution while otherwise preserving the original
input to a much greater degree than previously explored edit models.
Furthermore, the implementation of RARR requires only a handful of training
examples, a large language model, and standard web search.
"
TAPE: Assessing Few-shot Russian Language Understanding,Ekaterina Taktasheva,http://arxiv.org/pdf/2210.12813v1.pdf,2022-10-23,['cs.cl'],2210.12813v1.pdf,"  Recent advances in zero-shot and few-shot learning have shown promise for a
scope of research and practical purposes. However, this fast-growing area lacks
standardized evaluation suites for non-English languages, hindering progress
outside the Anglo-centric paradigm. To address this line of research, we
propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that
includes six more complex NLU tasks for Russian, covering multi-hop reasoning,
ethical concepts, logic and commonsense knowledge. The TAPE's design focuses on
systematic zero-shot and few-shot NLU evaluation: (i) linguistic-oriented
adversarial attacks and perturbations for analyzing robustness, and (ii)
subpopulations for nuanced interpretation. The detailed analysis of testing the
autoregressive baselines indicates that simple spelling-based perturbations
affect the performance the most, while paraphrasing the input has a more
negligible effect. At the same time, the results demonstrate a significant gap
between the neural and human baselines for most tasks. We publicly release TAPE
(tape-benchmark.com) to foster research on robust LMs that can generalize to
new tasks when little to no supervision is available.
"
Learning New Tasks from a Few Examples with Soft-Label Prototypes,Avyav Kumar Singh,http://arxiv.org/pdf/2210.17437v2.pdf,2022-10-31,"['cs.lg', 'cs.cl']",2210.17437v2.pdf,"  It has been experimentally demonstrated that humans are able to learn in a
manner that allows them to make predictions on categories for which they have
not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020)
have recently presented a machine learning approach that aims to do the same.
They utilise synthetically generated data and demonstrate that it is possible
to achieve sub-linear scaling and develop models that can learn to recognise N
classes from M training samples where M is less than N - aka less-than-one shot
learning. Their method was, however, defined for univariate or simple
multivariate data (Sucholutsky et al., 2021). We extend it to work on large,
high-dimensional and real-world datasets and empirically validate it in this
new and challenging setting. We apply this method to learn previously unseen
NLP tasks from very few examples (4, 8 or 16). We first generate compact,
sophisticated less-than-one shot representations called soft-label prototypes
which are fitted on training data, capturing the distribution of different
classes across the input domain space. We then use a modified k-Nearest
Neighbours classifier to demonstrate that soft-label prototypes can classify
data competitively, even outperforming much more computationally complex
few-shot learning methods.
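A rough sketch of classification with soft-label prototypes and a modified kNN rule follows: each prototype carries a distribution over classes rather than a hard label, and a query is assigned the class with the largest distance-weighted sum of the soft labels of its nearest prototypes. The inverse-distance weighting and the random prototypes are illustrative assumptions; fitting the prototypes to training data is outside this sketch.

```python
import numpy as np

def classify_with_soft_prototypes(query, prototypes, soft_labels, k=3):
    """prototypes: (P, d) locations, soft_labels: (P, C) class distributions.
    Assigns the class with the largest distance-weighted sum of the soft
    labels of the k nearest prototypes."""
    d = np.linalg.norm(prototypes - query, axis=1)
    idx = np.argsort(d)[:k]
    weights = 1.0 / (d[idx] + 1e-8)
    scores = (weights[:, None] * soft_labels[idx]).sum(axis=0)
    return int(scores.argmax())

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 16))
soft_labels = rng.dirichlet(np.ones(3), size=5)   # each row sums to 1 over 3 classes
print(classify_with_soft_prototypes(rng.normal(size=16), prototypes, soft_labels))
```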
"
QAmeleon: Multilingual QA with Only 5 Examples,Priyanka Agrawal,http://arxiv.org/pdf/2211.08264v2.pdf,2022-11-15,['cs.cl'],2211.08264v2.pdf,"  The availability of large, high-quality datasets has been one of the main
drivers of recent progress in question answering (QA). Such annotated datasets
however are difficult and costly to collect, and rarely exist in languages
other than English, rendering QA technology inaccessible to underrepresented
languages. An alternative to building large monolingual training datasets is to
leverage pre-trained language models (PLMs) under a few-shot learning setting.
Our approach, QAmeleon, uses a PLM to automatically generate multilingual data
upon which QA models are trained, thus avoiding costly annotation. Prompt
tuning the PLM for data synthesis with only five examples per language delivers
accuracy superior to translation-based baselines, bridges nearly 60% of the gap
between an English-only baseline and a fully supervised upper bound trained on
almost 50,000 hand labeled examples, and always leads to substantial
improvements compared to fine-tuning a QA model directly on labeled examples in
low resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show
that few-shot prompt tuning for data synthesis scales across languages and is a
viable alternative to large-scale annotation.
"
Explicit Knowledge Transfer for Weakly-Supervised Code Generation,Zhangir Azerbayev,http://arxiv.org/pdf/2211.16740v3.pdf,2022-11-30,['cs.cl'],2211.16740v3.pdf,"  Large language models (LLMs) can acquire strong code-generation capabilities
through few-shot learning. In contrast, supervised fine-tuning is still needed
for smaller models to achieve good performance. Such fine-tuning demands a
large number of task-specific NL-code pairs, which are expensive to obtain. In
this paper, we attempt to transfer the code generation ability of an LLM to a
smaller model with the aid of weakly-supervised data. More specifically, we
propose explicit knowledge transfer (EKT), which uses the few-shot capabilities
of a teacher LLM to create NL-code pairs that we then filter for correctness
and fine-tune the student on. We evaluate EKT on the task of generating code
solutions to math word problems from the GSM8k dataset. We find that EKT not
only yields better performance than training with expert iteration, but also
outperforms knowledge distillation, another form of knowledge transfer. A
GPT-Neo 1.3B model trained using EKT with a GPT-J teacher achieves a 12.4%
pass@100 on GSM8k, while the same student and teacher trained with knowledge
distillation yield only a 3.7% pass@100. We also show that it is possible for a
student model to outperform the teacher using EKT.
"
Can In-context Learners Learn a Reasoning Concept from Demonstrations?,Michal Štefánik,http://arxiv.org/pdf/2212.01692v4.pdf,2022-12-03,"['cs.cl', 'cs.ai', 'cs.lg']",2212.01692v4.pdf,"  Language models exhibit an emergent ability to learn a new task from a small
number of input-output demonstrations. However, recent work shows that
in-context learners largely rely on their pre-trained knowledge, such as the
sentiment of the labels, instead of learning new associations from the input.
We argue that the commonly-used few-shot evaluation using a random selection of
in-context demonstrations cannot disentangle models' reliance on such biases,
as most of the randomly-selected demonstrations do not present relations
informative for prediction beyond exposing the task's input-output
distribution.
  Therefore, to evaluate models' in-context learning ability independent of
models' memory, we introduce a Concept-sharing few-shot learning method
choosing the demonstrations that share an underlying concept with the predicted
sample. We extract a set of such concepts from available human explanations and
measure how much models can benefit from presenting these concepts in few-shot
demonstrations.
  We find that most of the recent in-context learners cannot consistently
benefit from the demonstrated concepts, irrespective of the model size.
However, we note that T0 models are more sensitive to exhibited concepts,
benefiting from concept-sharing demonstrations in 7 out of 8 evaluation
scenarios.
"
Frozen CLIP Model is An Efficient Point Cloud Backbone,Xiaoshui Huang,http://arxiv.org/pdf/2212.04098v2.pdf,2022-12-08,['cs.cv'],2212.04098v2.pdf,"  The pretraining-finetuning paradigm has demonstrated great success in NLP and
2D image fields because of the high-quality representation ability and
transferability of their pretrained models. However, pretraining such a strong
model is difficult in the 3D point cloud field since the training data is
limited and point cloud collection is expensive. This paper introduces
Efficient Point Cloud Learning (EPCL), an effective and efficient point cloud
learner for directly training high-quality point cloud models with a frozen
CLIP model. Our EPCL connects the 2D and 3D modalities by semantically aligning
the 2D features and point cloud features without paired 2D-3D data.
Specifically, the input point cloud is divided into a sequence of tokens and
directly fed into the frozen CLIP model to learn point cloud representation.
Furthermore, we design a task token to narrow the gap between 2D images and 3D
point clouds. Comprehensive experiments on 3D detection, semantic segmentation,
classification and few-shot learning demonstrate that the 2D CLIP model can be
an efficient point cloud backbone and our method achieves state-of-the-art
accuracy on both real-world and synthetic downstream tasks. Code will be
available.
"
Federated Few-Shot Learning for Mobile NLP,Dongqi Cai,http://arxiv.org/pdf/2212.05974v2.pdf,2022-12-12,"['cs.lg', 'cs.cl']",2212.05974v2.pdf,"  Natural language processing (NLP) sees rich mobile applications. To support
various language understanding tasks, a foundation NLP model is often
fine-tuned in a federated, privacy-preserving setting (FL). This process
currently relies on at least hundreds of thousands of labeled training samples
from mobile clients; yet mobile users often lack willingness or knowledge to
label their data. Such an inadequacy of data labels is known as a few-shot
scenario; it becomes the key blocker for mobile NLP applications.
  For the first time, this work investigates federated NLP in the few-shot
scenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling and
prompt learning, we first establish a training pipeline that delivers
competitive accuracy when only 0.05% (fewer than 100) of the training data is
labeled and the rest is unlabeled. To instantiate the workflow, we further
present a system FeS, addressing the high execution cost with novel designs.
(1) Curriculum pacing, which injects pseudo labels to the training workflow at
a rate commensurate to the learning progress; (2) Representational diversity, a
mechanism for selecting the most learnable data, only for which pseudo labels
will be generated; (3) Co-planning of a model's training depth and layer
capacity. Together, these designs reduce the training delay, client energy, and
network traffic by up to 46.0$\times$, 41.2$\times$ and 3000.0$\times$,
respectively. Through algorithm/system co-design, FeS demonstrates that FL can
be applied to challenging settings where most training samples are unlabeled.
"
FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP  Tasks,Weilong Dong,http://arxiv.org/pdf/2212.08354v1.pdf,2022-12-16,['cs.cl'],2212.08354v1.pdf,"  Massively multi-task learning with large language models has recently made
substantial progress on few-shot generalization. However, this is usually
performed in a centralized learning fashion, ignoring the privacy sensitivity
issue of (annotated) data used in multiple tasks. To mitigate this issue, we
propose FewFedWeight, a few-shot federated learning framework across multiple
tasks, to achieve the best of both worlds: privacy preservation and cross-task
generalization. FewFedWeight trains client models in isolated devices without
sharing data. It broadcasts the global model in the server to each client and
produces pseudo data for clients so that knowledge from the global model can be
explored to enhance few-shot learning of each client model. An energy-based
algorithm is further proposed to weight pseudo samples in order to reduce the
negative impact of noise from the generated pseudo data. Adaptive model weights
of client models are also tuned according to their performance. We use these
model weights to dynamically aggregate client models to update the global
model. Experiments on 118 NLP tasks show that FewFedWeight can significantly
improve the performance of client models on 61% tasks with an average
performance improvement rate of 30.5% over the baseline and substantially
outperform FedAvg and other decentralized learning methods.
"
Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss  Policy for Transfer Learning,Chris Lengerich,http://arxiv.org/pdf/2212.11353v1.pdf,2022-12-21,"['cs.cl', 'cs.lg']",2212.11353v1.pdf,"  Traditional approaches to RL have focused on learning decision policies
directly from episodic decisions, while slowly and implicitly learning the
semantics of compositional representations needed for generalization. While
some approaches have been adopted to refine representations via auxiliary
self-supervised losses while simultaneously learning decision policies,
learning compositional representations from hand-designed and
context-independent self-supervised losses (multi-view) still adapts relatively
slowly to the real world, which contains many non-IID subspaces requiring rapid
distribution shift in both time and spatial attention patterns at varying
levels of abstraction. In contrast, supervised language model cascades have
shown the flexibility to adapt to many diverse manifolds, and hints of
self-learning needed for autonomous task transfer. However, to date, transfer
methods for language models like few-shot learning and fine-tuning still
require human supervision and transfer learning using self-learning methods has
been underexplored. We propose a self-supervised loss policy called contrastive
distillation which manifests latent variables with high mutual information with
both source and target tasks from weights to tokens. We show how this
outperforms common methods of transfer learning and suggests a useful design
axis of trading off compute for generalizability for online transfer.
Contrastive distillation is improved through sampling from memory and suggests
a simple algorithm for more efficiently sampling negative examples for
contrastive losses than random sampling.
"
Exploring Efficient Few-shot Adaptation for Vision Transformers,Chengming Xu,http://arxiv.org/pdf/2301.02419v1.pdf,2023-01-06,['cs.cv'],2301.02419v1.pdf,"  The task of Few-shot Learning (FSL) aims to do the inference on novel
categories containing only few labeled examples, with the help of knowledge
learned from base categories containing abundant labeled training samples.
While there are numerous works on the FSL task, Vision Transformers (ViTs) have
rarely been taken as the backbone for FSL, with the few existing trials focusing
on naive finetuning of the whole backbone or the classification layer. Although
ViTs have been shown to enjoy comparable or even better performance on other
vision tasks, it is still very nontrivial to efficiently finetune ViTs in
real-world FSL scenarios. To this end, we propose a novel efficient Transformer
Tuning (eTT) method that facilitates finetuning ViTs in the FSL tasks. The key
novelties come from the newly presented Attentive Prefix Tuning (APT) and
Domain Residual Adapter (DRA) for the task and backbone tuning, individually.
Specifically, in APT, the prefix is projected to new key and value pairs that
are attached to each self-attention layer to provide the model with
task-specific information. Moreover, we design the DRA in the form of learnable
offset vectors to handle the potential domain gaps between base and novel data.
To ensure the APT would not deviate from the initial task-specific information
much, we further propose a novel prototypical regularization, which maximizes
the similarity between the projected distribution of prefix and initial
prototypes, regularizing the update procedure. Our method receives outstanding
performance on the challenging Meta-Dataset. We conduct extensive experiments
to show the efficacy of our model.
"
Unleashing the Power of Shared Label Structures for Human Activity  Recognition,Xiyuan Zhang,http://arxiv.org/pdf/2301.03462v2.pdf,2023-01-01,"['cs.lg', 'cs.ai', 'eess.sp']",2301.03462v2.pdf,"  Current human activity recognition (HAR) techniques regard activity labels as
integer class IDs without explicitly modeling the semantics of class labels. We
observe that different activity names often have shared structures. For
example, ""open door"" and ""open fridge"" both have ""open"" as the action; ""kicking
soccer ball"" and ""playing tennis ball"" both have ""ball"" as the object. Such
shared structures in label names can be translated to the similarity in sensory
data and modeling common structures would help uncover knowledge across
different activities, especially for activities with limited samples. In this
paper, we propose SHARE, a HAR framework that takes into account shared
structures of label names for different activities. To exploit the shared
structures, SHARE comprises an encoder for extracting features from input
sensory time series and a decoder for generating label names as a token
sequence. We also propose three label augmentation techniques to help the model
more effectively capture semantic structures across activities, including a
basic token-level augmentation, and two enhanced embedding-level and
sequence-level augmentations utilizing the capabilities of pre-trained models.
SHARE outperforms state-of-the-art HAR models in extensive experiments on seven
HAR benchmark datasets. We also evaluate in few-shot learning and label
imbalance settings and observe an even more significant performance gap.
"
"See, Think, Confirm: Interactive Prompting Between Vision and Language  Models for Knowledge-based Visual Reasoning",Zhenfang Chen,http://arxiv.org/pdf/2301.05226v1.pdf,2023-01-12,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2301.05226v1.pdf,"  Large pre-trained vision and language models have demonstrated remarkable
capacities for various tasks. However, solving the knowledge-based visual
reasoning tasks remains challenging, which requires a model to comprehensively
understand image content, connect the external world knowledge, and perform
step-by-step reasoning to answer the questions correctly. To this end, we
propose a novel framework named Interactive Prompting Visual Reasoner (IPVR)
for few-shot knowledge-based visual reasoning. IPVR contains three stages, see,
think and confirm. The see stage scans the image and grounds the visual concept
candidates with a visual perception model. The think stage adopts a pre-trained
large language model (LLM) to attend to the key concepts from candidates
adaptively. It then transforms them into text context for prompting with a
visual captioning model and adopts the LLM to generate the answer. The confirm
stage further uses the LLM to generate the supporting rationale to the answer,
verify the generated rationale with a cross-modality classifier and ensure that
the rationale can infer the predicted output consistently. We conduct
experiments on a range of knowledge-based visual reasoning datasets. We found
our IPVR enjoys several benefits, 1). it achieves better performance than the
previous few-shot learning baselines; 2). it enjoys the total transparency and
trustworthiness of the whole reasoning process by providing rationales for each
reasoning step; 3). it is computation-efficient compared with other fine-tuning
baselines.
"
Large Language Models Are Latent Variable Models: Explaining and Finding  Good Demonstrations for In-Context Learning,Xinyi Wang,http://arxiv.org/pdf/2301.11916v3.pdf,2023-01-27,"['cs.cl', 'cs.ai', 'cs.lg']",2301.11916v3.pdf,"  In recent years, pre-trained large language models (LLMs) have demonstrated
remarkable efficiency in achieving an inference-time few-shot learning
capability known as in-context learning. However, existing literature has
highlighted the sensitivity of this capability to the selection of few-shot
demonstrations. Current understandings of the underlying mechanisms by which
this capability arises from regular language model pretraining objectives
remain disconnected from the real-world LLMs. This study aims to examine the
in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs
as latent variable models. On this premise, we propose an algorithm to select
optimal demonstrations from a set of annotated data with a small LM, and then
directly generalize the selected demonstrations to larger LMs. We demonstrate
significant improvement over baselines, averaged over eight GPT models on eight
real-world text classification datasets. We also demonstrate the real-world
usefulness of our algorithm on GSM8K, a math word problem dataset. Our
empirical findings support our hypothesis that LLMs implicitly infer a latent
variable containing task information.
"
Language Quantized AutoEncoders: Towards Unsupervised Text-Image  Alignment,Hao Liu,http://arxiv.org/pdf/2302.00902v2.pdf,2023-02-02,"['cs.lg', 'cs.cl', 'cs.cv']",2302.00902v2.pdf,"  Recent progress in scaling up large language models has shown impressive
capabilities in performing few-shot learning across a wide range of text-based
tasks. However, a key limitation is that these language models fundamentally
lack visual perception - a crucial attribute needed to extend these models to
be able to interact with the real world and solve vision tasks, such as in
visual-question answering and robotics. Prior works have largely connected
image to text through pretraining and/or fine-tuning on curated image-text
datasets, which can be a costly and expensive process. In order to resolve this
limitation, we propose a simple yet effective approach called
Language-Quantized AutoEncoder (LQAE), a modification of VQ-VAE that learns to
align text-image data in an unsupervised manner by leveraging pretrained
language models (e.g., BERT, RoBERTa). Our main idea is to encode images as
sequences of text tokens by directly quantizing image embeddings using a
pretrained language codebook. We then apply random masking followed by a BERT
model, and have the decoder reconstruct the original image from BERT predicted
text token embeddings. By doing so, LQAE learns to represent similar images
with similar clusters of text tokens, thereby aligning these two modalities
without the use of aligned text-image pairs. This enables few-shot image
classification with large language models (e.g., GPT-3) as well as linear
classification of images based on BERT text features. To the best of our
knowledge, ours is the first work that uses unaligned images for multimodal
tasks by leveraging the power of pretrained language models.
"
The unreasonable effectiveness of few-shot learning for machine  translation,Xavier Garcia,http://arxiv.org/pdf/2302.01398v1.pdf,2023-02-02,['cs.cl'],2302.01398v1.pdf,"  We demonstrate the potential of few-shot translation systems, trained with
unpaired language data, for both high and low-resource language pairs. We show
that with only 5 examples of high-quality translation data shown at inference,
a transformer decoder-only model trained solely with self-supervised learning,
is able to match specialized supervised state-of-the-art models as well as more
general commercial translation systems. In particular, we outperform the best
performing system on the WMT'21 English-Chinese news translation task by only
using five examples of English-Chinese parallel data at inference. Moreover,
our approach in building these models does not necessitate joint multilingual
training or back-translation, is conceptually simple and shows the potential to
extend to the multilingual setting. Furthermore, the resulting models are two
orders of magnitude smaller than state-of-the-art language models. We then
analyze the factors which impact the performance of few-shot translation
systems, and highlight that the quality of the few-shot demonstrations heavily
determines the quality of the translations generated by our models. Finally, we
show that the few-shot paradigm also provides a way to control certain
attributes of the translation -- we show that we are able to control for
regional varieties and formality using only five examples at inference,
paving the way towards controllable machine translation systems.
"
CrossCodeBench: Benchmarking Cross-Task Generalization of Source Code  Models,Changan Niu,http://arxiv.org/pdf/2302.04030v2.pdf,2023-02-08,"['cs.se', 'cs.ai']",2302.04030v2.pdf,"  Despite the recent advances showing that a model pre-trained on large-scale
source code data is able to gain appreciable generalization capability, it
still requires a sizeable amount of data on the target task for fine-tuning.
Moreover, the effectiveness of model generalization is largely affected by the
size and quality of the fine-tuning data, which is detrimental for target tasks
with limited or unavailable resources. Therefore, cross-task generalization,
with the goal of improving the generalization of the model to previously unseen
tasks, is of strong research and application value.
  In this paper, we propose a large-scale benchmark that includes 216 existing
code-related tasks. Then, we annotate each task with the corresponding meta
information such as task description and instruction, which contains detailed
information about the task and a solution guide. This also helps us to easily
create a wide variety of ``training/evaluation'' task splits to evaluate the
various cross-task generalization capabilities of the model. Then we perform
some preliminary experiments to demonstrate that the cross-task generalization
of models can be largely improved by in-context learning methods such as
few-shot learning and learning from task instructions, which shows the
promising prospects of conducting cross-task learning research on our
benchmark. We hope that the collection of the datasets and our benchmark will
facilitate future work that is not limited to cross-task generalization.
"
Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot  Image Captioning,Zhuolin Yang,http://arxiv.org/pdf/2302.04858v2.pdf,2023-02-09,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.ir', 'cs.lg']",2302.04858v2.pdf,"  Augmenting pretrained language models (LMs) with a vision encoder (e.g.,
Flamingo) has obtained the state-of-the-art results in image-to-text
generation. However, these models store all the knowledge within their
parameters, thus often requiring enormous model parameters to model the
abundant visual concepts and very rich textual descriptions. Additionally, they
are inefficient in incorporating new data, requiring a computationally expensive
fine-tuning process. In this work, we introduce a Retrieval-augmented Visual
Language Model, Re-ViLM, built upon the Flamingo, that supports retrieving the
relevant knowledge from the external database for zero and in-context few-shot
image-to-text generations. By storing certain knowledge explicitly in the
external database, our approach reduces the number of model parameters and can
easily accommodate new data during evaluation by simply updating the database.
We also construct an interleaved image and text dataset that facilitates
in-context few-shot learning capabilities. We demonstrate that Re-ViLM
significantly boosts performance for image-to-text generation tasks, especially
for zero-shot and few-shot generation in out-of-domain settings, with 4 times
fewer parameters than baseline methods.
"
Mask-guided BERT for Few Shot Text Classification,Wenxiong Liao,http://arxiv.org/pdf/2302.10447v3.pdf,2023-02-21,"['cs.cl', 'cs.ai']",2302.10447v3.pdf,"  Transformer-based language models have achieved significant success in
various domains. However, the data-intensive nature of the transformer
architecture requires much labeled data, which is challenging in low-resource
scenarios (i.e., few-shot learning (FSL)). The main challenge of FSL is the
difficulty of training robust models on small amounts of samples, which
frequently leads to overfitting. Here we present Mask-BERT, a simple and
modular framework to help BERT-based architectures tackle FSL. The proposed
approach fundamentally differs from existing FSL strategies such as prompt
tuning and meta-learning. The core idea is to selectively apply masks on text
inputs and filter out irrelevant information, which guides the model to focus
on discriminative tokens that influence prediction results. In addition, to
make the text representations from different categories more separable and the
text representations from the same category more compact, we introduce a
contrastive learning loss function. Experimental results on public-domain
benchmark datasets demonstrate the effectiveness of Mask-BERT.
"
Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start  Recommendation,Minchang Kim,http://arxiv.org/pdf/2302.14640v2.pdf,2023-02-28,"['cs.ir', 'cs.lg']",2302.14640v2.pdf,"  Sequential recommenders have made great strides in capturing a user's
preferences. Nevertheless, the cold-start recommendation remains a fundamental
challenge as they typically involve limited user-item interactions for
personalization. Recently, gradient-based meta-learning approaches have emerged
in the sequential recommendation field due to their fast adaptation and
easy-to-integrate abilities. The meta-learning algorithms formulate the
cold-start recommendation as a few-shot learning problem, where each user is
represented as a task to be adapted. While meta-learning algorithms generally
assume that task-wise samples are evenly distributed over classes or values,
user-item interactions in real-world applications do not conform to such a
distribution (e.g., watching favorite videos multiple times, leaving only
positive ratings without any negative ones). Consequently, imbalanced user
feedback, which accounts for the majority of task training data, may dominate
the user adaptation process and prevent meta-learning algorithms from learning
meaningful meta-knowledge for personalized recommendations. To alleviate this
limitation, we propose a novel sequential recommendation framework based on
gradient-based meta-learning that captures the imbalanced rating distribution
of each user and computes adaptive loss for user-specific learning. Our work is
the first to tackle the impact of imbalanced ratings in cold-start sequential
recommendation scenarios. Through extensive experiments conducted on real-world
datasets, we demonstrate the effectiveness of our framework.
"
"Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong  Few-shot Learners",Renrui Zhang,http://arxiv.org/pdf/2303.02151v1.pdf,2023-03-03,"['cs.cv', 'cs.cl']",2303.02151v1.pdf,"  Visual recognition in low-data regimes requires deep neural networks to learn
generalized representations from limited training samples. Recently, CLIP-based
methods have shown promising few-shot performance, benefiting from contrastive
language-image pre-training. We then ask whether more diverse pre-training
knowledge can be cascaded to further assist few-shot
representation learning. In this paper, we propose CaFo, a Cascade of
Foundation models that incorporates diverse prior knowledge of various
pre-training paradigms for better few-shot learning. Our CaFo incorporates
CLIP's language-contrastive knowledge, DINO's vision-contrastive knowledge,
DALL-E's vision-generative knowledge, and GPT-3's language-generative
knowledge. Specifically, CaFo works by 'Prompt, Generate, then Cache'. Firstly,
we leverage GPT-3 to produce textual inputs for prompting CLIP with rich
downstream linguistic semantics. Then, we generate synthetic images via DALL-E
to expand the few-shot training data without any manpower. At last, we
introduce a learnable cache model to adaptively blend the predictions from CLIP
and DINO. By such collaboration, CaFo can fully unleash the potential of
different pre-training methods and unify them to perform state-of-the-art for
few-shot classification. Code is available at
https://github.com/ZrrSkywalker/CaFo.
"
Knowledge-augmented Few-shot Visual Relation Detection,Tianyu Yu,http://arxiv.org/pdf/2303.05342v1.pdf,2023-03-09,"['cs.cv', 'cs.ai']",2303.05342v1.pdf,"  Visual Relation Detection (VRD) aims to detect relationships between objects
for image understanding. Most existing VRD methods rely on thousands of
training samples of each relationship to achieve satisfactory performance. Some
recent papers tackle this problem by few-shot learning with elaborately
designed pipelines and pre-trained word vectors. However, the performance of
existing few-shot VRD models is severely hampered by the poor generalization
capability, as they struggle to handle the vast semantic diversity of visual
relationships. Nonetheless, humans have the ability to learn new relationships
with just a few examples based on their knowledge. Inspired by this, we devise a
knowledge-augmented, few-shot VRD framework leveraging both textual knowledge
and visual relation knowledge to improve the generalization ability of few-shot
VRD. The textual knowledge and visual relation knowledge are acquired from a
pre-trained language model and an automatically constructed visual relation
knowledge graph, respectively. We extensively validate the effectiveness of our
framework. Experiments conducted on three benchmarks from the commonly used
Visual Genome dataset show that our performance surpasses existing
state-of-the-art models with a large improvement.
"
Gradient-Regulated Meta-Prompt Learning for Generalizable  Vision-Language Models,Juncheng Li,http://arxiv.org/pdf/2303.06571v2.pdf,2023-03-12,['cs.cv'],2303.06571v2.pdf,"  Prompt tuning, a recently emerging paradigm, enables the powerful
vision-language pre-training models to adapt to downstream tasks in a
parameter- and data-efficient way, by learning the ``soft prompts'' to condition
frozen pre-training models. Though effective, it is particularly problematic in
the few-shot scenario, where prompt tuning performance is sensitive to the
initialization and requires a time-consuming process to find a good
initialization, thus restricting the fast adaptation ability of the
pre-training models. In addition, prompt tuning could undermine the
generalizability of the pre-training models, because the learnable prompt
tokens are easy to overfit to the limited training samples. To address these
issues, we introduce a novel Gradient-RegulAted Meta-prompt learning (GRAM)
framework that jointly meta-learns an efficient soft prompt initialization for
better adaptation and a lightweight gradient regulating function for strong
cross-domain generalizability in a meta-learning paradigm using only the
unlabeled image-text pre-training data. Rather than designing a specific prompt
tuning method, our GRAM can be easily incorporated into various prompt tuning
methods in a model-agnostic way, and comprehensive experiments show that GRAM
brings about consistent improvement for them in several settings (i.e.,
few-shot learning, cross-domain generalization, cross-dataset generalization,
etc.) over 11 datasets. Further, experiments show that GRAM enables the
orthogonal methods of textual and visual prompt tuning to work in a
mutually-enhanced way, offering better generalizability beyond the uni-modal
prompt tuning methods.
"
Decomposed Prototype Learning for Few-Shot Scene Graph Generation,Xingchen Li,http://arxiv.org/pdf/2303.10863v1.pdf,2023-03-20,['cs.cv'],2303.10863v1.pdf,"  Today's scene graph generation (SGG) models typically require abundant manual
annotations to learn new predicate types. Thus, it is difficult to apply them
to real-world applications with a long-tailed distribution of predicates. In
this paper, we focus on a new promising task of SGG: few-shot SGG (FSSGG).
FSSGG encourages models to be able to quickly transfer previous knowledge and
recognize novel predicates well with only a few examples. Although many
advanced approaches have achieved great success on few-shot learning (FSL)
tasks, straightforwardly extending them to FSSGG is infeasible due to two
intrinsic characteristics of predicate concepts: 1) Each predicate category
commonly has multiple semantic meanings under different contexts. 2) The visual
appearance of relation triplets with the same predicate differs greatly under
different subject-object pairs. Both issues make it hard to model conventional
latent representations for predicate categories with state-of-the-art FSL
methods. To this end, we propose a novel Decomposed Prototype Learning (DPL).
Specifically, we first construct a decomposable prototype space to capture
intrinsic visual patterns of subjects and objects for predicates, and enhance
their feature representations with these decomposed prototypes. Then, we devise
an intelligent metric learner to assign adaptive weights to each support sample
by considering the relevance of their subject-object pairs. We further re-split
the VG dataset and compare DPL with various FSL methods to benchmark this task.
Extensive results show that DPL achieves excellent performance in both base and
novel categories.
"
Supervised Masked Knowledge Distillation for Few-Shot Transformers,Han Lin,http://arxiv.org/pdf/2303.15466v2.pdf,2023-03-25,"['cs.cv', 'cs.ai']",2303.15466v2.pdf,"  Vision Transformers (ViTs) emerge to achieve impressive performance on many
data-abundant computer vision tasks by capturing long-range dependencies among
local features. However, under few-shot learning (FSL) settings on small
datasets with only a few labeled data, ViT tends to overfit and suffers from
severe performance degradation due to its absence of CNN-alike inductive bias.
Previous works in FSL avoid this problem either with the help of
self-supervised auxiliary losses, or through the dexterous use of label
information under supervised settings. But the gap between self-supervised and
supervised few-shot Transformers is still unfilled. Inspired by recent advances
in self-supervised knowledge distillation and masked image modeling (MIM), we
propose a novel Supervised Masked Knowledge Distillation model (SMKD) for
few-shot Transformers which incorporates label information into
self-distillation frameworks. Compared with previous self-supervised methods,
we allow intra-class knowledge distillation on both class and patch tokens, and
introduce the challenging task of masked patch tokens reconstruction across
intra-class images. Experimental results on four few-shot classification
benchmark datasets show that our method with simple design outperforms previous
methods by a large margin and achieves a new state-of-the-art. Detailed
ablation studies confirm the effectiveness of each component of our model. Code
for this paper is available here: https://github.com/HL-hanlin/SMKD.
"
"Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with  Text",Wanrong Zhu,http://arxiv.org/pdf/2304.06939v3.pdf,2023-04-14,"['cs.cv', 'cs.cl']",2304.06939v3.pdf,"  In-context vision and language models like Flamingo support arbitrarily
interleaved sequences of images and text as input. This format not only enables
few-shot learning via interleaving independent supervised (image, text)
examples, but also, more complex prompts involving interaction between images,
e.g., ""What do image A and image B have in common?"" To support this interface,
pretraining occurs over web corpora that similarly contain interleaved
images+text. To date, however, large-scale data of this form have not been
publicly available.
  We release Multimodal C4, an augmentation of the popular text-only C4 corpus
with images interleaved. We use a linear assignment algorithm to place images
into longer bodies of text using CLIP features, a process that we show
outperforms alternatives. Multimodal C4 spans everyday topics like cooking,
travel, technology, etc. A manual inspection of a random sample of documents
shows that a vast majority (88%) of images are topically relevant, and that
linear assignment frequently selects individual sentences specifically
well-aligned with each image (80%). After filtering NSFW images, ads, etc., the
resulting corpus consists of 101.2M documents with 571M images interleaved in
43B English tokens.
"
A Survey on Few-Shot Class-Incremental Learning,Songsong Tian,http://arxiv.org/pdf/2304.08130v2.pdf,2023-04-17,['cs.cv'],2304.08130v2.pdf,"  Large deep learning models are impressive, but they struggle when real-time
data is not available. Few-shot class-incremental learning (FSCIL) poses a
significant challenge for deep neural networks to learn new tasks from just a
few labeled samples without forgetting the previously learned ones. This setup
easily leads to catastrophic forgetting and overfitting problems, severely
affecting model performance. Studying FSCIL helps overcome deep learning model
limitations on data volume and acquisition time, while improving practicality
and adaptability of machine learning models. This paper provides a
comprehensive survey on FSCIL. Unlike previous surveys, we aim to synthesize
few-shot learning and incremental learning, focusing on introducing FSCIL from
two perspectives, while reviewing over 30 theoretical research studies and more
than 20 applied research studies. From the theoretical perspective, we provide
a novel categorization approach that divides the field into five subcategories,
including traditional machine learning methods, meta-learning based methods,
feature and feature space-based methods, replay-based methods, and dynamic
network structure-based methods. We also evaluate the performance of recent
theoretical research on benchmark datasets of FSCIL. From the application
perspective, FSCIL has achieved impressive results in various fields of
computer vision such as image classification, object detection, and image
segmentation, as well as in natural language processing and graph learning. We summarize
the important applications. Finally, we point out potential future research
directions, including applications, problem setups, and theory development.
Overall, this paper offers a comprehensive analysis of the latest advances in
FSCIL from a methodological, performance, and application perspective.
"
Unified Quantum State Tomography and Hamiltonian Learning Using  Transformer Models: A Language-Translation-Like Approach for Quantum Systems,Zheng An,http://arxiv.org/pdf/2304.12010v1.pdf,2023-04-24,['quant-ph'],2304.12010v1.pdf,"  Schr\""odinger's equation serves as a fundamental component in characterizing
quantum systems, wherein both quantum state tomography and Hamiltonian learning
are instrumental in comprehending and interpreting quantum systems. While
numerous techniques exist for carrying out state tomography and learning
Hamiltonians individually, no method has been developed to combine these two
aspects. In this study, we introduce a new approach that employs the attention
mechanism in transformer models to effectively merge quantum state tomography
and Hamiltonian learning. By carefully choosing and preparing the training
data, our method integrates both tasks without altering the model's
architecture, allowing the model to effectively learn the intricate
relationships between quantum states and Hamiltonian. We also demonstrate the
effectiveness of our approach across various quantum systems, ranging from
simple 2-qubit cases to more involved 2D antiferromagnetic Heisenberg
structures. The data collection process is streamlined, as it only necessitates
a one-way generation process beginning with state tomography. Furthermore, the
scalability and few-shot learning capabilities of our method could potentially
minimize the resources required for characterizing and optimizing quantum
systems. Our research provides valuable insights into the relationship between
Hamiltonian structure and quantum system behavior, fostering opportunities for
additional studies on quantum systems and the advancement of quantum
computation and associated technologies.
"
Analogy-Forming Transformers for Few-Shot 3D Parsing,Nikolaos Gkanatsios,http://arxiv.org/pdf/2304.14382v2.pdf,2023-04-27,"['cs.cv', 'cs.ai', 'cs.lg']",2304.14382v2.pdf,"  We present Analogical Networks, a model that encodes domain knowledge
explicitly, in a collection of structured labelled 3D scenes, in addition to
implicitly, as model parameters, and segments 3D object scenes with analogical
reasoning: instead of mapping a scene to part segments directly, our model
first retrieves related scenes from memory and their corresponding part
structures, and then predicts analogous part structures for the input scene,
via an end-to-end learnable modulation mechanism. By conditioning on more than
one retrieved memories, compositions of structures are predicted, that mix and
match parts across the retrieved memories. One-shot, few-shot or many-shot
learning are treated uniformly in Analogical Networks, by conditioning on the
appropriate set of memories, whether taken from a single, few or many memory
exemplars, and inferring analogous parses. We show Analogical Networks are
competitive with state-of-the-art 3D segmentation transformers in many-shot
settings, and outperform them, as well as existing paradigms of meta-learning
and few-shot learning, in few-shot settings. Analogical Networks successfully
segment instances of novel object categories simply by expanding their memory,
without any weight updates. Our code and models are publicly available in the
project webpage: http://analogicalnets.github.io/.
"
HQP: A Human-Annotated Dataset for Detecting Online Propaganda,Abdurahman Maarouf,http://arxiv.org/pdf/2304.14931v2.pdf,2023-04-28,['cs.cl'],2304.14931v2.pdf,"  Online propaganda poses a severe threat to the integrity of societies.
However, existing datasets for detecting online propaganda have a key
limitation: they were annotated using weak labels that can be noisy and even
incorrect. To address this limitation, our work makes the following
contributions: (1) We present HQP: a novel dataset (N=30,000) for detecting
online propaganda with high-quality labels. To the best of our knowledge, HQP
is the first dataset for detecting online propaganda that was created through
human annotation. (2) We show empirically that state-of-the-art language models
fail in detecting online propaganda when trained with weak labels (AUC: 64.03).
In contrast, state-of-the-art language models can accurately detect online
propaganda when trained with our high-quality labels (AUC: 92.25), which is an
improvement of ~44%. (3) To address the cost of labeling, we extend our work to
few-shot learning. Specifically, we show that prompt-based learning using a
small sample of high-quality labels can still achieve a reasonable performance
(AUC: 80.27). Finally, we discuss implications for the NLP community to balance
the cost and quality of labeling. Crucially, our work highlights the importance
of high-quality labels for sensitive NLP tasks such as propaganda detection.
"
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models  via Translation-based Alignment,Zhen Zhang,http://arxiv.org/pdf/2305.03510v2.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.03510v2.pdf,"  Pre-trained vision and language models such as CLIP have witnessed remarkable
success in connecting images and texts with a primary focus on English texts.
Despite recent efforts to extend CLIP to support other languages, disparities
in performance among different languages have been observed due to uneven
resource availability. Additionally, current cross-lingual transfer methods of
those pre-trained models would consume excessive resources for a large number
of languages. Therefore, we propose a new parameter-efficient cross-lingual
transfer learning framework that utilizes a translation-based alignment method
to mitigate multilingual disparities and explores parameter-efficient
fine-tuning methods for parameter-efficient cross-lingual transfer. Extensive
experiments on XTD and Multi30K datasets, covering 11 languages under
zero-shot, few-shot, and full-dataset learning scenarios, show that our
framework significantly reduces the multilingual disparities among languages
and improves cross-lingual transfer results, especially in low-resource
scenarios, while only keeping and fine-tuning an extremely small number of
parameters compared to the full model (e.g., Our framework only requires 0.16\%
additional parameters of a full-model for each language in the few-shot
learning scenario). The codes are available at
\url{https://github.com/eric-ai-lab/PECTVLM}.
"
CodeIE: Large Code Generation Models are Better Few-Shot Information  Extractors,Peng Li,http://arxiv.org/pdf/2305.05711v2.pdf,2023-05-09,"['cs.cl', 'cs.ai']",2305.05711v2.pdf,"  Large language models (LLMs) pre-trained on massive corpora have demonstrated
impressive few-shot learning ability on many NLP tasks. A common practice is to
recast the task into a text-to-text format such that generative LLMs of natural
language (NL-LLMs) like GPT-3 can be prompted to solve it. However, it is
nontrivial to perform information extraction (IE) tasks with NL-LLMs since the
output of the IE task is usually structured and therefore is hard to be
converted into plain text. In this paper, we propose to recast the structured
output in the form of code instead of natural language and utilize generative
LLMs of code (Code-LLMs) such as Codex to perform IE tasks, in particular,
named entity recognition and relation extraction. In contrast to NL-LLMs, we
show that Code-LLMs can be well-aligned with these IE tasks by designing
code-style prompts and formulating these IE tasks as code generation tasks.
Experiment results on seven benchmarks show that our method consistently
outperforms fine-tuning moderate-size pre-trained models specially designed for
IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further
conduct a series of in-depth analyses to demonstrate the merits of leveraging
Code-LLMs for IE tasks.
"
Qualifying Chinese Medical Licensing Examination with Knowledge Enhanced  Generative Pre-training Model,Jiageng Wu,http://arxiv.org/pdf/2305.10163v2.pdf,2023-05-17,"['cs.cl', 'cs.ai', 'cs.cy']",2305.10163v2.pdf,"  Generative Pre-Training (GPT) models like ChatGPT have demonstrated
exceptional performance in various Natural Language Processing (NLP) tasks.
Although ChatGPT has been integrated into the overall workflow to boost
efficiency in many domains, the lack of flexibility in the finetuning process
hinders its applications in areas that demand extensive domain expertise and
semantic knowledge, such as healthcare. In this paper, we evaluate ChatGPT on
the China National Medical Licensing Examination (CNMLE) and propose a novel
approach to improve ChatGPT from two perspectives: integrating medical domain
knowledge and enabling few-shot learning. By using a simple but effective
retrieval method, medical background knowledge is extracted as semantic
instructions to guide the inference of ChatGPT. Similarly, relevant medical
questions are identified and fed as demonstrations to ChatGPT. Experimental
results show that directly applying ChatGPT fails to qualify the CNMLE at a
score of 51 (i.e., only 51\% of questions are answered correctly). In contrast,
our knowledge-enhanced model achieves a high score of 70 on CNMLE-2022, which
not only passes the qualification but also surpasses the average score of humans
(61). This research demonstrates the potential of knowledge-enhanced ChatGPT to
serve as versatile medical assistants, capable of analyzing real-world medical
problems in a more accessible, user-friendly, and adaptable manner.
"
PointGPT: Auto-regressively Generative Pre-training from Point Clouds,Guangyan Chen,http://arxiv.org/pdf/2305.11487v2.pdf,2023-05-19,['cs.cv'],2305.11487v2.pdf,"  Large language models (LLMs) based on the generative pre-training transformer
(GPT) have demonstrated remarkable effectiveness across a diverse range of
downstream tasks. Inspired by the advancements of the GPT, we present PointGPT,
a novel approach that extends the concept of GPT to point clouds, addressing
the challenges associated with disorder properties, low information density,
and task gaps. Specifically, a point cloud auto-regressive generation task is
proposed to pre-train transformer models. Our method partitions the input point
cloud into multiple point patches and arranges them in an ordered sequence
based on their spatial proximity. Then, an extractor-generator based
transformer decoder, with a dual masking strategy, learns latent
representations conditioned on the preceding point patches, aiming to predict
the next one in an auto-regressive manner. Our scalable approach allows for
learning high-capacity models that generalize well, achieving state-of-the-art
performance on various downstream tasks. In particular, our approach achieves
classification accuracies of 94.9% on the ModelNet40 dataset and 93.4% on the
ScanObjectNN dataset, outperforming all other transformer models. Furthermore,
our method also attains new state-of-the-art accuracies on all four few-shot
learning benchmarks.
"
A Survey of Diffusion Models in Natural Language Processing,Hao Zou,http://arxiv.org/pdf/2305.14671v2.pdf,2023-05-24,['cs.cl'],2305.14671v2.pdf,"  This survey paper provides a comprehensive review of the use of diffusion
models in natural language processing (NLP). Diffusion models are a class of
mathematical models that aim to capture the diffusion of information or signals
across a network or manifold. In NLP, diffusion models have been used in a
variety of applications, such as natural language generation, sentiment
analysis, topic modeling, and machine translation. This paper discusses the
different formulations of diffusion models used in NLP, their strengths and
limitations, and their applications. We also perform a thorough comparison
between diffusion models and alternative generative models, specifically
highlighting the autoregressive (AR) models, while also examining how diverse
architectures incorporate the Transformer in conjunction with diffusion models.
Compared to AR models, diffusion models have significant advantages for
parallel generation, text interpolation, token-level controls such as syntactic
structures and semantic contents, and robustness. Exploring further
permutations of integrating Transformers into diffusion models would be a
valuable pursuit. Also, the development of multimodal diffusion models and
large-scale diffusion language models with notable capabilities for few-shot
learning would be important directions for the future advance of diffusion
models in NLP.
"
Benchmarking Arabic AI with Large Language Models,Ahmed Abdelali,http://arxiv.org/pdf/2305.14982v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",2305.14982v1.pdf,"  With large Foundation Models (FMs), language technologies (AI in general) are
entering a new paradigm: eliminating the need for developing large-scale
task-specific datasets and supporting a variety of tasks through set-ups
ranging from zero-shot to few-shot learning. However, understanding FMs
capabilities requires a systematic benchmarking effort by comparing FMs
performance with the state-of-the-art (SOTA) task-specific models. With that
goal, past work focused on the English language and included a few efforts with
multiple languages. Our study contributes to ongoing research by evaluating FMs
performance for standard Arabic NLP and Speech processing, including a range of
tasks from sequence tagging to content classification across diverse domains.
We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM,
addressing 33 unique tasks using 59 publicly available datasets resulting in 96
test setups. For a few tasks, FMs perform on par with or exceed the performance
of the SOTA models, but for the majority they under-perform. Given the importance
of prompts for FM performance, we discuss our prompt strategies in detail and
elaborate on our findings. Our future work on Arabic AI will explore few-shot
prompting, expand the range of tasks, and investigate additional open-source
models.
"
Sentiment Analysis in the Era of Large Language Models: A Reality Check,Wenxuan Zhang,http://arxiv.org/pdf/2305.15005v1.pdf,2023-05-24,['cs.cl'],2305.15005v1.pdf,"  Sentiment analysis (SA) has been a long-standing research area in natural
language processing. It can offer rich insights into human sentiments and
opinions and has thus seen considerable interest from both academia and
industry. With the advent of large language models (LLMs) such as ChatGPT,
there is a great potential for their employment on SA problems. However, the
extent to which existing LLMs can be leveraged for different sentiment analysis
tasks remains unclear. This paper aims to provide a comprehensive investigation
into the capabilities of LLMs in performing various sentiment analysis tasks,
from conventional sentiment classification to aspect-based sentiment analysis
and multifaceted analysis of subjective texts. We evaluate performance across
13 tasks on 26 datasets and compare the results against small language models
(SLMs) trained on domain-specific datasets. Our study reveals that while LLMs
demonstrate satisfactory performance in simpler tasks, they lag behind in more
complex tasks requiring deeper understanding or structured sentiment
information. However, LLMs significantly outperform SLMs in few-shot learning
settings, suggesting their potential when annotation resources are limited. We
also highlight the limitations of current evaluation practices in assessing
LLMs' SA abilities and propose a novel benchmark, \textsc{SentiEval}, for a
more comprehensive and realistic evaluation. Data and code during our
investigations are available at
\url{https://github.com/DAMO-NLP-SG/LLM-Sentiment}.
"
Impact of Large Language Models on Generating Software Specifications,Danning Xie,http://arxiv.org/pdf/2306.03324v2.pdf,2023-06-06,['cs.se'],2306.03324v2.pdf,"  Software specifications are essential for ensuring the reliability of
software systems. Existing specification extraction approaches, however, suffer
from limited generalizability and require manual efforts. The recent emergence
of Large Language Models (LLMs), which have been successfully applied to
numerous software engineering tasks, offers a promising avenue for automating
this process. In this paper, we conduct the first empirical study to evaluate
the capabilities of LLMs for generating software specifications from software
comments or documentation. We evaluate LLMs' performance with Few Shot Learning
(FSL), enabling LLMs to generalize from a small number of examples, as well as
different prompt construction strategies, and compare the performance of LLMs
with traditional approaches. Additionally, we conduct a comparative diagnosis
of the failure cases from both LLMs and traditional methods, identifying their
unique strengths and weaknesses. Lastly, we conduct extensive experiments on 15
state of the art LLMs, evaluating their performance and cost effectiveness for
generating software specifications.
  Our results show that with FSL, LLMs outperform traditional methods (by
5.6%), and more sophisticated prompt construction strategies can further
enlarge this performance gap (by 5.1 to 10.0%). Yet, LLMs suffer from their
unique challenges, such as ineffective prompts and the lack of domain
knowledge, which together account for 53 to 60% of LLM unique failures. The
strong performance of open source models (e.g., StarCoder) makes closed source
models (e.g., GPT 3 Davinci) less desirable due to size and cost. Our study
offers valuable insights for future research to improve specification
generation.
"
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning,Arnav Chavan,http://arxiv.org/pdf/2306.07967v2.pdf,2023-06-13,"['cs.lg', 'cs.ai', 'cs.cv']",2306.07967v2.pdf,"  We present Generalized LoRA (GLoRA), an advanced approach for universal
parameter-efficient fine-tuning tasks. Enhancing Low-Rank Adaptation (LoRA),
GLoRA employs a generalized prompt module to optimize pre-trained model weights
and adjust intermediate activations, providing more flexibility and capability
across diverse tasks and datasets. Moreover, GLoRA facilitates efficient
parameter adaptation by employing a scalable, modular, layer-wise structure
search that learns an individual adapter for each layer. Originating from a unified
mathematical formulation, GLoRA exhibits strong transfer learning, few-shot
learning and domain generalization abilities, as it adapts to new tasks through
not only weights but also additional dimensions like activations. Comprehensive
experiments demonstrate that GLoRA outperforms all previous methods in natural,
specialized, and structured vision benchmarks, achieving superior accuracy with
fewer parameters and computations. The proposed method on LLaMA-1 and LLaMA-2
also shows considerable enhancements compared to the original LoRA in the
language domain. Furthermore, our structural re-parameterization design ensures
that GLoRA incurs no extra inference cost, rendering it a practical solution
for resource-limited applications. Code and models are available at:
https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA.
"
Democratizing LLMs for Low-Resource Languages by Leveraging their  English Dominant Abilities with Linguistically-Diverse Prompts,Xuan-Phi Nguyen,http://arxiv.org/pdf/2306.11372v1.pdf,2023-06-20,"['cs.cl', 'cs.ai']",2306.11372v1.pdf,"  Large language models (LLMs) are known to effectively perform tasks by simply
observing few exemplars. However, in low-resource languages, obtaining such
hand-picked exemplars can still be challenging, where unsupervised techniques
may be necessary. Moreover, competent generative capabilities of LLMs are
observed only in high-resource languages, while their performances among
under-represented languages fall behind due to pre-training data imbalance. To
elicit LLMs' abilities in low-resource languages without any supervised data,
we propose to assemble synthetic exemplars from a diverse set of high-resource
languages to prompt the LLMs to translate from any language into English. These
prompts are then used to create intra-lingual exemplars to perform tasks in the
target languages. Our unsupervised prompting method performs on par with
supervised few-shot learning in LLMs of different sizes for translations
between English and 13 Indic and 21 African low-resource languages. We also
show that fine-tuning a 7B model on data generated from our method helps it
perform competitively with a 175B model. In non-English translation tasks, our
method even outperforms supervised prompting by up to 3 chrF++ in many
low-resource languages. When evaluated on zero-shot multilingual summarization,
our method surpasses other English-pivoting baselines by up to 4 ROUGE-L and is
also favored by GPT-4.
"
ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided  Diffusion,Yingjun Du,http://arxiv.org/pdf/2306.14770v2.pdf,2023-06-26,"['cs.lg', 'cs.ai']",2306.14770v2.pdf,"  Prototype-based meta-learning has emerged as a powerful technique for
addressing few-shot learning challenges. However, estimating a deterministic
prototype using a simple average function from a limited number of examples
remains a fragile process. To overcome this limitation, we introduce ProtoDiff,
a novel framework that leverages a task-guided diffusion model during the
meta-training phase to gradually generate prototypes, thereby providing
efficient class representations. Specifically, a set of prototypes is optimized
to achieve per-task prototype overfitting, enabling accurate estimation of the
overfitted prototypes for individual tasks. Furthermore, we introduce a
task-guided diffusion process within the prototype space, enabling the
meta-learning of a generative process that transitions from a vanilla prototype
to an overfitted prototype. ProtoDiff gradually generates task-specific
prototypes from random noise during the meta-test stage, conditioned on the
limited samples available for the new task. Furthermore, to expedite training
and enhance ProtoDiff's performance, we propose the utilization of residual
prototype learning, which leverages the sparsity of the residual prototype. We
conduct thorough ablation studies to demonstrate its ability to accurately
capture the underlying prototype distribution and enhance generalization. The
new state-of-the-art performance on within-domain, cross-domain, and few-task
few-shot classification further substantiates the benefit of ProtoDiff.
"
Effective Transfer of Pretrained Large Visual Model for Fabric Defect  Segmentation via Specific Knowledge Injection,Zhewei Chen,http://arxiv.org/pdf/2306.16186v1.pdf,2023-06-28,"['cs.cv', 'cs.ai', 'i.2.10; i.4.9; i.5.4']",2306.16186v1.pdf,"  Fabric defect segmentation is integral to textile quality control. Despite
this, the scarcity of high-quality annotated data and the diversity of fabric
defects present significant challenges to the application of deep learning in
this field. These factors limit the generalization and segmentation performance
of existing models, impeding their ability to handle the complexity of diverse
fabric types and defects. To overcome these obstacles, this study introduces an
innovative method to infuse specialized knowledge of fabric defects into the
Segment Anything Model (SAM), a large-scale visual model. By introducing and
training a unique set of fabric defect-related parameters, this approach
seamlessly integrates domain-specific knowledge into SAM without the need for
extensive modifications to the pre-existing model parameters. The revamped SAM
model leverages generalized image understanding learned from large-scale
natural image datasets while incorporating fabric defect-specific knowledge,
ensuring its proficiency in fabric defect segmentation tasks. The experimental
results reveal a significant improvement in the model's segmentation
performance, attributable to this novel amalgamation of generic and
fabric-specific knowledge. When benchmarking against popular existing
segmentation models across three datasets, our proposed model demonstrates a
substantial leap in performance. Its impressive results in cross-dataset
comparisons and few-shot learning experiments further demonstrate its potential
for practical applications in textile quality control.
"
Prompting classes: Exploring the Power of Prompt Class Learning in  Weakly Supervised Semantic Segmentation,Balamurali Murugesan,http://arxiv.org/pdf/2307.00097v2.pdf,2023-06-30,['cs.cv'],2307.00097v2.pdf,"  Recently, CLIP-based approaches have exhibited remarkable performance on
generalization and few-shot learning tasks, fueled by the power of contrastive
language-vision pre-training. In particular, prompt tuning has emerged as an
effective strategy to adapt the pre-trained language-vision models to
downstream tasks by employing task-related textual tokens. Motivated by this
progress, in this work we question whether other fundamental problems, such as
weakly supervised semantic segmentation (WSSS), can benefit from prompt tuning.
Our findings reveal two interesting observations that shed light on the impact
of prompt tuning on WSSS. First, modifying only the class token of the text
prompt results in a greater impact on the Class Activation Map (CAM), compared
to arguably more complex strategies that optimize the context. And second, the
class token associated with the image ground truth does not necessarily
correspond to the category that yields the best CAM. Motivated by these
observations, we introduce a novel approach based on a PrOmpt cLass lEarning
(POLE) strategy. Through extensive experiments we demonstrate that our simple,
yet efficient approach achieves SOTA performance in a well-known WSSS
benchmark. These results highlight not only the benefits of language-vision
models in WSSS but also the potential of prompt learning for this problem. The
code is available at https://github.com/rB080/WSS_POLE.
"
Meta-training with Demonstration Retrieval for Efficient Few-shot  Learning,Aaron Mueller,http://arxiv.org/pdf/2307.00119v1.pdf,2023-06-30,['cs.cl'],2307.00119v1.pdf,"  Large language models show impressive results on few-shot NLP tasks. However,
these models are memory and computation-intensive. Meta-training allows one to
leverage smaller models for few-shot generalization in a domain-general and
task-agnostic manner; however, these methods alone result in models that may
not have sufficient parameterization or knowledge to adapt quickly to a large
variety of tasks. To overcome this issue, we propose meta-training with
demonstration retrieval, where we use a dense passage retriever to retrieve
semantically similar labeled demonstrations to each example for more varied
supervision. By separating external knowledge from model parameters, we can use
meta-training to train parameter-efficient models that generalize well on a
larger variety of tasks. We construct a meta-training set from UnifiedQA and
CrossFit, and propose a demonstration bank based on UnifiedQA tasks. To our
knowledge, our work is the first to combine retrieval with meta-training, to
use DPR models to retrieve demonstrations, and to leverage demonstrations from
many tasks simultaneously, rather than randomly sampling demonstrations from
the training set of the target task. Our approach outperforms a variety of
targeted parameter-efficient and retrieval-augmented few-shot methods on QA,
NLI, and text classification tasks (including SQuAD, QNLI, and TREC). Our
approach can be meta-trained and fine-tuned quickly on a single GPU.
"
TablEye: Seeing small Tables through the Lens of Images,Seung-eon Lee,http://arxiv.org/pdf/2307.02491v1.pdf,2023-07-04,"['cs.lg', 'cs.ai']",2307.02491v1.pdf,"  The exploration of few-shot tabular learning becomes imperative. Tabular data
is a versatile representation that captures diverse information, yet it is not
exempt from limitations related to data properties and model size. Labeling extensive
tabular data can be challenging, and it may not be feasible to capture every
important feature. Few-shot tabular learning, however, remains relatively
unexplored, primarily due to scarcity of shared information among independent
datasets and the inherent ambiguity in defining boundaries within tabular data.
To the best of our knowledge, no meaningful and unrestricted few-shot tabular
learning techniques have been developed without imposing constraints on the
dataset. In this paper, we propose an innovative framework called TablEye,
which aims to overcome the limit of forming prior knowledge for tabular data by
adopting domain transformation. It facilitates domain transformation by
generating tabular images, which effectively conserve the intrinsic semantics
of the original tabular data. This approach harnesses rigorously tested
few-shot learning algorithms and embedding functions to acquire and apply prior
knowledge. Leveraging shared data domains allows us to utilize this prior
knowledge, originally learned from the image domain. Specifically, TablEye
demonstrated superior performance, outstripping TabLLM in a 4-shot task
by up to 0.11 AUC and STUNT in a 1-shot setting, where it led by
3.17% accuracy on average.
"
Text Descriptions are Compressive and Invariant Representations for  Visual Learning,Zhili Feng,http://arxiv.org/pdf/2307.04317v2.pdf,2023-07-10,"['cs.cv', 'cs.lg']",2307.04317v2.pdf,"  Modern image classification is based upon directly predicting classes via
large discriminative networks, which do not directly contain information about
the intuitive visual features that may constitute a classification decision.
Recently, work in vision-language models (VLM) such as CLIP has provided ways
to specify natural language descriptions of image classes, but typically
focuses on providing single descriptions for each class. In this work, we
demonstrate that an alternative approach, in line with humans' understanding of
multiple visual features per class, can also provide compelling performance in
the robust few-shot learning setting. In particular, we introduce a novel
method, \textit{SLR-AVD (Sparse Logistic Regression using Augmented Visual
Descriptors)}. This method first automatically generates multiple visual
descriptions of each class via a large language model (LLM), then uses a VLM to
translate these descriptions to a set of visual feature embeddings of each
image, and finally uses sparse logistic regression to select a relevant subset
of these features to classify each image. Core to our approach is the fact
that, information-theoretically, these descriptive features are more invariant
to domain shift than traditional image embeddings, even though the VLM training
process is not explicitly designed for invariant representation learning. These
invariant descriptive features also compose a better input compression scheme.
When combined with finetuning, we show that SLR-AVD is able to outperform
existing state-of-the-art finetuning approaches on both in-distribution and
out-of-distribution performance.
"
DialogStudio: Towards Richest and Most Diverse Unified Dataset  Collection for Conversational AI,Jianguo Zhang,http://arxiv.org/pdf/2307.10172v2.pdf,2023-07-19,"['cs.cl', 'cs.ai']",2307.10172v2.pdf,"  Despite advancements in conversational AI, language models encounter
challenges in handling diverse conversational tasks, and existing dialogue
dataset collections often lack diversity and comprehensiveness. To tackle these
issues, we introduce DialogStudio: the largest and most diverse collection of
dialogue datasets, unified under a consistent format while preserving their
original information. Our collection encompasses data from open-domain
dialogues, task-oriented dialogues, natural language understanding,
conversational recommendation, dialogue summarization, and knowledge-grounded
dialogues, making it an incredibly rich and diverse resource for dialogue
research and model training. To further enhance the utility of DialogStudio, we
identify the licenses for each dataset and design domain-aware prompts for
selected dialogues to facilitate instruction-aware fine-tuning. Furthermore, we
develop conversational AI models using the dataset collection, and our
experiments in both zero-shot and few-shot learning scenarios demonstrate the
superiority of DialogStudio. To improve transparency and support dataset and
task-based research, as well as language model pre-training, all datasets,
licenses, codes, and models associated with DialogStudio are made publicly
accessible at https://github.com/salesforce/DialogStudio
"
Mutual Reinforcement Effects in Japanese Sentence Classification and  Named Entity Recognition Tasks,Chengguang Gan,http://arxiv.org/pdf/2307.10291v2.pdf,2023-07-18,['cs.cl'],2307.10291v2.pdf,"  Information extraction (IE) is a crucial subfield within natural language
processing. However, for the traditionally segmented approach to sentence
classification and Named Entity Recognition, the intricate interactions between
these individual subtasks remain largely uninvestigated. In this study, we
propose an integrative analysis, converging sentence classification with Named
Entity Recognition, with the objective of unveiling and comprehending the mutual
reinforcement effect within these two information extraction subtasks. To
achieve this, we introduce a Sentence Classification and Named Entity
Recognition Multi-task (SCNM) approach that combines Sentence Classification
(SC) and Named Entity Recognition (NER). We develop a Sentence-to-Label
Generation (SLG) framework for SCNM and construct a Wikipedia dataset
containing both SC and NER. Using a format converter, we unify input formats
and employ a generative model to generate SC-labels, NER-labels, and associated
text segments. We propose a Constraint Mechanism (CM) to improve generated
format accuracy. Our results show SC accuracy increased by 1.13 points and NER
by 1.06 points in SCNM compared to standalone tasks, with CM raising format
accuracy from 63.61 to 100. The findings indicate mutual reinforcement effects
between SC and NER, and integration enhances both tasks' performance. We
additionally implemented the SLG framework on a standalone SC task. It yielded
superior accuracies compared to the baseline on two distinct Japanese SC
datasets. Notably, in the few-shot learning experiments, the SLG framework shows
much better performance than the fine-tuning method. These empirical findings
contribute additional evidence to affirm the efficacy of the SLG framework.
"
CohortGPT: An Enhanced GPT for Participant Recruitment in Clinical Study,Zihan Guan,http://arxiv.org/pdf/2307.11346v1.pdf,2023-07-21,"['cs.cl', 'cs.ai']",2307.11346v1.pdf,"  Participant recruitment based on unstructured medical texts such as clinical
notes and radiology reports has been a challenging yet important task for the
cohort establishment in clinical research. Recently, Large Language Models
(LLMs) such as ChatGPT have achieved tremendous success in various downstream
tasks thanks to their promising performance in language understanding,
inference, and generation. It is then natural to test their feasibility in
solving the cohort recruitment task, which involves the classification of a
given paragraph of medical text into disease label(s). However, when applied to
knowledge-intensive problem settings such as medical text classification, where
the LLMs are expected to understand the decision made by human experts and
accurately identify the implied disease labels, the LLMs show a mediocre
performance. A possible explanation is that, by only using the medical text,
the LLMs neglect to use the rich context of additional information that
languages afford. To this end, we propose to use a knowledge graph as auxiliary
information to guide the LLMs in making predictions. Moreover, to further help
the LLMs adapt to the problem setting, we apply a chain-of-thought (CoT) sample
selection strategy enhanced by reinforcement learning, which selects a set of
CoT samples given each individual medical report. Experimental results and
various ablation studies show that our few-shot learning method achieves
satisfactory performance compared with fine-tuning strategies and gains superb
advantages when the available data is limited. The code and sample dataset of
the proposed CohortGPT model are available at:
https://anonymous.4open.science/r/CohortGPT-4872/
"
Identifying Misinformation on YouTube through Transcript Contextual  Analysis with Transformer Models,Christos Christodoulou,http://arxiv.org/pdf/2307.12155v1.pdf,2023-07-22,['cs.cl'],2307.12155v1.pdf,"  Misinformation on YouTube is a significant concern, necessitating robust
detection strategies. In this paper, we introduce a novel methodology for video
classification, focusing on the veracity of the content. We convert the
conventional video classification task into a text classification task by
leveraging the textual content derived from the video transcripts. We employ
advanced machine learning techniques like transfer learning to solve the
classification challenge. Our approach incorporates two forms of transfer
learning: (a) fine-tuning base transformer models such as BERT, RoBERTa, and
ELECTRA, and (b) few-shot learning using sentence-transformers MPNet and
RoBERTa-large. We apply the trained models to three datasets: (a) YouTube
Vaccine-misinformation related videos, (b) YouTube Pseudoscience videos, and
(c) Fake-News dataset (a collection of articles). Including the Fake-News
dataset extended the evaluation of our approach beyond YouTube videos. Using
these datasets, we evaluated the models in distinguishing valid information from
misinformation. The fine-tuned models yielded Matthews Correlation
Coefficient>0.81, accuracy>0.90, and F1 score>0.90 in two of three datasets.
Interestingly, the few-shot models outperformed the fine-tuned ones by 20% in
both Accuracy and F1 score for the YouTube Pseudoscience dataset, highlighting
the potential utility of this approach -- especially in the context of limited
training data.
"
ChatGPT for Arabic Grammatical Error Correction,Sang Yun Kwon,http://arxiv.org/pdf/2308.04492v1.pdf,2023-08-08,['cs.ai'],2308.04492v1.pdf,"  Recently, large language models (LLMs) fine-tuned to follow human instruction
have exhibited significant capabilities in various English NLP tasks. However,
their performance in grammatical error correction (GEC) tasks, particularly in
non-English languages, remains significantly unexplored. In this paper, we
delve into the abilities of instruction fine-tuned LLMs in Arabic GEC, a task made
complex due to Arabic's rich morphology. Our findings suggest that various
prompting methods, coupled with (in-context) few-shot learning, demonstrate
considerable effectiveness, with GPT-4 achieving up to $65.49$
F\textsubscript{1} score under expert prompting (approximately $5$ points
higher than our established baseline). This highlights the potential of LLMs in
low-resource settings, offering a viable approach for generating useful
synthetic data for model training. Despite these positive results, we find that
instruction fine-tuned models, regardless of their size, significantly
underperform compared to fully fine-tuned models of significantly smaller
sizes. This disparity highlights substantial room for improvement for LLMs.
Inspired by methods from low-resource machine translation, we also develop a
method exploiting synthetic data that significantly outperforms previous models
on two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with
$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.
"
LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking,Fahim Dalvi,http://arxiv.org/pdf/2308.04945v1.pdf,2023-08-09,"['cs.cl', 'cs.ai', '68t50', 'f.2.2; i.2.7']",2308.04945v1.pdf,"  The recent development and success of Large Language Models (LLMs)
necessitate an evaluation of their performance across diverse NLP tasks in
different languages. Although several frameworks have been developed and made
publicly available, their customization capabilities for specific tasks and
datasets are often complex for different users. In this study, we introduce the
LLMeBench framework. Initially developed to evaluate Arabic NLP tasks using
OpenAI's GPT and BLOOM models, it can be seamlessly customized for any NLP task
and model, regardless of language. The framework also features zero- and
few-shot learning settings. A new custom dataset can be added in less than 10
minutes, and users can use their own model API keys to evaluate the task at
hand. The developed framework has been already tested on 31 unique NLP tasks
using 53 publicly available datasets within 90 experimental setups, involving
approximately 296K data points. We plan to open-source the framework for the
community (https://github.com/qcri/LLMeBench/). A video demonstrating the
framework is available online (https://youtu.be/FkQn4UjYA0s).
"
Link-Context Learning for Multimodal LLMs,Yan Tai,http://arxiv.org/pdf/2308.07891v1.pdf,2023-08-15,"['cs.cv', 'cs.cl']",2308.07891v1.pdf,"  The ability to learn from context with novel concepts, and deliver
appropriate responses are essential in human conversations. Despite current
Multimodal Large Language Models (MLLMs) and Large Language Models (LLMs) being
trained on mega-scale datasets, recognizing unseen images or understanding
novel concepts in a training-free manner remains a challenge. In-Context
Learning (ICL) explores training-free few-shot learning, where models are
encouraged to ``learn to learn"" from limited tasks and generalize to unseen
tasks. In this work, we propose link-context learning (LCL), which emphasizes
""reasoning from cause and effect"" to augment the learning capabilities of
MLLMs. LCL goes beyond traditional ICL by explicitly strengthening the causal
relationship between the support set and the query set. By providing
demonstrations with causal links, LCL guides the model to discern not only the
analogy but also the underlying causal associations between data points, which
empowers MLLMs to recognize unseen images and understand novel concepts more
effectively. To facilitate the evaluation of this novel approach, we introduce
the ISEKAI dataset, comprising exclusively unseen generated image-label
pairs designed for link-context learning. Extensive experiments show that our
LCL-MLLM exhibits strong link-context learning capabilities to novel concepts
over vanilla MLLMs. Code and data will be released at
https://github.com/isekai-portal/Link-Context-Learning.
"
CodeCoT and Beyond: Learning to Program and Test like a Developer,Dong Huang,http://arxiv.org/pdf/2308.08784v1.pdf,2023-08-17,"['cs.se', 'cs.ai']",2308.08784v1.pdf,"  In natural language processing, transformer-based large language models
(LLMs) like GPT-x models developed by OpenAI have revolutionized the landscape.
Despite their impressive capabilities, these models often encounter challenges
when handling tasks that differ from their training data, resulting in
compromised performance. To address this, few-shot learning has emerged as a
valuable technique, allowing LLMs to adapt with minimal task-specific data. One
innovative strategy, known as Chain-of-Thought Prompting (CoT), has been
introduced to guide LLMs in revealing cognitive processes during multi-step
reasoning. In this paper, we propose Code Chain-of-Thought~(CodeCoT), which
consists of two components: the Vanilla CodeCoT and the Self-exam CodeCoT. The
latter incorporates self-examination, empowering the model to iteratively
generate code, formulate test cases, and refine its outputs. Specifically, the
process entails the generation of test examples by the model corresponding to
the code it is tasked to implement. If it fails on the test examples, then it
regenerates the code based on the erroneous code and associated error types.
Through comprehensive experiments, we observed that both techniques
significantly enhance code generation accuracy across various LLM variants. Our
evaluation results reveal that CodeCoT improves the code generation
effectiveness, including an unprecedented pass@1 accuracy of 79.27\% using the
Self-exam CodeCoT approach on the gpt-3.5-turbo-0613 model in the HumanEval
dataset.
"
Large Language Models Vote: Prompting for Rare Disease Identification,David Oniani,http://arxiv.org/pdf/2308.12890v2.pdf,2023-08-24,"['cs.cl', 'cs.ai']",2308.12890v2.pdf,"  The emergence of generative Large Language Models (LLMs) emphasizes the need
for accurate and efficient prompting approaches. LLMs are often applied in
Few-Shot Learning (FSL) contexts, where tasks are executed with minimal
training data. FSL has become popular in many Artificial Intelligence (AI)
subdomains, including AI for health. Rare diseases affect a small fraction of
the population. Rare disease identification from clinical notes inherently
requires FSL techniques due to limited data availability. Manual data
collection and annotation is both expensive and time-consuming. In this paper,
we propose Models-Vote Prompting (MVP), a flexible prompting approach for
improving the performance of LLM queries in FSL settings. MVP works by
prompting numerous LLMs to perform the same tasks and then conducting a
majority vote on the resulting outputs. This method achieves improved results
compared to any single model in the ensemble on one-shot rare disease identification and
classification tasks. We also release a novel rare disease dataset for FSL,
available to those who signed the MIMIC-IV Data Use Agreement (DUA).
Furthermore, because MVP prompts each model multiple times, it substantially
increases the time needed for manual annotation; to address this, we
assess the feasibility of using JSON for automating generative LLM evaluation.
"
Diagnosing Infeasible Optimization Problems Using Large Language Models,Hao Chen,http://arxiv.org/pdf/2308.12923v1.pdf,2023-08-23,"['cs.hc', 'cs.cl', 'cs.lg', 'math.oc']",2308.12923v1.pdf,"  Decision-making problems can be represented as mathematical optimization
models, finding wide applications in fields such as economics, engineering and
manufacturing, transportation, and health care. Optimization models are
mathematical abstractions of the problem of making the best decision while
satisfying a set of requirements or constraints. One of the primary barriers to
deploying these models in practice is the challenge of helping practitioners
understand and interpret such models, particularly when they are infeasible,
meaning no decision satisfies all the constraints. Existing methods for
diagnosing infeasible optimization models often rely on expert systems,
necessitating significant background knowledge in optimization. In this paper,
we introduce OptiChat, a first-of-its-kind natural language-based system
equipped with a chatbot GUI for engaging in interactive conversations about
infeasible optimization models. OptiChat can provide natural language
descriptions of the optimization model itself, identify potential sources of
infeasibility, and offer suggestions to make the model feasible. The
implementation of OptiChat is built on GPT-4, which interfaces with an
optimization solver to identify the minimal subset of constraints that render
the entire optimization problem infeasible, also known as the Irreducible
Infeasible Subset (IIS). We utilize few-shot learning, expert chain-of-thought,
key-retrieve, and sentiment prompts to enhance OptiChat's reliability. Our
experiments demonstrate that OptiChat assists both expert and non-expert users
in improving their understanding of the optimization models, enabling them to
quickly identify the sources of infeasibility.
"
Less is More: Towards Efficient Few-shot 3D Semantic Segmentation via  Training-free Networks,Xiangyang Zhu,http://arxiv.org/pdf/2308.12961v1.pdf,2023-08-24,['cs.cv'],2308.12961v1.pdf,"  To reduce the reliance on large-scale datasets, recent works in 3D
segmentation resort to few-shot learning. Current 3D few-shot semantic
segmentation methods first pre-train the models on `seen' classes, and then
evaluate their generalization performance on `unseen' classes. However, the
prior pre-training stage not only introduces excessive time overhead, but also
incurs a significant domain gap on `unseen' classes. To tackle these issues, we
propose an efficient Training-free Few-shot 3D Segmentation network, TFS3D, and
a further training-based variant, TFS3D-T. Without any learnable parameters,
TFS3D extracts dense representations by trigonometric positional encodings, and
achieves comparable performance to previous training-based methods. Due to the
elimination of pre-training, TFS3D can alleviate the domain gap issue and save
a substantial amount of time. Building upon TFS3D, TFS3D-T only requires
training a lightweight query-support transferring attention (QUEST) module, which
enhances the interaction between the few-shot query and support data.
Experiments demonstrate TFS3D-T improves previous state-of-the-art methods by
+6.93% and +17.96% mIoU respectively on S3DIS and ScanNet, while reducing the
training time by -90%, indicating superior effectiveness and efficiency.
"
"LongBench: A Bilingual, Multitask Benchmark for Long Context  Understanding",Yushi Bai,http://arxiv.org/pdf/2308.14508v1.pdf,2023-08-28,['cs.cl'],2308.14508v1.pdf,"  Although large language models (LLMs) demonstrate impressive performance for
many language tasks, most of them can only handle texts a few thousand tokens
long, limiting their applications on longer sequence inputs, such as books,
reports, and codebases. Recent works have proposed methods to improve LLMs'
long context capabilities by extending context windows and more sophisticated
memory mechanisms. However, comprehensive benchmarks tailored for evaluating
long context understanding are lacking. In this paper, we introduce LongBench,
the first bilingual, multi-task benchmark for long context understanding,
enabling a more rigorous evaluation of long context understanding. LongBench
comprises 21 datasets across 6 task categories in both English and Chinese,
with an average length of 6,711 words (English) and 13,386 characters
(Chinese). These tasks cover key long-text application areas including
single-doc QA, multi-doc QA, summarization, few-shot learning, synthetic tasks,
and code completion. All datasets in LongBench are standardized into a unified
format, allowing for effortless automatic evaluation of LLMs. Upon
comprehensive evaluation of 8 LLMs on LongBench, we find that: (1) The commercial
model (GPT-3.5-Turbo-16k) outperforms the open-sourced models, but still
struggles on longer contexts. (2) Scaled position embeddings and fine-tuning on
longer sequences lead to substantial improvement in long context understanding.
(3) Context compression techniques such as retrieval bring improvement for
models with weak long-context ability, but the performance still lags behind
models that have strong long context understanding capability. The code and
datasets are available at https://github.com/THUDM/LongBench.
"
TransPrompt v2: A Transferable Prompting Framework for Cross-task Text  Classification,Jianing Wang,http://arxiv.org/pdf/2308.15010v1.pdf,2023-08-29,['cs.cl'],2308.15010v1.pdf,"  Text classification is one of the most imperative tasks in natural language
processing (NLP). Recent advances with pre-trained language models (PLMs) have
shown remarkable success on this task. However, the satisfying results obtained
by PLMs heavily depend on the large amounts of task-specific labeled data,
which may not be feasible in many application scenarios due to data access and
privacy constraints. The recently-proposed prompt-based fine-tuning paradigm
improves the performance of PLMs for few-shot text classification with
task-specific templates. Yet, it is unclear how the prompting knowledge can be
transferred across tasks, for the purpose of mutual reinforcement. We propose
TransPrompt v2, a novel transferable prompting framework for few-shot learning
across similar or distant text classification tasks. For learning across
similar tasks, we employ a multi-task meta-knowledge acquisition (MMA)
procedure to train a meta-learner that captures the cross-task transferable
knowledge. For learning across distant tasks, we further inject the task type
descriptions into the prompt, and capture the intra-type and inter-type prompt
embeddings among multiple distant tasks. Additionally, two de-biasing
techniques are further designed to make the trained meta-learner more
task-agnostic and unbiased towards any task. After that, the meta-learner can
be adapted to each specific task with better parameter initialization.
Extensive experiments show that TransPrompt v2 outperforms single-task and
cross-task strong baselines over multiple NLP tasks and datasets. We further
show that the meta-learner can effectively improve the performance of PLMs on
previously unseen tasks. In addition, TransPrompt v2 also outperforms strong
fine-tuning baselines when learning with full training sets.
"
AskIt: Unified Programming Interface for Programming with Large Language  Models,Katsumi Okuda,http://arxiv.org/pdf/2308.15645v1.pdf,2023-08-29,"['cs.pl', 'cs.ai', 'cs.se']",2308.15645v1.pdf,"  In the evolving landscape of software development, Large Language Models
(LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating
adeptness across numerous tasks, from text summarization to code generation.
While these abilities open up novel avenues in software design and crafting,
their incorporation presents substantial challenges. Developers grapple with
decisions surrounding the direct embedding of LLMs within applications versus
employing them for code generation. Moreover, effective prompt design becomes a
critical concern, given the necessity of data extraction from natural language
outputs. To address these intricacies, this paper introduces AskIt, a
domain-specific language (DSL) specifically designed for LLMs. AskIt simplifies
LLM integration, offering type-guided output control, template-based function
definitions, and a unified interface that diminishes the distinction between
LLM-based code generation and application integration. Furthermore, through
Programming by Example (PBE), AskIt harnesses the power of few-shot learning at
the programming language level. Our evaluations underscore AskIt's potency.
Across 50 tasks, AskIt generated concise prompts for the given tasks, achieving
a 16.14% reduction in prompt length relative to benchmarks. Additionally, by
enabling the transition from direct LLM application usage to function
generation, AskIt achieved significant speedups, as observed in our GSM8K
benchmark experiments. Through these advancements, AskIt streamlines the
integration of LLMs in software development, offering a more efficient,
versatile approach for leveraging emergent abilities. The implementations of
AskIt in TypeScript and Python are available at
https://github.com/katsumiok/ts-askit and https://github.com/katsumiok/pyaskit,
respectively.
"
Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation  with Meta-Learning,Yiming Zhang,http://arxiv.org/pdf/2308.16466v3.pdf,2023-08-31,['cs.cv'],2308.16466v3.pdf,"  While the Segment Anything Model (SAM) excels in semantic segmentation for
general-purpose images, its performance significantly deteriorates when applied
to medical images, primarily attributable to insufficient representation of
medical images in its training dataset. Nonetheless, gathering comprehensive
datasets and training models that are universally applicable is particularly
challenging due to the long-tail problem common in medical images. To address
this gap, here we present a Self-Sampling Meta SAM (SSM-SAM) framework for
few-shot medical image segmentation. Our innovation lies in the design of three
key modules: 1) An online fast gradient descent optimizer, further optimized by
a meta-learner, which ensures swift and robust adaptation to new tasks. 2) A
Self-Sampling module designed to provide well-aligned visual prompts for
improved attention allocation; and 3) A robust attention-based decoder
specifically designed for medical few-shot learning to capture the relationships
between different slices. Extensive experiments on a popular abdominal CT
dataset and an MRI dataset demonstrate that the proposed method achieves
significant improvements over state-of-the-art methods in few-shot
segmentation, with average improvements of 10.21% and 1.80% in terms of DSC,
respectively. In conclusion, we present a novel approach for rapid online
adaptation in interactive image segmentation, adapting to a new organ in just
0.83 minutes. Code is publicly available on GitHub upon acceptance.
"
Prompt-based Node Feature Extractor for Few-shot Learning on  Text-Attributed Graphs,Xuanwen Huang,http://arxiv.org/pdf/2309.02848v1.pdf,2023-09-06,['cs.si'],2309.02848v1.pdf,"  Text-attributed Graphs (TAGs) are commonly found in the real world, such as
social networks and citation networks, and consist of nodes represented by
textual descriptions. Currently, mainstream machine learning methods on TAGs
involve a two-stage modeling approach: (1) unsupervised node feature extraction
with pre-trained language models (PLMs); and (2) supervised learning using
Graph Neural Networks (GNNs). However, we observe that these representations,
which have undergone large-scale pre-training, do not significantly improve
performance with a limited amount of training samples. The main issue is that
existing methods have not effectively integrated information from the graph and
downstream tasks simultaneously. In this paper, we propose a novel framework
called G-Prompt, which combines a graph adapter and task-specific prompts to
extract node features. First, G-Prompt introduces a learnable GNN layer
(\emph{i.e.,} adaptor) at the end of PLMs, which is fine-tuned to better
capture the masked tokens considering graph neighborhood information. After the
adapter is trained, G-Prompt incorporates task-specific prompts to obtain
\emph{interpretable} node representations for the downstream task. Our
experiment results demonstrate that our proposed method outperforms current
state-of-the-art (SOTA) methods on few-shot node classification. More
importantly, in zero-shot settings, the G-Prompt embeddings can not only
provide better task interpretability than vanilla PLMs but also achieve
comparable performance with fully-supervised baselines.
"
Cross-Image Context Matters for Bongard Problems,Nikhil Raghuraman,http://arxiv.org/pdf/2309.03468v1.pdf,2023-09-07,"['cs.cv', 'cs.ai', 'cs.lg']",2309.03468v1.pdf,"  Current machine learning methods struggle to solve Bongard problems, which
are a type of IQ test that requires deriving an abstract ""concept"" from a set
of positive and negative ""support"" images, and then classifying whether or not
a new query image depicts the key concept. On Bongard-HOI, a benchmark for
natural-image Bongard problems, existing methods have only reached 66% accuracy
(where chance is 50%). Low accuracy is often attributed to neural nets' lack of
ability to find human-like symbolic rules. In this work, we point out that many
existing methods are forfeiting accuracy due to a much simpler problem: they do
not incorporate information contained in the support set as a whole, and rely
instead on information extracted from individual supports. This is a critical
issue, because unlike in few-shot learning tasks concerning object
classification, the ""key concept"" in a typical Bongard problem can only be
distinguished using multiple positives and multiple negatives. We explore a
variety of simple methods to take this cross-image context into account, and
demonstrate substantial gains over prior methods, leading to new
state-of-the-art performance on Bongard-LOGO (75.3%) and Bongard-HOI (72.45%)
and strong performance on the original Bongard problem set (60.84%).
"
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning,Zhengxiang Shi,http://arxiv.org/pdf/2309.05173v2.pdf,2023-09-11,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2309.05173v2.pdf,"  Prompt tuning (PT), where a small amount of trainable soft (continuous)
prompt vectors is affixed to the input of language models (LM), has shown
promising results across various tasks and models for parameter-efficient
fine-tuning (PEFT). PT stands out from other PEFT approaches because it
maintains competitive performance with fewer trainable parameters and does not
drastically scale up its parameters as the model size expands. However, PT
introduces additional soft prompt tokens, leading to longer input sequences,
which significantly impacts training and inference time and memory usage due to
the Transformer's quadratic complexity. This is particularly concerning for Large
Language Models (LLMs) that face heavy daily querying. To address this issue,
we propose Decomposed Prompt Tuning (DePT), which decomposes the soft prompt
into a shorter soft prompt and a pair of low-rank matrices that are then
optimised with two different learning rates. This allows DePT to achieve better
performance while saving over 20% memory and time costs compared to vanilla PT
and its variants, without changing trainable parameter sizes. Through extensive
experiments on 23 natural language processing (NLP) and vision-language (VL)
tasks, we demonstrate that DePT outperforms state-of-the-art PEFT approaches,
including the full fine-tuning baseline in some scenarios. Additionally, we
empirically show that DePT grows more efficient as the model size increases.
Our further study reveals that DePT integrates seamlessly with
parameter-efficient transfer learning in the few-shot learning setting and
highlights its adaptability to various model architectures and sizes.
"
Zero-shot Learning with Minimum Instruction to Extract Social  Determinants and Family History from Clinical Notes using GPT Model,Neel Bhate,http://arxiv.org/pdf/2309.05475v2.pdf,2023-09-11,['cs.cl'],2309.05475v2.pdf,"  Demographics, Social determinants of health, and family history documented in
the unstructured text within the electronic health records are increasingly
being studied to understand how this information can be utilized with the
structured data to improve healthcare outcomes. After the GPT models were
released, many studies have applied GPT models to extract this information from
the narrative clinical notes. Different from the existing work, our research
focuses on investigating zero-shot learning for extracting this information
jointly while providing minimal information to the GPT model. We utilize
de-identified real-world clinical notes annotated for demographics, various
social determinants, and family history information. Given that the GPT model
might provide text different from the text in the original data, we explore two
sets of evaluation metrics, including the traditional NER evaluation metrics
and semantic similarity evaluation metrics, to completely understand the
performance. Our results show that the GPT-3.5 method achieved an average of
0.975 F1 on demographics extraction, 0.615 F1 on social determinants
extraction, and 0.722 F1 on family history extraction. We believe these results
can be further improved through model fine-tuning or few-shot learning.
Through the case studies, we also identified the limitations of the GPT models,
which need to be addressed in future research.
"
GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection,Yufei Li,http://arxiv.org/pdf/2309.05953v1.pdf,2023-09-12,"['cs.lg', 'cs.ir']",2309.05953v1.pdf,"  Logs play a crucial role in system monitoring and debugging by recording
valuable system information, including events and states. Although various
methods have been proposed to detect anomalies in log sequences, they often
overlook the significance of considering relations among system components,
such as services and users, which can be identified from log contents.
Understanding these relations is vital for detecting anomalies and their
underlying causes. To address this issue, we introduce GLAD, a Graph-based Log
Anomaly Detection framework designed to detect relational anomalies in system
logs. GLAD incorporates log semantics, relational patterns, and sequential
patterns into a unified framework for anomaly detection. Specifically, GLAD
first introduces a field extraction module that utilizes prompt-based few-shot
learning to identify essential fields from log contents. Then GLAD constructs
dynamic log graphs for sliding windows by interconnecting extracted fields and
log events parsed from the log parser. These graphs represent events and fields
as nodes and their relations as edges. Subsequently, GLAD utilizes a
temporal-attentive graph edge anomaly detection model for identifying anomalous
relations in these dynamic log graphs. This model employs a Graph Neural
Network (GNN)-based encoder enhanced with transformers to capture content,
structural and temporal features. We evaluate our proposed method on three
datasets, and the results demonstrate the effectiveness of GLAD in detecting
anomalies indicated by varying relational patterns.
"
Using Large Language Model to Solve and Explain Physics Word Problems  Approaching Human Level,Jingzhe Ding,http://arxiv.org/pdf/2309.08182v2.pdf,2023-09-15,"['cs.cl', 'cs.ai', 'i.2.7']",2309.08182v2.pdf,"  Our work demonstrates that large language model (LLM) pre-trained on texts
can not only solve pure math word problems, but also physics word problems,
whose solution requires calculation and inference based on prior physical
knowledge. We collect and annotate the first physics word problem
dataset-PhysQA, which contains over 1000 junior high school physics word
problems (covering Kinematics, Mass&Density, Mechanics, Heat, Electricity).
Then we use OpenAI's GPT-3.5 to generate the answers to these problems and find
that GPT-3.5 could automatically solve 49.3% of the problems through zero-shot
learning and 73.2% through few-shot learning. This result demonstrates that by
using similar problems and their answers as prompt, LLM could solve elementary
physics word problems approaching human-level performance. In addition to
solving problems, GPT-3.5 can also summarize the knowledge or topics covered by
the problems, provide relevant explanations, and generate new physics word
problems based on the input. Our work is the first research to focus on the
automatic solving, explanation, and generation of physics word problems across
various types and scenarios, and we achieve an acceptable and state-of-the-art
accuracy. This underscores the potential of LLMs for further applications in
secondary education.
"
SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient  Channels,Henry Hengyuan Zhao,http://arxiv.org/pdf/2309.08513v2.pdf,2023-09-15,"['cs.cv', 'cs.ai']",2309.08513v2.pdf,"  Pre-trained vision transformers have strong representation benefits to
various downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT)
methods have been proposed, and their experiments demonstrate that tuning only
1% of extra parameters could surpass full fine-tuning in low-data resource
scenarios. However, these methods overlook the task-specific information when
fine-tuning diverse downstream tasks. In this paper, we propose a simple yet
effective method called ""Salient Channel Tuning"" (SCT) to leverage the
task-specific information by forwarding the task images through the model to
select a subset of channels in a feature map, which enables us to tune only 1/8
of the channels, leading to significantly lower parameter costs. Our method outperforms
full fine-tuning on 18 out of 19 tasks in the VTAB-1K benchmark while adding only
0.11M parameters of the ViT-B, which is 780$\times$ fewer than its full
fine-tuning counterpart. Furthermore, experiments on domain generalization and
few-shot learning surpass other PEFT methods with lower parameter costs,
demonstrating our proposed tuning technique's strong capability and
effectiveness in the low-data regime.
"
nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance,Yunxiang Li,http://arxiv.org/pdf/2309.16967v2.pdf,2023-09-29,"['cs.cv', 'eess.iv']",2309.16967v2.pdf,"  The recent developments of foundation models in computer vision, especially
the Segment Anything Model (SAM), allow scalable and domain-agnostic image
segmentation to serve as a general-purpose segmentation tool. In parallel, the
field of medical image segmentation has benefited significantly from
specialized neural networks like the nnUNet, which is trained on
domain-specific datasets and can automatically configure the network to tailor
to specific segmentation challenges. To combine the advantages of foundation
models and domain-specific models, we present nnSAM, which synergistically
integrates the SAM model with the nnUNet model to achieve more accurate and
robust medical image segmentation. The nnSAM model leverages the powerful and
robust feature extraction capabilities of SAM, while harnessing the automatic
configuration capabilities of nnUNet to promote dataset-tailored learning. Our
comprehensive evaluation of nnSAM model on different sizes of training samples
shows that it allows few-shot learning, which is highly relevant for medical
image segmentation where high-quality, annotated data can be scarce and costly
to obtain. By melding the strengths of both its predecessors, nnSAM positions
itself as a potential new benchmark in medical image segmentation, offering a
tool that combines broad applicability with specialized efficiency. The code is
available at https://github.com/Kent0n-Li/Medical-Image-Segmentation.
"
An evaluation of GPT models for phenotype concept recognition,Tudor Groza,http://arxiv.org/pdf/2309.17169v1.pdf,2023-09-29,"['cs.cl', 'cs.ai']",2309.17169v1.pdf,"  Objective: Clinical deep phenotyping plays a critical role in both the
diagnosis of patients with rare disorders as well as in building care
coordination plans. The process relies on modelling and curating patient
profiles using ontology concepts, usually from the Human Phenotype Ontology.
Machine learning methods have been widely adopted to support this phenotype
concept recognition task. With the significant shift in the use of large
language models (LLMs) for most NLP tasks, herewithin, we examine the
performance of the latest Generative Pre-trained Transformer (GPT) models
underpinning ChatGPT in clinical deep phenotyping. Materials and Methods: The
experimental setup of the study included seven prompts of various levels of
specificity, two GPT models (gpt-3.5 and gpt-4.0) and an established gold
standard for phenotype recognition. Results: Our results show that, currently,
these models have not yet achieved state-of-the-art performance. The best run,
using few-shot learning, achieved a 0.41 F1 score, compared to a 0.62 F1 score
achieved by the current best-in-class tool. Conclusion: The non-deterministic
nature of the outcomes and the lack of concordance between different runs using
the same prompt and input makes the use of these LLMs in clinical settings
problematic.
"
RA-DIT: Retrieval-Augmented Dual Instruction Tuning,Xi Victoria Lin,http://arxiv.org/pdf/2310.01352v3.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01352v3.pdf,"  Retrieval-augmented language models (RALMs) improve performance by accessing
long-tail and up-to-date knowledge from external data stores, but are
challenging to build. Existing approaches require either expensive
retrieval-specific modifications to LM pre-training or use post-hoc integration
of the data store that leads to suboptimal performance. We introduce
Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning
methodology that provides a third option by retrofitting any LLM with retrieval
capabilities. Our approach operates in two distinct fine-tuning steps: (1) one
updates a pre-trained LM to better use retrieved information, while (2) the
other updates the retriever to return more relevant results, as preferred by
the LM. By fine-tuning over tasks that require both knowledge utilization and
contextual awareness, we demonstrate that each stage yields significant
performance improvements, and using both leads to additional gains. Our best
model, RA-DIT 65B, achieves state-of-the-art performance across a range of
knowledge-intensive zero- and few-shot learning benchmarks, significantly
outperforming existing in-context RALM approaches by up to +8.9% in 0-shot
setting and +1.4% in 5-shot setting on average.
"
UniPredict: Large Language Models are Universal Tabular Predictors,Ruiyu Wang,http://arxiv.org/pdf/2310.03266v1.pdf,2023-10-05,['cs.lg'],2310.03266v1.pdf,"  Tabular data prediction is a fundamental machine learning task for many
applications. Existing methods predominantly employ discriminative modeling and
operate under the assumption of a fixed target column, necessitating
re-training for every new predictive task. Inspired by the generative power of
large language models (LLMs), this paper exploits the idea of building
universal tabular data predictors based on generative modeling, namely
UniPredict. Here, we show that an LLM can be scaled up to extensive tabular datasets,
acquiring the capability of comprehending diverse tabular inputs and predicting
target variables by following the input instructions. Specifically, we train a
single LLM on an aggregation of 169 tabular datasets with diverse targets and
compare its performance against baselines that are trained on each dataset
separately. We observe this versatile UniPredict model demonstrates an
advantage over other models, ranging from 5.4% to 13.4%, when compared with the
best tree-boosting baseline and the best neural network baseline, respectively.
We further test UniPredict in few-shot learning settings on another 62 tabular
datasets. Our method achieves strong performance in quickly adapting to new
tasks, where our method outperforms XGBoost by over 100% on the low-resource setup
and shows a significant margin over all baselines. We envision that UniPredict
sheds light on developing a universal tabular data prediction system that
learns from data at scale and serves a wide range of prediction tasks.
"
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios  via Prompt Compression,Huiqiang Jiang,http://arxiv.org/pdf/2310.06839v1.pdf,2023-10-10,"['cs.cl', 'cs.lg']",2310.06839v1.pdf,"  In long context scenarios, large language models (LLMs) face three main
challenges: higher computational/financial cost, longer latency, and inferior
performance. Some studies reveal that the performance of LLMs depends on both
the density and the position of the key information (question relevant) in the
input prompt. Inspired by these findings, we propose LongLLMLingua for prompt
compression towards improving LLMs' perception of the key information to
simultaneously address the three challenges. We conduct evaluation on a wide
range of long context scenarios including single-/multi-document QA, few-shot
learning, summarization, synthetic tasks, and code completion. The experimental
results show that prompts compressed by LongLLMLingua can deliver higher performance
with much less cost. The latency of the end-to-end system is also reduced. For
example, on NaturalQuestions benchmark, LongLLMLingua gains a performance boost
of up to 17.1% over the original prompt with ~4x fewer tokens as input to
GPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000
samples from the LongBench and ZeroScrolls benchmark, respectively.
Additionally, when compressing prompts of ~10k tokens at a compression rate of
2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Our
code is available at https://aka.ms/LLMLingua.
"
Empower Text-Attributed Graphs Learning with Large Language Models  (LLMs),Jianxiang Yu,http://arxiv.org/pdf/2310.09872v1.pdf,2023-10-15,['cs.lg'],2310.09872v1.pdf,"  Text-attributed graphs have recently garnered significant attention due to
their wide range of applications in web domains. Existing methodologies employ
word embedding models for acquiring text representations as node features,
which are subsequently fed into Graph Neural Networks (GNNs) for training.
Recently, the advent of Large Language Models (LLMs) has introduced their
powerful capabilities in information retrieval and text generation, which can
greatly enhance the text attributes of graph data. Furthermore, the acquisition
and labeling of extensive datasets are both costly and time-consuming
endeavors. Consequently, few-shot learning has emerged as a crucial problem in
the context of graph learning tasks. In order to tackle this challenge, we
propose a lightweight paradigm called ENG, which adopts a plug-and-play
approach to empower text-attributed graphs through node generation using LLMs.
Specifically, we utilize LLMs to extract semantic information from the labels
and generate samples that belong to these categories as exemplars.
Subsequently, we employ an edge predictor to capture the structural information
inherent in the raw dataset and integrate the newly generated samples into the
original graph. This approach harnesses LLMs for enhancing class-level
information and seamlessly introduces labeled nodes and edges without modifying
the raw dataset, thereby facilitating the node classification task in few-shot
scenarios. Extensive experiments demonstrate the outstanding performance of our
proposed paradigm, particularly in low-shot scenarios. For instance, in the
1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement over
the baseline model.
"
In-Context Learning with Iterative Demonstration Selection,Chengwei Qin,http://arxiv.org/pdf/2310.09881v2.pdf,2023-10-15,"['cs.cl', 'cs.ai']",2310.09881v2.pdf,"  Spurred by advancements in scale, large language models (LLMs) have
demonstrated strong few-shot learning ability via in-context learning (ICL).
However, the performance of ICL has been shown to be highly sensitive to the
selection of few-shot demonstrations. Selecting the most suitable examples as
context remains an ongoing challenge and an open problem. Existing literature
has highlighted the importance of selecting examples that are diverse or
semantically similar to the test sample while ignoring the fact that the
optimal selection dimension, i.e., diversity or similarity, is task-specific.
Leveraging the merits of both dimensions, we propose Iterative Demonstration
Selection (IDS). Using zero-shot chain-of-thought reasoning (Zero-shot-CoT),
IDS iteratively selects examples that are diverse but still strongly correlated
with the test sample as ICL demonstrations. Specifically, IDS applies
Zero-shot-CoT to the test sample before demonstration selection. The output
reasoning path is then used to choose demonstrations that are prepended to the
test sample for inference. The generated answer is accompanied by its
corresponding reasoning path for extracting a new set of demonstrations in the
next iteration. After several iterations, IDS adopts majority voting to obtain
the final result. Through extensive experiments on tasks including commonsense
reasoning, question answering, topic classification, and sentiment analysis, we
demonstrate that IDS can consistently outperform existing ICL demonstration
selection methods.
"
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64  Languages,Chiyu Zhang,http://arxiv.org/pdf/2310.14557v1.pdf,2023-10-23,['cs.cl'],2310.14557v1.pdf,"  Instruction tuned large language models (LLMs), such as ChatGPT, demonstrate
remarkable performance in a wide range of tasks. Despite numerous recent
studies that examine the performance of instruction-tuned LLMs on various NLP
benchmarks, there remains a lack of comprehensive investigation into their
ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning
embedded within social and interactive contexts. This deficiency arises partly
from SM not being adequately represented in any of the existing benchmarks. To
address this gap, we present SPARROW, an extensive multilingual benchmark
specifically designed for SM understanding. SPARROW comprises 169 datasets
covering 13 task types across six primary categories (e.g., anti-social
language detection, emotion recognition). SPARROW datasets encompass 64
different languages originating from 12 language families representing 16
writing scripts. We evaluate the performance of various multilingual pretrained
language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT)
on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our
comprehensive analysis reveals that existing open-source instruction tuned LLMs
still struggle to understand SM across various languages, performing close to a
random baseline in some cases. We also find that although ChatGPT outperforms
many LLMs, it still falls behind task-specific finetuned models with a gap of
12.19 in SPARROW score. Our benchmark is available at:
https://github.com/UBC-NLP/SPARROW
"
PAC-tuning:Fine-tuning Pretrained Language Models with PAC-driven  Perturbed Gradient Descent,Guangliang Liu,http://arxiv.org/pdf/2310.17588v1.pdf,2023-10-26,"['cs.lg', 'cs.cl']",2310.17588v1.pdf,"  Fine-tuning pretrained language models (PLMs) for downstream tasks is a
large-scale optimization problem, in which the choice of the training algorithm
critically determines how well the trained model can generalize to unseen test
data, especially in the context of few-shot learning. To achieve good
generalization performance and avoid overfitting, techniques such as data
augmentation and pruning are often applied. However, adding these
regularizations necessitates heavy tuning of the hyperparameters of
optimization algorithms, such as the popular Adam optimizer. In this paper, we
propose a two-stage fine-tuning method, PAC-tuning, to address this
optimization challenge. First, based on PAC-Bayes training, PAC-tuning directly
minimizes the PAC-Bayes generalization bound to learn proper parameter
distribution. Second, PAC-tuning modifies the gradient by injecting noise with
the variance learned in the first stage into the model parameters during
training, resulting in a variant of perturbed gradient descent (PGD). In the
past, the few-shot scenario posed difficulties for PAC-Bayes training because
the PAC-Bayes bound, when applied to large models with limited training data,
might not be stringent. Our experimental results across 5 GLUE benchmark tasks
demonstrate that PAC-tuning successfully handles the challenges of fine-tuning
tasks and outperforms strong baseline methods by a visible margin, further
confirming the potential to apply PAC training to any other settings where the
Adam optimizer is currently used for training.
"
Unleashing the Power of Pre-trained Language Models for Offline  Reinforcement Learning,Ruizhe Shi,http://arxiv.org/pdf/2310.20587v3.pdf,2023-10-31,['cs.lg'],2310.20587v3.pdf,"  Offline reinforcement learning (RL) aims to find a near-optimal policy using
pre-collected datasets. In real-world scenarios, data collection could be
costly and risky; therefore, offline RL becomes particularly challenging when
the in-domain data is limited. Given recent advances in Large Language Models
(LLMs) and their few-shot learning prowess, this paper introduces
$\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a
general framework based on Decision Transformers to effectively use pre-trained
Language Models (LMs) for offline RL. Our framework highlights four crucial
components: (1) Initializing Decision Transformers with sequentially
pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to
full-weight fine-tuning, to combine the pre-trained knowledge from LMs and
in-domain knowledge effectively, (3) using the non-linear MLP transformation
instead of linear projections, to generate embeddings, and (4) integrating an
auxiliary language prediction loss during fine-tuning to stabilize the LMs and
retain their original abilities on languages. Empirical results indicate
$\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks
and closes the gap between value-based offline RL methods and decision
transformers in dense-reward tasks. In particular, our method demonstrates
superior performance in scenarios with limited data samples. Our project
website is $\href{https://lamo2023.github.io}{\text{this https URL}}$.
"
On Task-personalized Multimodal Few-shot Learning for Visually-rich  Document Entity Retrieval,Jiayi Chen,http://arxiv.org/pdf/2311.00693v1.pdf,2023-11-01,['cs.ai'],2311.00693v1.pdf,"  Visually-rich document entity retrieval (VDER), which extracts key
information (e.g. date, address) from document images like invoices and
receipts, has become an important topic in industrial NLP applications. The
emergence of new document types at a constant pace, each with its unique entity
types, presents a unique challenge: many documents contain unseen entity types
that occur only a couple of times. Addressing this challenge requires models to
have the ability of learning entities in a few-shot manner. However, prior
works for Few-shot VDER mainly address the problem at the document level with a
predefined global entity space, which doesn't account for the entity-level
few-shot scenario: target entity types are locally personalized by each task
and entity occurrences vary significantly among documents. To address this
unexplored scenario, this paper studies a novel entity-level few-shot VDER
task. The challenges lie in the uniqueness of the label space for each task and
the increased complexity of out-of-distribution (OOD) contents. To tackle this
novel task, we present a task-aware meta-learning based framework, with a
central focus on achieving effective task personalization that distinguishes
between in-task and out-of-task distribution. Specifically, we adopt a
hierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to
achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost
future research in the field of entity-level few-shot VDER. Experimental
results demonstrate our approaches significantly improve the robustness of
popular meta-learning baselines.
"
A Survey of Large Language Models for Autonomous Driving,Zhenjie Yang,http://arxiv.org/pdf/2311.01043v1.pdf,2023-11-02,['cs.ai'],2311.01043v1.pdf,"  Autonomous driving technology, a catalyst for revolutionizing transportation
and urban mobility, is trending toward a transition from rule-based systems to
data-driven strategies. Traditional module-based systems are constrained by
cumulative errors among cascaded modules and inflexible pre-set rules. In
contrast, end-to-end autonomous driving systems have the potential to avoid
error accumulation due to their fully data-driven training process, although
they often lack transparency due to their ``black box"" nature, complicating the
validation and traceability of decisions. Recently, large language models
(LLMs) have demonstrated abilities including understanding context, logical
reasoning, and generating answers. A natural thought is to utilize these
abilities to empower autonomous driving. By combining LLMs with foundation
vision models, it could open the door to open-world understanding, reasoning,
and few-shot learning, which current autonomous driving systems are lacking. In
this paper, we systematically review a research line about \textit{Large
Language Models for Autonomous Driving (LLM4AD)}. This study evaluates the
current state of technological advancements, distinctly outlining the principal
challenges and prospective directions for the field. For the convenience of
researchers in academia and industry, we provide real-time updates on the
latest advances in the field as well as relevant open-source resources via the
designated link: https://github.com/Thinklab-SJTU/Awesome-LLM4AD.
"
Robust Fine-Tuning of Vision-Language Models for Domain Generalization,Kevin Vogt-Lowell,http://arxiv.org/pdf/2311.02236v1.pdf,2023-11-03,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2311.02236v1.pdf,"  Transfer learning enables the sharing of common knowledge among models for a
variety of downstream tasks, but traditional methods suffer in limited training
data settings and produce narrow models incapable of effectively generalizing
under distribution shifts. Foundation models have recently demonstrated
impressive zero-shot inference capabilities and robustness under distribution
shifts. However, zero-shot evaluation for these models has been predominantly
confined to benchmarks with simple distribution shifts, limiting our
understanding of their effectiveness under the more realistic shifts found in
practice. Moreover, common fine-tuning methods for these models have yet to be
evaluated against vision models in few-shot scenarios where training data is
limited. To address these gaps, we present a new recipe for few-shot
fine-tuning of the popular vision-language foundation model CLIP and evaluate
its performance on challenging benchmark datasets with realistic distribution
shifts from the WILDS collection. Our experimentation demonstrates that, while
zero-shot CLIP fails to match performance of trained vision models on more
complex benchmarks, few-shot CLIP fine-tuning outperforms its vision-only
counterparts in terms of in-distribution and out-of-distribution accuracy at
all levels of training data availability. This provides a strong incentive for
adoption of foundation models within few-shot learning applications operating
with real-world data. Code is available at
https://github.com/mit-ll/robust-vision-language-finetuning
"
"A Minimalist Dataset for Systematic Generalization of Perception,  Syntax, and Semantics",Qing Li,http://arxiv.org/pdf/2103.01403v3.pdf,2021-03-02,"['cs.lg', 'cs.ai', 'cs.cv']",2103.01403v3.pdf,"  Inspired by humans' exceptional ability to master arithmetic and generalize
to new problems, we present a new dataset, Handwritten arithmetic with INTegers
(HINT), to examine machines' capability of learning generalizable concepts at
three levels: perception, syntax, and semantics. In HINT, machines are tasked
with learning how concepts are perceived from raw signals such as images (i.e.,
perception), how multiple concepts are structurally combined to form a valid
expression (i.e., syntax), and how concepts are realized to afford various
reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing
on systematic generalization, we carefully design a five-fold test set to
evaluate both the interpolation and the extrapolation of learned concepts
w.r.t. the three levels. Further, we design a few-shot learning split to
determine whether or not models can rapidly learn new concepts and generalize
them to more complex scenarios. To comprehend existing models' limitations, we
undertake extensive experiments with various sequence-to-sequence models,
including RNNs, Transformers, and GPT-3 (with the chain of thought prompting).
The results indicate that current models struggle to extrapolate to long-range
syntactic dependency and semantics. Models exhibit a considerable gap toward
human-level generalization when evaluated with new concepts in a few-shot
setting. Moreover, we discover that it is infeasible to solve HINT by merely
scaling up the dataset and the model size; this strategy contributes little to
the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3
experiments, the chain of thought prompting exhibits impressive results and
significantly boosts the test accuracy. We believe the HINT dataset and the
experimental findings are of great interest to the learning community on
systematic generalization.
"
Lesion2Vec: Deep Metric Learning for Few-Shot Multiple Lesions  Recognition in Wireless Capsule Endoscopy Video,Sodiq Adewole,http://arxiv.org/pdf/2101.04240v2.pdf,2021-01-11,['cs.cv'],2101.04240v2.pdf,"  Effective and rapid detection of lesions in the Gastrointestinal tract is
critical to gastroenterologists' response to some life-threatening diseases.
Wireless Capsule Endoscopy (WCE) has revolutionized the traditional endoscopy
procedure by allowing gastroenterologists to visualize the entire GI tract
non-invasively. Once the tiny capsule is swallowed, it sequentially captures
images of the GI tract at about 2 to 6 frames per second (fps). A single video
can last up to 8 hours, producing between 30,000 and 100,000 images. Automating
the detection of frames containing specific lesions in WCE video would relieve
gastroenterologists of the arduous task of reviewing the entire video before
making a diagnosis. While the WCE produces a large volume of images, only about 5\%
of the frames contain lesions that aid the diagnosis process. Convolutional
Neural Network (CNN) based models have been very successful in various image
classification tasks. However, they suffer from excessive parameters, are
sample-inefficient, and rely on very large amounts of training data. Deploying a CNN
classifier for the lesion detection task would require repeated fine-tuning to
generalize to any unforeseen category. In this paper, we propose a metric-based
learning framework followed by few-shot lesion recognition in WCE data.
Metric-based learning is a meta-learning framework designed to establish
similarity or dissimilarity between concepts while few-shot learning (FSL) aims
to identify new concepts from only a small number of examples. We train a
feature extractor to learn a representation for different small bowel lesions
using metric-based learning. At the testing stage, the category of an unseen
sample is predicted from only a few support examples, thereby allowing the
model to generalize to a new category that has never been seen before. We
demonstrated the efficacy of this method on real patient capsule endoscopy
data.
"
Program Synthesis with Large Language Models,Jacob Austin,http://arxiv.org/pdf/2108.07732v1.pdf,2021-08-16,"['cs.pl', 'cs.lg']",2108.07732v1.pdf,"  This paper explores the limits of the current generation of large language
models for program synthesis in general purpose programming languages. We
evaluate a collection of such models (with between 244M and 137B parameters) on
two new benchmarks, MBPP and MathQA-Python, in both the few-shot and
fine-tuning regimes. Our benchmarks are designed to measure the ability of
these models to synthesize short Python programs from natural language
descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974
programming tasks, designed to be solvable by entry-level programmers. The
MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914
problems that evaluate the ability of the models to synthesize code from more
complex text. On both datasets, we find that synthesis performance scales
log-linearly with model size. Our largest models, even without finetuning on a
code dataset, can synthesize solutions to 59.6 percent of the problems from
MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a
held-out portion of the dataset improves performance by about 10 percentage
points across most model sizes. On the MathQA-Python dataset, the largest
fine-tuned model achieves 83.8 percent accuracy. Going further, we study the
model's ability to engage in dialog about code, incorporating human feedback to
improve its solutions. We find that natural language feedback from a human
halves the error rate compared to the model's initial prediction. Additionally,
we conduct an error analysis to shed light on where these models fall short and
what types of programs are most difficult to generate. Finally, we explore the
semantic grounding of these models by fine-tuning them to predict the results
of program execution. We find that even our best models are generally unable to
predict the output of a program given a specific input.
"
Unsupervised Law Article Mining based on Deep Pre-Trained Language  Representation Models with Application to the Italian Civil Code,Andrea Tagarelli,http://arxiv.org/pdf/2112.03033v1.pdf,2021-12-02,"['cs.cl', 'cs.ai', 'cs.ir', 'physics.soc-ph']",2112.03033v1.pdf,"  Modeling law search and retrieval as prediction problems has recently emerged
as a predominant approach in law intelligence. Focusing on the law article
retrieval task, we present a deep learning framework named LamBERTa, which is
designed for civil-law codes, and specifically trained on the Italian civil
code. To our knowledge, this is the first study proposing an advanced approach
to law article prediction for the Italian legal system based on a BERT
(Bidirectional Encoder Representations from Transformers) learning framework,
which has recently attracted increased attention among deep learning
approaches, showing outstanding effectiveness in several natural language
processing and learning tasks. We define LamBERTa models by fine-tuning an
Italian pre-trained BERT on the Italian civil code or its portions, for law
article retrieval as a classification task. One key aspect of our LamBERTa
framework is that we conceived it to address an extreme classification
scenario, which is characterized by a high number of classes, the few-shot
learning problem, and the lack of test query benchmarks for Italian legal
prediction tasks. To solve such issues, we define different methods for the
unsupervised labeling of the law articles, which can in principle be applied to
any law article code system. We provide insights into the explainability and
interpretability of our LamBERTa models, and we present an extensive
experimental analysis over query sets of different types, for single-label as
well as multi-label evaluation tasks. Empirical evidence has shown the
effectiveness of LamBERTa, and also its superiority against widely used
deep-learning text classifiers and a few-shot learner conceived for an
attribute-aware prediction task.
"
"Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A  Large-Scale Generative Language Model",Shaden Smith,http://arxiv.org/pdf/2201.11990v3.pdf,2022-01-28,['cs.cl'],2201.11990v3.pdf,"  Pretrained general-purpose language models can achieve state-of-the-art
accuracies in various natural language processing domains by adapting to
downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of
their success, the size of these models has increased rapidly, requiring
high-performance hardware, software, and algorithmic techniques to enable
training such large models. As a result of a joint effort between Microsoft
and NVIDIA, we present details on the training of the largest monolithic
transformer based language model, Megatron-Turing NLG 530B (MT-NLG), with 530
billion parameters. In this paper, we first focus on the infrastructure as well
as the 3D parallelism methodology used to train this model using DeepSpeed and
Megatron. Next, we detail the training process, the design of our training
corpus, and our data curation techniques, which we believe are a key ingredient
in the success of the model. Finally, we discuss various evaluation results, as
well as other interesting observations and new properties exhibited by MT-NLG.
We demonstrate that MT-NLG achieves superior zero-, one-, and few-shot learning
accuracies on several NLP benchmarks and establishes new state-of-the-art
results. We believe that our contributions will help further the development of
large-scale training infrastructures, large-scale language models, and natural
language generation.
"
Data Distributional Properties Drive Emergent In-Context Learning in  Transformers,Stephanie C. Y. Chan,http://arxiv.org/pdf/2205.05055v6.pdf,2022-04-22,"['cs.lg', 'cs.ai', 'cs.cl']",2205.05055v6.pdf,"  Large transformer-based models are able to perform in-context few-shot
learning, without being explicitly trained for it. This observation raises the
question: what aspects of the training regime lead to this emergent behavior?
Here, we show that this behavior is driven by the distributions of the training
data itself. In-context learning emerges when the training data exhibits
particular distributional properties such as burstiness (items appear in
clusters rather than being uniformly distributed over time) and having large
numbers of rarely occurring classes. In-context learning also emerges more
strongly when item meanings or interpretations are dynamic rather than fixed.
These properties are exemplified by natural language, but are also inherent to
naturalistic data in a wide range of other domains. They also depart
significantly from the uniform, i.i.d. training distributions typically used
for standard supervised learning. In our initial experiments, we found that
in-context learning traded off against more conventional weight-based learning,
and models were unable to achieve both simultaneously. However, our later
experiments uncovered that the two modes of learning could co-exist in a single
model when it was trained on data following a skewed Zipfian distribution --
another common property of naturalistic data, including language. In further
experiments, we found that naturalistic data distributions were only able to
elicit in-context learning in transformers, and not in recurrent models. In
sum, our findings indicate how the transformer architecture works together with
particular properties of the training data to drive the intriguing emergent
in-context learning behaviour of large language models, and how future work
might encourage both in-context and in-weights learning in domains beyond
language.
"
Large Language Models are Zero-Shot Reasoners,Takeshi Kojima,http://arxiv.org/pdf/2205.11916v4.pdf,2022-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2205.11916v4.pdf,"  Pretrained large language models (LLMs) are widely used in many sub-fields of
natural language processing (NLP) and generally known as excellent few-shot
learners with task-specific exemplars. Notably, chain of thought (CoT)
prompting, a recent technique for eliciting complex multi-step reasoning
through step-by-step answer examples, achieved state-of-the-art
performance in arithmetic and symbolic reasoning, difficult system-2 tasks
that do not follow the standard scaling laws for LLMs. While these successes
are often attributed to LLMs' ability for few-shot learning, we show that LLMs
are decent zero-shot reasoners by simply adding ""Let's think step by step""
before each answer. Experimental results demonstrate that our Zero-shot-CoT,
using the same single prompt template, significantly outperforms zero-shot LLM
performance on diverse benchmark reasoning tasks including arithmetic
(MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin
Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled
Objects), without any hand-crafted few-shot examples, e.g. increasing the
accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with
large InstructGPT model (text-davinci-002), as well as similar magnitudes of
improvements with another off-the-shelf large model, 540B parameter PaLM. The
versatility of this single prompt across very diverse reasoning tasks hints at
untapped and understudied fundamental zero-shot capabilities of LLMs,
suggesting high-level, multi-task broad cognitive capabilities may be extracted
by simple prompting. We hope our work not only serves as the minimal strongest
zero-shot baseline for the challenging reasoning benchmarks, but also
highlights the importance of carefully exploring and analyzing the enormous
zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or
few-shot exemplars.
"
Hungry Hungry Hippos: Towards Language Modeling with State Space Models,Daniel Y. Fu,http://arxiv.org/pdf/2212.14052v3.pdf,2022-12-28,"['cs.lg', 'cs.cl']",2212.14052v3.pdf,"  State space models (SSMs) have demonstrated state-of-the-art sequence
modeling performance in some modalities, but underperform attention in language
modeling. Moreover, despite scaling nearly linearly in sequence length instead
of quadratically, SSMs are still slower than Transformers due to poor hardware
utilization. In this paper, we make progress on understanding the expressivity
gap between SSMs and attention in language modeling, and on reducing the
hardware barrier between SSMs and attention. First, we use synthetic language
modeling tasks to understand the gap between SSMs and attention. We find that
existing SSMs struggle with two capabilities: recalling earlier tokens in the
sequence and comparing tokens across the sequence. To understand the impact on
language modeling, we propose a new SSM layer, H3, that is explicitly designed
for these abilities. H3 matches attention on the synthetic languages and comes
within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid
125M-parameter H3-attention model that retains two attention layers
surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to
improve the efficiency of training SSMs on modern hardware, we propose
FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on
sequences up to 8K, and introduces a novel state passing algorithm that
exploits the recurrent properties of SSMs to scale to longer sequences.
FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows
hybrid language models to generate text 2.4$\times$ faster than Transformers.
Using FlashConv, we scale hybrid H3-attention language models up to 2.7B
parameters on the Pile and find promising initial results, achieving lower
perplexity than Transformers and outperforming Transformers in zero- and
few-shot learning on a majority of tasks in the SuperGLUE benchmark.
"
CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP,Runnan Chen,http://arxiv.org/pdf/2301.04926v2.pdf,2023-01-12,['cs.cv'],2301.04926v2.pdf,"  Contrastive Language-Image Pre-training (CLIP) achieves promising results in
2D zero-shot and few-shot learning. Despite the impressive performance in 2D,
applying CLIP to help the learning in 3D scene understanding has yet to be
explored. In this paper, we make the first attempt to investigate how CLIP
knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet
effective framework that transfers CLIP knowledge from 2D image-text
pre-trained models to a 3D point cloud network. We show that the pre-trained 3D
network yields impressive performance on various downstream tasks, i.e.,
annotation-free and fine-tuning with labelled data for semantic segmentation.
Specifically, built upon CLIP, we design a Semantic-driven Cross-modal
Contrastive Learning framework that pre-trains a 3D network via semantic and
spatial-temporal consistency regularization. For the former, we first leverage
CLIP's text semantics to select the positive and negative point samples and
then employ the contrastive loss to train the 3D network. In terms of the
latter, we force the consistency between the temporally coherent point cloud
features and their corresponding image features. We conduct experiments on
SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained
network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08%
mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100%
labelled data, our method significantly outperforms other self-supervised
methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we
demonstrate the generalizability for handling cross-domain datasets. Code is
publicly available https://github.com/runnanchen/CLIP2Scene.
"
An Empirical Evaluation of Using Large Language Models for Automated  Unit Test Generation,Max Schäfer,http://arxiv.org/pdf/2302.06527v3.pdf,2023-02-13,"['cs.se', 'cs.ai']",2302.06527v3.pdf,"  Unit tests play a key role in ensuring the correctness of software. However,
manually creating unit tests is a laborious task, motivating the need for
automation. Large Language Models (LLMs) have recently been applied to this
problem, utilizing additional training or few-shot learning on examples of
existing tests. This paper presents a large-scale empirical evaluation on the
effectiveness of LLMs for automated unit test generation without additional
training or manual effort, providing the LLM with the signature and
implementation of the function under test, along with usage examples extracted
from documentation. We also attempt to repair failed generated tests by
re-prompting the model with the failing test and error message. We implement
our approach in TestPilot, a test generation tool for JavaScript that
automatically generates unit tests for all API functions in an npm package. We
evaluate TestPilot using OpenAI's gpt3.5-turbo LLM on 25 npm packages with a
total of 1,684 API functions. The generated tests achieve a median statement
coverage of 70.2% and branch coverage of 52.8%, significantly improving on
Nessie, a recent feedback-directed JavaScript test generation technique, which
achieves only 51.3% statement coverage and 25.6% branch coverage. We also find
that 92.8% of TestPilot's generated tests have no more than 50% similarity with
existing tests (as measured by normalized edit distance), with none of them
being exact copies. Finally, we run TestPilot with two additional LLMs,
OpenAI's older code-cushman-002 LLM and the open LLM StarCoder. Overall, we
observed similar results with the former (68.2% median statement coverage), and
somewhat worse results with the latter (54.0% median statement coverage),
suggesting that the effectiveness of the approach is influenced by the size and
training set of the LLM, but does not fundamentally depend on the specific
model.
"
On the Opportunities and Challenges of Foundation Models for Geospatial  Artificial Intelligence,Gengchen Mai,http://arxiv.org/pdf/2304.06798v1.pdf,2023-04-13,"['cs.ai', 'cs.cl', 'cs.cv', 'i.2.0; i.2.4; i.2.7; i.2.10; i.5.1']",2304.06798v1.pdf,"  Large pre-trained models, also known as foundation models (FMs), are trained
in a task-agnostic manner on large-scale data and can be adapted to a wide
range of downstream tasks by fine-tuning, few-shot, or even zero-shot learning.
Despite their successes in language and vision tasks, we have yet to see an
attempt to develop foundation models for geospatial artificial intelligence
(GeoAI). In this work, we explore the promises and challenges of developing
multimodal foundation models for GeoAI. We first investigate the potential of
many existing FMs by testing their performances on seven tasks across multiple
geospatial subdomains including Geospatial Semantics, Health Geography, Urban
Geography, and Remote Sensing. Our results indicate that on several geospatial
tasks that only involve text modality such as toponym recognition, location
description recognition, and US state-level/county-level dementia time series
forecasting, these task-agnostic LLMs can outperform task-specific
fully-supervised models in a zero-shot or few-shot learning setting. However,
on other geospatial tasks, especially tasks that involve multiple data
modalities (e.g., POI-based urban function classification, street view
image-based urban noise intensity classification, and remote sensing image
scene classification), existing foundation models still underperform
task-specific models. Based on these observations, we propose that one of the
major challenges of developing a FM for GeoAI is to address the multimodality
nature of geospatial tasks. After discussing the distinct challenges of each
geospatial data modality, we suggest the possibility of a multimodal foundation
model which can reason over various types of geospatial data through geospatial
alignments. We conclude this paper by discussing the unique risks and
challenges of developing such a model for GeoAI.
"
Learning to detect an animal sound from five examples,Inês Nolasco,http://arxiv.org/pdf/2305.13210v1.pdf,2023-05-22,"['cs.sd', 'eess.as', 'q-bio.qm']",2305.13210v1.pdf,"  Automatic detection and classification of animal sounds has many applications
in biodiversity monitoring and animal behaviour. In the past twenty years, the
volume of digitised wildlife sound available has massively increased, and
automatic classification through deep learning now shows strong results.
However, bioacoustics is not a single task but a vast range of small-scale
tasks (such as individual ID, call type, emotional indication) with wide
variety in data characteristics, and most bioacoustic tasks do not come with
strongly-labelled training data. The standard paradigm of supervised learning,
focussed on a single large-scale dataset and/or a generic pre-trained
algorithm, is insufficient. In this work we recast bioacoustic sound event
detection within the AI framework of few-shot learning. We adapt this framework
to sound event detection, such that a system can be given the annotated
start/end times of as few as 5 events, and can then detect events in
long-duration audio -- even when the sound category was not known at the time
of algorithm training. We introduce a collection of open datasets designed to
strongly test a system's ability to perform few-shot sound event detections,
and we present the results of a public contest to address the task. We show
that prototypical networks are a strong-performing method, when enhanced with
adaptations for general characteristics of animal sounds. We demonstrate that
widely-varying sound event durations are an important factor in performance, as
well as non-stationarity, i.e. gradual changes in conditions throughout the
duration of a recording. For fine-grained bioacoustic recognition tasks without
massive annotated training data, our results demonstrate that few-shot sound
event detection is a powerful new method, strongly outperforming traditional
signal-processing detection methods in the fully automated scenario.
"
The Rise of AI Language Pathologists: Exploring Two-level Prompt  Learning for Few-shot Weakly-supervised Whole Slide Image Classification,Linhao Qu,http://arxiv.org/pdf/2305.17891v1.pdf,2023-05-29,['cs.cv'],2305.17891v1.pdf,"  This paper introduces the novel concept of few-shot weakly supervised
learning for pathology Whole Slide Image (WSI) classification, denoted as FSWC.
A solution is proposed based on prompt learning and the utilization of a large
language model, GPT-4. Since a WSI is too large and needs to be divided into
patches for processing, WSI classification is commonly approached as a Multiple
Instance Learning (MIL) problem. In this context, each WSI is considered a bag,
and the obtained patches are treated as instances. The objective of FSWC is to
classify both bags and instances with only a limited number of labeled bags.
Unlike conventional few-shot learning problems, FSWC poses additional
challenges due to its weak bag labels within the MIL framework. Drawing
inspiration from the recent achievements of vision-language models (V-L models)
in downstream few-shot classification tasks, we propose a two-level prompt
learning MIL framework tailored for pathology, incorporating language prior
knowledge. Specifically, we leverage CLIP to extract instance features for each
patch, and introduce a prompt-guided pooling strategy to aggregate these
instance features into a bag feature. Subsequently, we employ a small number of
labeled bags to facilitate few-shot prompt learning based on the bag features.
Our approach incorporates the utilization of GPT-4 in a question-and-answer
mode to obtain language prior knowledge at both the instance and bag levels,
which are then integrated into the instance and bag level language prompts.
Additionally, a learnable component of the language prompts is trained using
the available few-shot labeled data. We conduct extensive experiments on three
real WSI datasets encompassing breast cancer, lung cancer, and cervical cancer,
demonstrating the notable performance of the proposed method in bag and
instance classification. All code will be made publicly accessible.
"
Effective Test Generation Using Pre-trained Large Language Models and  Mutation Testing,Arghavan Moradi Dakhel,http://arxiv.org/pdf/2308.16557v1.pdf,2023-08-31,['cs.se'],2308.16557v1.pdf,"  One of the critical phases in software development is software testing.
Testing helps with identifying potential bugs and reducing maintenance costs.
The goal of automated test generation tools is to ease the development of tests
by suggesting efficient bug-revealing tests. Recently, researchers have
leveraged Large Language Models (LLMs) of code to generate unit tests. While
the code coverage of generated tests was usually assessed, the literature has
acknowledged that the coverage is weakly correlated with the efficiency of
tests in bug detection. To address this limitation, in this paper, we
introduce MuTAP for improving the effectiveness of test cases generated by LLMs
in terms of revealing bugs by leveraging mutation testing. Our goal is achieved
by augmenting prompts with surviving mutants, as those mutants highlight the
limitations of test cases in detecting bugs. MuTAP is capable of generating
effective test cases in the absence of natural language descriptions of the
Program Under Test (PUTs). We employ different LLMs within MuTAP and evaluate
their performance on different benchmarks. Our results show that our proposed
method is able to detect up to 28% more faulty human-written code snippets.
Among these, 17% remained undetected by both the current state-of-the-art fully
automated test generation tool (i.e., Pynguin) and zero-shot/few-shot learning
approaches on LLMs. Furthermore, MuTAP achieves a Mutation Score (MS) of 93.57%
on synthetic buggy code, outperforming all other approaches in our evaluation.
Our findings suggest that although LLMs can serve as a useful tool to generate
test cases, they require specific post-processing steps to enhance the
effectiveness of the generated test cases which may suffer from syntactic or
functional errors and may be ineffective in detecting certain types of bugs and
testing corner cases of PUTs.
"
LLM4SGG: Large Language Model for Weakly Supervised Scene Graph  Generation,Kibum Kim,http://arxiv.org/pdf/2310.10404v4.pdf,2023-10-16,['cs.cv'],2310.10404v4.pdf,"  Weakly-Supervised Scene Graph Generation (WSSGG) research has recently
emerged as an alternative to the fully-supervised approach that heavily relies
on costly annotations. In this regard, studies on WSSGG have utilized image
captions to obtain unlocalized triplets while primarily focusing on grounding
the unlocalized triplets over image regions. However, they have overlooked the
two issues involved in the triplet formation process from the captions: 1)
Semantic over-simplification issue arises when extracting triplets from
captions, where fine-grained predicates in captions are undesirably converted
into coarse-grained predicates, resulting in a long-tailed predicate
distribution, and 2) Low-density scene graph issue arises when aligning the
triplets in the caption with entity/predicate classes of interest, where many
triplets are discarded and not used in training, leading to insufficient
supervision. To tackle the two issues, we propose a new approach, i.e., Large
Language Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two
issues by leveraging the LLM's in-depth understanding of language and reasoning
ability during the extraction of triplets from captions and alignment of
entity/predicate classes with target data. To further engage the LLM in these
processes, we adopt the idea of Chain-of-Thought and the in-context few-shot
learning strategy. To validate the effectiveness of LLM4SGG, we conduct
extensive experiments on Visual Genome and GQA datasets, showing significant
improvements in both Recall@K and mean Recall@K compared to the
state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is
data-efficient, enabling effective model training with a small amount of
training images.
"
Language Models are Few-Shot Learners,Tom B. Brown,http://arxiv.org/pdf/2005.14165v4.pdf,2020-05-28,['cs.cl'],2005.14165v4.pdf,"  Recent work has demonstrated substantial gains on many NLP tasks and
benchmarks by pre-training on a large corpus of text followed by fine-tuning on
a specific task. While typically task-agnostic in architecture, this method
still requires task-specific fine-tuning datasets of thousands or tens of
thousands of examples. By contrast, humans can generally perform a new language
task from only a few examples or from simple instructions - something which
current NLP systems still largely struggle to do. Here we show that scaling up
language models greatly improves task-agnostic, few-shot performance, sometimes
even reaching competitiveness with prior state-of-the-art fine-tuning
approaches. Specifically, we train GPT-3, an autoregressive language model with
175 billion parameters, 10x more than any previous non-sparse language model,
and test its performance in the few-shot setting. For all tasks, GPT-3 is
applied without any gradient updates or fine-tuning, with tasks and few-shot
demonstrations specified purely via text interaction with the model. GPT-3
achieves strong performance on many NLP datasets, including translation,
question-answering, and cloze tasks, as well as several tasks that require
on-the-fly reasoning or domain adaptation, such as unscrambling words, using a
novel word in a sentence, or performing 3-digit arithmetic. At the same time,
we also identify some datasets where GPT-3's few-shot learning still struggles,
as well as some datasets where GPT-3 faces methodological issues related to
training on large web corpora. Finally, we find that GPT-3 can generate samples
of news articles which human evaluators have difficulty distinguishing from
articles written by humans. We discuss broader societal impacts of this finding
and of GPT-3 in general.
"
MasakhaNEWS: News Topic Classification for African languages,David Ifeoluwa Adelani,http://arxiv.org/pdf/2304.09972v2.pdf,2023-04-19,['cs.cl'],2304.09972v2.pdf,"  African languages are severely under-represented in NLP research due to lack
of datasets covering several NLP tasks. While there are individual language
specific datasets that are being expanded to different tasks, only a handful of
NLP tasks (e.g. named entity recognition and machine translation) have
standardized benchmark datasets covering several geographical and
typologically-diverse African languages. In this paper, we develop MasakhaNEWS
-- a new benchmark dataset for news topic classification covering 16 languages
widely spoken in Africa. We provide an evaluation of baseline models by
training classical machine learning models and fine-tuning several language
models. Furthermore, we explore several alternatives to full fine-tuning of
language models that are better suited for zero-shot and few-shot learning such
as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern
exploiting training (PET), prompting language models (like ChatGPT), and
prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).
Our evaluation in the zero-shot setting shows the potential of prompting ChatGPT
for news topic classification in low-resource African languages, achieving an
average performance of 70 F1 points without leveraging additional supervision
like MAD-X. In the few-shot setting, we show that with as few as 10 examples per
label, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance of
full supervised training (92.6 F1 points) leveraging the PET approach.
"
Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A  Study on Performance and Controllability in Prompt-Based Methods,Mengsay Loem,http://arxiv.org/pdf/2305.18156v1.pdf,2023-05-29,"['cs.cl', 'cs.ai']",2305.18156v1.pdf,"  Large-scale pre-trained language models such as GPT-3 have shown remarkable
performance across various natural language processing tasks. However, applying
prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks
and their controllability remains underexplored. Controllability in GEC is
crucial for real-world applications, particularly in educational settings,
where the ability to tailor feedback according to learner levels and specific
error types can significantly enhance the learning process. This paper
investigates the performance and controllability of prompt-based methods with
GPT-3 for GEC tasks using zero-shot and few-shot settings. We explore the impact
of task instructions and examples on GPT-3's output, focusing on controlling
aspects such as minimal edits, fluency edits, and learner levels. Our findings
demonstrate that GPT-3 could effectively perform GEC tasks, outperforming
existing supervised and unsupervised approaches. We also showed that GPT-3
could achieve controllability when appropriate task instructions and examples
are given.
"
Causal Intervention-based Prompt Debiasing for Event Argument Extraction,Jiaju Lin,http://arxiv.org/pdf/2210.01561v1.pdf,2022-10-04,"['cs.cl', 'cs.ai']",2210.01561v1.pdf,"  Prompt-based methods have become increasingly popular among information
extraction tasks, especially in low-data scenarios. By reformulating a fine-tuning
task as a pre-training objective, prompt-based methods effectively resolve the
data scarcity problem. However, previous research has seldom investigated
the discrepancy among different prompt formulating strategies. In this work, we
compare two kinds of prompts, name-based prompts and ontology-based prompts, and
reveal how ontology-based prompt methods exceed their counterpart in zero-shot
event argument extraction (EAE). Furthermore, we analyse the potential risk in
ontology-based prompts via a causal view and propose a debiasing method based on
causal intervention. Experiments on two benchmarks demonstrate that, when modified
by our debiasing method, the baseline model becomes both more effective and robust, with
significant improvement in the resistance to adversarial attacks.
"
When Prompt-based Incremental Learning Does Not Meet Strong Pretraining,Yu-Ming Tang,http://arxiv.org/pdf/2308.10445v1.pdf,2023-08-21,['cs.cv'],2308.10445v1.pdf,"  Incremental learning aims to overcome catastrophic forgetting when learning
deep networks from sequential tasks. With impressive learning efficiency and
performance, prompt-based methods adopt a fixed backbone to sequential tasks by
learning task-specific prompts. However, existing prompt-based methods heavily
rely on strong pretraining (typically trained on ImageNet-21k), and we find
that their models could be trapped if the potential gap between the pretraining
task and unknown future tasks is large. In this work, we develop a learnable
Adaptive Prompt Generator (APG). The key is to unify the prompt retrieval and
prompt learning processes into a learnable prompt generator. Hence, the whole
prompting process can be optimized to reduce the negative effects of the gap
between tasks effectively. To make our APG avoid learning ineffective
knowledge, we maintain a knowledge pool to regularize APG with the feature
distribution of each class. Extensive experiments show that our method
significantly outperforms advanced methods in exemplar-free incremental
learning without (strong) pretraining. Besides, under strong pretraining, our
method also has comparable performance to existing prompt-based models, showing
that our method can still benefit from pretraining. Codes can be found at
https://github.com/TOM-tym/APG
"
Zero-shot Domain Adaptation for Neural Machine Translation with  Retrieved Phrase-level Prompts,Zewei Sun,http://arxiv.org/pdf/2209.11409v1.pdf,2022-09-23,['cs.cl'],2209.11409v1.pdf,"  Domain adaptation is an important challenge for neural machine translation.
However, the traditional fine-tuning solution requires multiple rounds of extra
training and incurs a high cost. In this paper, we propose a non-tuning paradigm,
resolving domain adaptation with a prompt-based method. Specifically, we
construct a bilingual phrase-level database and retrieve relevant pairs from it
as a prompt for the input sentences. By utilizing Retrieved Phrase-level
Prompts (RePP), we effectively boost the translation quality. Experiments show
that our method improves domain-specific machine translation by 6.2 BLEU
points and improves translation constraint accuracy by 11.5% without
additional training.
"
NSP-BERT: A Prompt-based Few-Shot Learner Through an Original  Pre-training Task--Next Sentence Prediction,Yi Sun,http://arxiv.org/pdf/2109.03564v2.pdf,2021-09-08,"['cs.cl', 'cs.ai']",2109.03564v2.pdf,"  Using prompts to utilize language models to perform various downstream tasks,
also known as prompt-based learning or prompt-learning, has lately gained
significant success in comparison to the pre-train and fine-tune paradigm.
Nonetheless, virtually all prompt-based methods are token-level, meaning they
all utilize GPT's left-to-right language model or BERT's masked language model
to perform cloze-style tasks. In this paper, we attempt to accomplish several
NLP tasks in the zero-shot scenario using a BERT original pre-training task
abandoned by RoBERTa and other models--Next Sentence Prediction (NSP). Unlike
token-level techniques, our sentence-level prompt-based method NSP-BERT does
not need to fix the length of the prompt or the position to be predicted,
allowing it to handle tasks such as entity linking with ease. Based on the
characteristics of NSP-BERT, we offer several quick building templates for
various downstream tasks. We suggest a two-stage prompt method for word sense
disambiguation tasks in particular. Our strategies for mapping the labels
significantly enhance the model's performance on sentence pair tasks. On the
FewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most of
these tasks and comes close to the few-shot methods.
"
Introducing Language Guidance in Prompt-based Continual Learning,Muhammad Gul Zain Ali Khan,http://arxiv.org/pdf/2308.15827v1.pdf,2023-08-30,['cs.cv'],2308.15827v1.pdf,"  Continual Learning aims to learn a single model on a sequence of tasks
without having access to data from previous tasks. The biggest challenge in the
domain still remains catastrophic forgetting: a loss in performance on seen
classes of earlier tasks. Some existing methods rely on an expensive replay
buffer to store a chunk of data from previous tasks. This, while promising,
becomes expensive when the number of tasks becomes large or data can not be
stored for privacy reasons. As an alternative, prompt-based methods have been
proposed that store the task information in a learnable prompt pool. This
prompt pool instructs a frozen image encoder on how to solve each task. While
the model faces a disjoint set of classes in each task in this setting, we
argue that these classes can be encoded to the same embedding space of a
pre-trained language encoder. In this work, we propose Language Guidance for
Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods.
LGCL is model agnostic and introduces language guidance at the task level in
the prompt pool and at the class level on the output feature of the vision
encoder. We show with extensive experimentation that LGCL consistently improves
the performance of prompt-based continual learning methods to set a new
state of the art. LGCL achieves these performance improvements without needing
any additional learnable parameters.
"
Enable Language Models to Implicitly Learn Self-Improvement From Data,Ziqi Wang,http://arxiv.org/pdf/2310.00898v2.pdf,2023-10-02,['cs.cl'],2310.00898v2.pdf,"  Large Language Models (LLMs) have demonstrated remarkable capabilities in
open-ended text generation tasks. However, the inherent open-ended nature of
these tasks implies that there is always room for improvement in the quality of
model responses. To address this challenge, various approaches have been
proposed to enhance the performance of LLMs. There has been a growing focus on
enabling LLMs to self-improve their response quality, thereby reducing the
reliance on extensive human annotation efforts for collecting diverse and
high-quality training data. Recently, prompting-based methods have been widely
explored among self-improvement methods owing to their effectiveness,
efficiency, and convenience. However, those methods usually require explicitly
and thoroughly written rubrics as inputs to LLMs. It is expensive and
challenging to manually derive and provide all necessary rubrics with a
real-world complex goal for improvement (e.g., being more helpful and less
harmful). To this end, we propose an ImPlicit Self-ImprovemenT (PIT) framework
that implicitly learns the improvement goal from human preference data. PIT
only requires preference data that are used to train reward models without
extra human efforts. Specifically, we reformulate the training objective of
reinforcement learning from human feedback (RLHF) -- instead of maximizing
response quality for a given input, we maximize the quality gap of the response
conditioned on a reference response. In this way, PIT is implicitly trained
with the improvement goal of better aligning with human preferences.
Experiments on two real-world datasets and one synthetic dataset show that our
method significantly outperforms prompting-based methods.
"
MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal  Emotion Recognition,Jinming Zhao,http://arxiv.org/pdf/2111.00865v1.pdf,2021-10-27,"['cs.cv', 'eess.iv']",2111.00865v1.pdf,"  Multimodal emotion recognition study is hindered by the lack of labelled
corpora in terms of scale and diversity, due to the high annotation cost and
label ambiguity. In this paper, we propose a pre-training model
\textbf{MEmoBERT} for multimodal emotion recognition, which learns multimodal
joint representations through self-supervised learning from large-scale
unlabeled video data that come in sheer volume. Furthermore, unlike the
conventional ""pre-train, finetune"" paradigm, we propose a prompt-based method
that reformulates the downstream emotion classification task as a masked text
prediction one, bringing the downstream task closer to the pre-training.
Extensive experiments on two benchmark datasets, IEMOCAP and MSP-IMPROV, show
that our proposed MEmoBERT significantly enhances emotion recognition
performance.
"
PSG: Prompt-based Sequence Generation for Acronym Extraction,Bin Li,http://arxiv.org/pdf/2111.14301v2.pdf,2021-11-29,"['cs.cl', 'cs.ai']",2111.14301v2.pdf,"  Acronym extraction aims to find acronyms (i.e., short-forms) and their
meanings (i.e., long-forms) from the documents, which is important for
scientific document understanding (SDU@AAAI-22) tasks. Previous works are
devoted to modeling this task as a paragraph-level sequence labeling problem.
However, it lacks effective use of external knowledge, especially when the
datasets are in a low-resource setting. Recently, prompt-based methods with
large pre-trained language models have been shown to significantly enhance
performance on low-resource downstream tasks. In this paper, we propose a
Prompt-based Sequence Generation (PSG) method for the acronym extraction task.
Specifically, we design a template for prompting the extracted acronym texts
with auto-regression. A position extraction algorithm is designed for
extracting the position of the generated answers. The results on the acronym
extraction of Vietnamese and Persian in a low-resource setting show that the
proposed method outperforms all other competitive state-of-the-art (SOTA)
methods.
"
Chemical Identification and Indexing in PubMed Articles via BERT and  Text-to-Text Approaches,Virginia Adams,http://arxiv.org/pdf/2111.15622v1.pdf,2021-11-30,['cs.cl'],2111.15622v1.pdf,"  The Biocreative VII Track-2 challenge consists of named entity recognition,
entity-linking (or entity-normalization), and topic indexing tasks -- with
entities and topics limited to chemicals for this challenge. Named entity
recognition is a well-established problem and we achieve our best performance
with BERT-based BioMegatron models. We extend our BERT-based approach to the
entity linking task. After the second stage of pretraining BioBERT with a
metric-learning loss strategy called self-alignment pretraining (SAP), we link
entities based on the cosine similarity between their SAP-BioBERT word
embeddings. Despite the success of our named entity recognition experiments, we
find the chemical indexing task generally more challenging.
  In addition to conventional NER methods, we attempt both named entity
recognition and entity linking with a novel text-to-text or ""prompt"" based
method that uses generative language models such as T5 and GPT. We achieve
encouraging results with this new approach.
"
AdaPrompt: Adaptive Model Training for Prompt-based NLP,Yulong Chen,http://arxiv.org/pdf/2202.04824v2.pdf,2022-02-10,['cs.cl'],2202.04824v2.pdf,"  Prompt-based learning, with its capability to tackle zero-shot and few-shot
NLP tasks, has gained much attention in the community. The main idea is to bridge
the gap between NLP downstream tasks and language modeling (LM), by mapping
these tasks into natural language prompts, which are then filled by pre-trained
language models (PLMs). However, for prompt learning, there are still two
salient gaps between NLP tasks and pretraining. First, prompt information is
not necessarily sufficiently present during LM pretraining. Second,
task-specific data are not necessarily well represented during pretraining. We
address these two issues by proposing AdaPrompt, adaptively retrieving external
data for continual pretraining of PLMs by making use of both task and prompt
characteristics. In addition, we make use of knowledge in Natural Language
Inference models for deriving adaptive verbalizers. Experimental results on
five NLP benchmarks show that AdaPrompt can improve over standard PLMs in
few-shot settings. In addition, in zero-shot settings, our method outperforms
standard prompt-based methods by up to 26.35\% relative error reduction.
"
Prompting to Distill: Boosting Data-Free Knowledge Distillation via  Reinforced Prompt,Xinyin Ma,http://arxiv.org/pdf/2205.07523v1.pdf,2022-05-16,['cs.cl'],2205.07523v1.pdf,"  Data-free knowledge distillation (DFKD) conducts knowledge distillation by
eliminating the dependence on original training data, and has recently achieved
impressive results in accelerating pre-trained language models. At the heart of
DFKD is to reconstruct a synthetic dataset by inverting the parameters of the
uncompressed model. Prior DFKD approaches, however, have largely relied on
hand-crafted priors of the target data distribution for the reconstruction,
which can be inevitably biased and often incompetent to capture the intrinsic
distributions. To address this problem, we propose a prompt-based method,
termed as PromptDFD, that allows us to take advantage of learned language
priors, which effectively harmonizes the synthetic sentences to be semantically
and grammatically correct. Specifically, PromptDFD leverages a pre-trained
generative model to provide language priors and introduces a reinforced topic
prompter to control data synthesis, making the generated samples thematically
relevant and semantically plausible, and thus friendly to downstream tasks. As
shown in our experiments, the proposed method substantially improves the
synthesis quality and achieves considerable improvements on distillation
performance. In some cases, PromptDFD even gives rise to results on par with
those from the data-driven knowledge distillation with access to the original
training data.
"
"Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender  Bias",Yarden Tal,http://arxiv.org/pdf/2206.09860v1.pdf,2022-06-20,['cs.cl'],2206.09860v1.pdf,"  The size of pretrained models is increasing, and so is their performance on a
variety of NLP tasks. However, as their memorization capacity grows, they might
pick up more social biases. In this work, we examine the connection between
model size and its gender bias (specifically, occupational gender bias). We
measure bias in three masked language model families (RoBERTa, DeBERTa, and T5)
in two setups: directly using a prompt-based method, and using a downstream task
(Winogender). We find on the one hand that larger models receive higher bias
scores on the former task, but when evaluated on the latter, they make fewer
gender errors. To examine these potentially conflicting results, we carefully
investigate the behavior of the different models on Winogender. We find that
while larger models outperform smaller ones, the probability that their
mistakes are caused by gender bias is higher. Moreover, we find that the
proportion of stereotypical errors compared to anti-stereotypical ones grows
with the model size. Our findings highlight the potential risks that can arise
from increasing model size.
"
PromptAttack: Prompt-based Attack for Language Models via Gradient  Search,Yundi Shi,http://arxiv.org/pdf/2209.01882v1.pdf,2022-09-05,"['cs.cl', 'cs.ai', 'cs.cr']",2209.01882v1.pdf,"  As the pre-trained language models (PLMs) continue to grow, so do the
hardware and data requirements for fine-tuning PLMs. Therefore, researchers
have come up with a lighter method called \textit{Prompt Learning}. However,
during our investigations, we observe that prompt learning methods are
vulnerable and can easily be attacked by maliciously constructed prompts,
resulting in classification errors and serious security problems for PLMs.
Most of the current research ignores the security issue of prompt-based
methods. Therefore, in this paper, we propose a malicious prompt template
construction method (\textbf{PromptAttack}) to probe the security performance
of PLMs. Several unfriendly template construction approaches are investigated
to guide the model to misclassify the task. Extensive experiments on three
datasets and three PLMs prove the effectiveness of our proposed approach
PromptAttack. We also conduct experiments to verify that our method is
applicable in few-shot scenarios.
"
ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational  Finance Question Answering,Zhiyu Chen,http://arxiv.org/pdf/2210.03849v1.pdf,2022-10-07,['cs.cl'],2210.03849v1.pdf,"  With the recent advance in large pre-trained language models, researchers
have achieved record performances in NLP tasks that mostly focus on language
pattern matching. The community is experiencing the shift of the challenge from
how to model language to the imitation of complex reasoning abilities like
human beings. In this work, we investigate the application domain of finance
that involves real-world, complex numerical reasoning. We propose a new
large-scale dataset, ConvFinQA, aiming to study the chain of numerical
reasoning in conversational question answering. Our dataset poses a great
challenge in modeling long-range, complex numerical reasoning paths in
real-world conversations. We conduct comprehensive experiments and analyses
with both the neural symbolic methods and the prompting-based methods, to
provide insights into the reasoning mechanisms of these two divisions. We
believe our new dataset should serve as a valuable resource to push forward the
exploration of real-world, complex reasoning tasks as the next research focus.
Our dataset and code are publicly available at
https://github.com/czyssrs/ConvFinQA.
"
Can Language Models Be Specific? How?,Jie Huang,http://arxiv.org/pdf/2210.05159v2.pdf,2022-10-11,"['cs.cl', 'cs.ai']",2210.05159v2.pdf,"  ""He is a person"", ""Paris is located on the earth"". Both statements are
correct but meaningless - due to lack of specificity. In this paper, we propose
to measure how specific the language of pre-trained language models (PLMs) is.
To achieve this, we introduce a novel approach to build a benchmark for
specificity testing by forming masked token prediction tasks with prompts. For
instance, given ""Toronto is located in [MASK]."", we want to test whether a more
specific answer will be better filled in by PLMs, e.g., Ontario instead of
Canada. From our evaluations, we show that existing PLMs have only a slight
preference for more specific answers. We identify underlying factors affecting
the specificity and design two prompt-based methods to improve the specificity.
Results show that the specificity of the models can be improved by the proposed
methods without additional training. We hope this work can raise awareness of
the notion of specificity in language models and encourage the research
community to further explore this important but understudied problem.
"
Multilingual Relation Classification via Efficient and Effective  Prompting,Yuxuan Chen,http://arxiv.org/pdf/2210.13838v2.pdf,2022-10-25,"['cs.cl', 'cs.lg']",2210.13838v2.pdf,"  Prompting pre-trained language models has achieved impressive performance on
various NLP tasks, especially in low data regimes. Despite the success of
prompting in monolingual settings, applying prompt-based methods in
multilingual scenarios has been limited to a narrow set of tasks, due to the
high cost of handcrafting multilingual prompts. In this paper, we present the
first work on prompt-based multilingual relation classification (RC), by
introducing an efficient and effective method that constructs prompts from
relation triples and involves only minimal translation for the class labels. We
evaluate its performance in fully supervised, few-shot and zero-shot scenarios,
and analyze its effectiveness across 14 languages, prompt variants, and
English-task training in cross-lingual settings. We find that in both fully
supervised and few-shot scenarios, our prompt method beats competitive
baselines: fine-tuning XLM-R_EM and null prompts. It also outperforms the
random baseline by a large margin in zero-shot experiments. Our method requires
little in-language knowledge and can be used as a strong baseline for similar
multilingual classification tasks.
"
Steps towards prompt-based creation of virtual worlds,Jasmine Roberts,http://arxiv.org/pdf/2211.05875v1.pdf,2022-11-10,"['cs.hc', 'cs.ai', 'cs.lg', 'cs.mm']",2211.05875v1.pdf,"  Large language models trained for code generation can be applied to speaking
virtual worlds into existence (creating virtual worlds). In this work we show
that prompt-based methods can both accelerate in-VR level editing and become
part of gameplay rather than just part of game development. As an
example, we present Codex VR Pong which shows non-deterministic game mechanics
using generative processes to not only create static content but also
non-trivial interactions between 3D objects. This demonstration naturally leads
to an integral discussion on how one would evaluate and benchmark experiences
created by generative models - as there are no qualitative or quantitative
metrics that apply in these scenarios. We conclude by discussing impending
challenges of AI-assisted co-creation in VR.
"
SPE: Symmetrical Prompt Enhancement for Fact Probing,Yiyuan Li,http://arxiv.org/pdf/2211.07078v1.pdf,2022-11-14,"['cs.cl', 'cs.ai', 'cs.lg']",2211.07078v1.pdf,"  Pretrained language models (PLMs) have been shown to accumulate factual
knowledge during pretraining (Petroni et al., 2019). Recent works probe PLMs
for the extent of this knowledge through prompts either in discrete or
continuous forms. However, these methods do not consider symmetry of the task:
object prediction and subject prediction. In this work, we propose Symmetrical
Prompt Enhancement (SPE), a continuous prompt-based method for factual probing
in PLMs that leverages the symmetry of the task by constructing symmetrical
prompts for subject and object prediction. Our results on a popular factual
probing dataset, LAMA, show significant improvement of SPE over previous
probing methods.
"
Interactive-Chain-Prompting: Ambiguity Resolution for Crosslingual  Conditional Generation with Interaction,Jonathan Pilault,http://arxiv.org/pdf/2301.10309v1.pdf,2023-01-24,"['cs.lg', 'cs.ai', 'cs.cl']",2301.10309v1.pdf,"  Crosslingual conditional generation (e.g., machine translation) has long
enjoyed the benefits of scaling. Nonetheless, there are still issues that scale
alone may not overcome. A source query in one language, for instance, may yield
several translation options in another language without any extra context. Only
one translation could be acceptable however, depending on the translator's
preferences and goals. Choosing the incorrect option might significantly affect
translation usefulness and quality. We propose a novel method, interactive-chain
prompting -- a series of intermediate question, answer, and generation steps
between a Translator model and a User model -- that reduces translations into a
list of subproblems addressing ambiguities and then resolving such subproblems
before producing the final text to be translated. To check ambiguity resolution
capabilities and evaluate translation quality, we create a dataset exhibiting
different linguistic phenomena which leads to ambiguities at inference for four
languages. To encourage further exploration in this direction, we release all
datasets. We note that interactive-chain prompting, using eight interactions as
exemplars, consistently surpasses prompt-based methods with direct access to
background information to resolve ambiguities.
"
Evaluating the Robustness of Discrete Prompts,Yoichi Ishibashi,http://arxiv.org/pdf/2302.05619v1.pdf,2023-02-11,"['cs.cl', 'cs.ai']",2302.05619v1.pdf,"  Discrete prompts have been used for fine-tuning Pre-trained Language Models
for diverse NLP tasks. In particular, automatic methods that generate discrete
prompts from a small set of training instances have reported superior
performance. However, a closer look at the learnt prompts reveals that they
contain noisy and counter-intuitive lexical constructs that would not be
encountered in manually-written prompts. This raises an important yet
understudied question regarding the robustness of automatically learnt discrete
prompts when used in downstream tasks. To address this question, we conduct a
systematic study of the robustness of discrete prompts by applying carefully
designed perturbations in an application using AutoPrompt and then measuring
their performance on two Natural Language Inference (NLI) datasets. Our
experimental results show that although discrete prompt-based methods
remain relatively robust against perturbations to NLI inputs, they are highly
sensitive to other types of perturbations such as shuffling and deletion of
prompt tokens. Moreover, they generalize poorly across different NLI datasets.
We hope our findings will inspire future work on robust discrete prompt
learning.
"
Stabilized In-Context Learning with Pre-trained Language Models for Few  Shot Dialogue State Tracking,Derek Chen,http://arxiv.org/pdf/2302.05932v1.pdf,2023-02-12,['cs.cl'],2302.05932v1.pdf,"  Prompt-based methods with large pre-trained language models (PLMs) have shown
impressive unaided performance across many NLP tasks. These models improve even
further with the addition of a few labeled in-context exemplars to guide output
generation. However, for more complex tasks such as dialogue state tracking
(DST), designing prompts that reliably convey the desired intent is nontrivial,
leading to unstable results. Furthermore, building in-context exemplars for
dialogue tasks is difficult because conversational contexts are long while
model input lengths are relatively short. To overcome these issues we first
adapt a meta-learning scheme to the dialogue domain which stabilizes the
ability of the model to perform well under various prompts. We additionally
design a novel training method to improve upon vanilla retrieval mechanisms to
find ideal in-context examples. Finally, we introduce a saliency model to limit
dialogue text length, allowing us to include more exemplars per query. In
effect, we are able to achieve highly competitive results for few-shot DST on
MultiWOZ.
"
Zero-Shot Information Extraction via Chatting with ChatGPT,Xiang Wei,http://arxiv.org/pdf/2302.10205v1.pdf,2023-02-20,['cs.cl'],2302.10205v1.pdf,"  Zero-shot information extraction (IE) aims to build IE systems from the
unannotated text. It is challenging due to involving little human intervention.
Challenging but worthwhile, zero-shot IE reduces the time and effort that data
labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3,
ChatGPT) show promising performance on zero-shot settings, thus inspiring us to
explore prompt-based methods. In this work, we ask whether strong IE models can
be constructed by directly prompting LLMs. Specifically, we transform the
zero-shot IE task into a multi-turn question-answering problem with a two-stage
framework (ChatIE). With the power of ChatGPT, we extensively evaluate our
framework on three IE tasks: entity-relation triple extraction, named entity
recognition, and event extraction. Empirical results on six datasets across two
languages show that ChatIE achieves impressive performance and even surpasses
some full-shot models on several datasets (e.g., NYT11-HRL). We believe that
our work could shed light on building IE models with limited resources.
"
Divide and Prompt: Chain of Thought Prompting for Text-to-SQL,Xiping Liu,http://arxiv.org/pdf/2304.11556v1.pdf,2023-04-23,"['cs.cl', 'cs.ai']",2304.11556v1.pdf,"  Chain-of-thought (CoT) prompting combined with large language models (LLMs)
have achieved encouraging results on complex reasoning tasks. Text-to-SQL is a
critical semantic parsing task that converts natural language questions into
SQL statements, involving a complex reasoning process. However, there is little
work on using CoT prompting to activate LLMs' reasoning capabilities on
Text-to-SQL tasks. In this work, we propose a new paradigm for prompting
Text-to-SQL tasks, called Divide-and-Prompt, which first divides the task into
subtasks and then approaches each subtask through CoT. We present three
prompting-based methods to enhance the Text-to-SQL ability of LLMs. Experiments
show that these prompts guide LLMs to generate Text-to-SQL with higher
execution accuracy.
"
Few-shot Event Detection: An Empirical Study and a Unified View,Yubo Ma,http://arxiv.org/pdf/2305.01901v2.pdf,2023-05-03,"['cs.cl', 'cs.ai']",2305.01901v2.pdf,"  Few-shot event detection (ED) has been widely studied, while this brings
noticeable discrepancies, e.g., various motivations, tasks, and experimental
settings, that hinder the understanding of models for future progress. This
paper presents a thorough empirical study, a unified view of ED models, and a
better unified baseline. For fair evaluation, we compare 12 representative
methods on three datasets, which are roughly grouped into prompt-based and
prototype-based models for detailed analysis. Experiments consistently
demonstrate that prompt-based methods, including ChatGPT, still significantly
trail prototype-based methods in terms of overall performance. To investigate
their superior performance, we break down their design elements along several
dimensions and build a unified framework on prototype-based methods. Under this
unified view, each prototype-based method can be viewed as a combination of different
modules from these design elements. We further combine all advantageous modules
and propose a simple yet effective baseline, which outperforms existing methods
by a large margin (e.g., 2.7% F1 gains under low-resource setting).
"
PURR: Efficiently Editing Language Model Hallucinations by Denoising  Language Model Corruptions,Anthony Chen,http://arxiv.org/pdf/2305.14908v1.pdf,2023-05-24,['cs.cl'],2305.14908v1.pdf,"  The remarkable capabilities of large language models have been accompanied by
a persistent drawback: the generation of false and unsubstantiated claims
commonly known as ""hallucinations"". To combat this issue, recent research has
introduced approaches that involve editing and attributing the outputs of
language models, particularly through prompt-based editing. However, the
inference cost and speed of using large language models for editing currently
bottleneck prompt-based methods. These bottlenecks motivate the training of
compact editors, which is challenging due to the scarcity of training data for
this purpose. To overcome these challenges, we exploit the power of large
language models to introduce corruptions (i.e., noise) into text and
subsequently fine-tune compact editors to denoise the corruptions by
incorporating relevant evidence. Our methodology is entirely unsupervised and
provides us with faux hallucinations for training in any domain. Our Petite
Unsupervised Research and Revision model, PURR, not only improves attribution
over existing editing methods based on fine-tuning and prompting, but also
achieves faster execution times by orders of magnitude.
"
Syntax-aware Hybrid prompt model for Few-shot multi-modal sentiment  analysis,Zikai Zhou,http://arxiv.org/pdf/2306.01312v2.pdf,2023-06-02,['cs.cl'],2306.01312v2.pdf,"  Multimodal Sentiment Analysis (MSA) has been a popular topic in natural
language processing, at both the sentence and aspect levels. However, most
existing approaches require large labeled datasets, which consume substantial
time and resources. It is therefore practical to explore methods for few-shot
sentiment analysis across modalities. Previous works generally operate on the
textual modality using prompt-based methods of two main types: hand-crafted
prompts and learnable prompts. Existing work on the few-shot multimodal
sentiment analysis task has utilized both methods, but separately. We further
design a hybrid pattern that combines one or more fixed hand-crafted prompts
with learnable prompts and utilizes attention mechanisms to optimize the prompt
encoder. Experiments on both sentence-level and aspect-level datasets show that
our method achieves significant improvements.
"
Scaling Sentence Embeddings with Large Language Models,Ting Jiang,http://arxiv.org/pdf/2307.16645v1.pdf,2023-07-31,['cs.cl'],2307.16645v1.pdf,"  Large language models (LLMs) have recently garnered significant interest.
With in-context learning, LLMs achieve impressive results in various natural
language tasks. However, the application of LLMs to sentence embeddings remains
an area of ongoing research. In this work, we propose an in-context
learning-based method aimed at improving sentence embeddings performance. Our
approach involves adapting the previous prompt-based representation method for
autoregressive models, constructing a demonstration set that enables LLMs to
perform in-context learning, and scaling up the LLMs to different model sizes.
Through extensive experiments, in-context learning enables LLMs to generate
high-quality sentence embeddings without any fine-tuning. It helps LLMs achieve
performance comparable to current contrastive learning methods. By scaling
model size, we find that scaling to more than tens of billions of parameters harms the
performance on semantic textual similarity (STS) tasks. However, the largest
model outperforms other counterparts and achieves the new state-of-the-art
result on transfer tasks. We also fine-tune LLMs with current contrastive
learning approach, and the 2.7B OPT model, incorporating our prompt-based
method, surpasses the performance of 4.8B ST5, achieving the new
state-of-the-art results on STS tasks. Our code is available at
https://github.com/kongds/scaling_sentemb.
"
Unified Multimodal Pre-training and Prompt-based Tuning for  Vision-Language Understanding and Generation,Tianyi Liu,http://arxiv.org/pdf/2112.05587v2.pdf,2021-12-10,"['cs.cv', 'cs.cl', 'cs.lg']",2112.05587v2.pdf,"  Most existing vision-language pre-training methods focus on understanding
tasks and use BERT-like objectives (masked language modeling and image-text
matching) during pretraining. Although they perform well in many understanding
downstream tasks, e.g., visual question answering, image-text retrieval and
visual entailment, they do not possess the ability to generate. To tackle this
problem, we propose Unified multimodal pre-training for both Vision-Language
understanding and generation (UniVL). The proposed UniVL is capable of handling
both understanding tasks and generative tasks. We augment existing pretraining
paradigms that only use random masks with causal masks, i.e., triangular masks
that mask out future tokens, such that the pre-trained models can have
autoregressive generation abilities by design. We formulate several previous
understanding tasks as a text generation task and propose to use prompt-based
method for fine-tuning on different downstream tasks. Our experiments show that
there is a trade-off between understanding tasks and generation tasks while
using the same model, and a feasible way to improve both tasks is to use more
data. Our UniVL framework attains comparable performance to recent
vision-language pre-training methods on both understanding tasks and generation
tasks. Moreover, we demonstrate that prompt-based finetuning is more
data-efficient - it outperforms discriminative methods in few-shot scenarios.
"
Learning to Transfer Prompts for Text Generation,Junyi Li,http://arxiv.org/pdf/2205.01543v2.pdf,2022-05-03,['cs.cl'],2205.01543v2.pdf,"  Pretrained language models (PLMs) have made remarkable progress in text
generation tasks via fine-tuning. However, it is challenging to fine-tune PLMs in
a data-scarce situation. Therefore, it is non-trivial to develop a general and
lightweight model that can adapt to various text generation tasks based on
PLMs. To fulfill this purpose, the recent prompt-based learning offers a
potential solution. In this paper, we improve this technique and propose a
novel prompt-based method (PTG) for text generation in a transferable setting.
First, PTG learns a set of source prompts for various source generation tasks
and then transfers these prompts as target prompts to perform target generation
tasks. To consider both task- and instance-level information, we design an
adaptive attention mechanism to derive the target prompts. For each data
instance, PTG learns a specific target prompt by attending to highly relevant
source prompts. In extensive experiments, PTG yields competitive or better
results than fine-tuning methods. We release our source prompts as an open
resource, where users can add or reuse them to improve new text generation
tasks for future research. Code and data are available at
https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation.
"
On the Robustness of Dialogue History Representation in Conversational  Question Answering: A Comprehensive Study and a New Prompt-based Method,Zorik Gekhman,http://arxiv.org/pdf/2206.14796v2.pdf,2022-06-29,"['cs.cl', 'cs.ai', 'cs.lg']",2206.14796v2.pdf,"  Most works on modeling the conversation history in Conversational Question
Answering (CQA) report a single main result on a common CQA benchmark. While
existing models show impressive results on CQA leaderboards, it remains unclear
whether they are robust to shifts in setting (sometimes to more realistic
ones), training data size (e.g. from large to small sets) and domain. In this
work, we design and conduct the first large-scale robustness study of history
modeling approaches for CQA. We find that high benchmark scores do not
necessarily translate to strong robustness, and that various methods can
perform extremely differently under different settings. Equipped with the
insights from our study, we design a novel prompt-based history modeling
approach, and demonstrate its strong robustness across various settings. Our
approach is inspired by existing methods that highlight historic answers in the
passage. However, instead of highlighting by modifying the passage token
embeddings, we add textual prompts directly in the passage text. Our approach
is simple, easy-to-plug into practically any model, and highly effective, thus
we recommend it as a starting point for future model developers. We also hope
that our study and insights will raise awareness to the importance of
robustness-focused evaluation, in addition to obtaining high leaderboard
scores, leading to better CQA systems.
"
GPTs at Factify 2022: Prompt Aided Fact-Verification,Pawan Kumar Sahu,http://arxiv.org/pdf/2206.14913v1.pdf,2022-06-29,['cs.cl'],2206.14913v1.pdf,"  One of the most pressing societal issues is the fight against false news. The
false claims, as difficult as they are to expose, create a lot of damage. To
tackle the problem, fact verification becomes crucial and thus has been a topic
of interest among diverse research communities. Using only the textual form of
data we propose our solution to the problem and achieve competitive results
with other approaches. We present our solution based on two approaches: a PLM
(pre-trained language model) based method and a prompt-based method. The
PLM-based approach uses traditional supervised learning, where the model is
trained to take 'x' as input and output the prediction 'y' as P(y|x). In
contrast, prompt-based learning designs the input to fit the model so that the
original objective can be re-framed as a (masked) language modeling problem. We
can further stimulate the rich knowledge provided by PLMs to
better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our
experiments showed that the proposed method performs better than just
fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and
a 7th position on the competition leader-board.
"
Towards Realistic Low-resource Relation Extraction: A Benchmark with  Empirical Baseline Study,Xin Xu,http://arxiv.org/pdf/2210.10678v3.pdf,2022-10-19,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2210.10678v3.pdf,"  This paper presents an empirical study to build relation extraction systems
in low-resource settings. Based upon recent pre-trained language models, we
comprehensively investigate three schemes to evaluate the performance in
low-resource settings: (i) different types of prompt-based methods with
few-shot labeled data; (ii) diverse balancing methods to address the
long-tailed distribution issue; (iii) data augmentation technologies and
self-training to generate more labeled in-domain data. We create a benchmark
with 8 relation extraction (RE) datasets covering different languages, domains
and contexts and perform extensive comparisons over the proposed schemes with
combinations. Our experiments illustrate: (i) Though prompt-based tuning is
beneficial in low-resource RE, there is still much potential for improvement,
especially in extracting relations from cross-sentence contexts with multiple
relational triples; (ii) Balancing methods are not always helpful for RE with
long-tailed distribution; (iii) Data augmentation complements existing
baselines and can bring much performance gain, while self-training may not
consistently achieve advancement to low-resource RE. Code and datasets are in
https://github.com/zjunlp/LREBench.
"
PromptFusion: Decoupling Stability and Plasticity for Continual Learning,Haoran Chen,http://arxiv.org/pdf/2303.07223v1.pdf,2023-03-13,['cs.cv'],2303.07223v1.pdf,"  Continual learning refers to the capability of continuously learning from a
stream of data. Current research mainly focuses on relieving catastrophic
forgetting, and most of their success is at the cost of limiting the
performance of newly incoming tasks. Such a trade-off is referred to as the
stability-plasticity dilemma and is a more general and challenging problem for
continual learning. However, the inherent conflict between these two concepts
makes it seemingly impossible to devise a satisfactory solution to both of them
simultaneously. Therefore, we ask, ""is it possible to divide them into two
problems to conquer independently?"" To this end, we propose a
prompt-tuning-based method termed PromptFusion to enable the decoupling of
stability and plasticity. Specifically, PromptFusion consists of a carefully
designed Stabilizer module that deals with catastrophic forgetting and a
Booster module to learn new knowledge concurrently. During training,
PromptFusion first passes an input image to the two modules separately. Then
the resulting logits are further fused with a learnable weight parameter.
Finally, a weight mask is applied to the derived logits to balance between old
and new classes. Extensive experiments show that our method achieves promising
results on popular continual learning datasets for both class-incremental and
domain-incremental settings. Especially on Split-ImageNet-R, one of the most
challenging datasets for class-incremental learning, our method exceeds
state-of-the-art prompt-based methods L2P and DualPrompt by more than 10%.
"
Progressive Visual Prompt Learning with Contrastive Feature Re-formation,Chen Xu,http://arxiv.org/pdf/2304.08386v1.pdf,2023-04-17,['cs.cv'],2304.08386v1.pdf,"  Prompt learning has been designed as an alternative to fine-tuning for
adapting Vision-language (V-L) models to the downstream tasks. Previous works
mainly focus on text prompt while visual prompt works are limited for V-L
models. Existing visual prompt methods suffer from either mediocre performance
or an unstable training process, indicating the difficulty of visual prompt
learning. In this paper, we propose a new Progressive Visual Prompt (ProVP)
structure to strengthen the interactions among prompts of different layers.
More importantly, our ProVP could effectively propagate the image embeddings to
deep layers and behave partially similar to an instance adaptive prompt method.
To alleviate generalization deterioration, we further propose a new contrastive
feature re-formation, which prevents the serious deviation of the prompted
visual feature from the fixed CLIP visual feature distribution. Combining both,
our method (ProVP-Ref) is evaluated on 11 image benchmark datasets and achieves
7/11 state-of-the-art results on both few-shot and base-to-novel settings. To
the best of our knowledge, we are the first to demonstrate the superior
performance of visual prompts in V-L models over previous prompt-based methods in
downstream tasks. This also implies that our ProVP-Ref shows the best
capability to adapt and to generalize.
"
SelfEvolve: A Code Evolution Framework via Large Language Models,Shuyang Jiang,http://arxiv.org/pdf/2306.02907v1.pdf,2023-06-05,"['cs.cl', 'cs.se']",2306.02907v1.pdf,"  Large language models (LLMs) have already revolutionized code generation,
after being pretrained on publicly available code data. However, while various
methods have been proposed to augment LLMs with retrieved knowledge and enhance
the quality of code generation, the performance of these retrieval-based
methods is limited by the strength of the retrievers used. In addition, while
LLMs show great emergent ability, they still struggle to produce the correct
code in one turn. To address these challenges, we propose a novel two-step
pipeline, called SelfEvolve, that leverages LLMs as both knowledge providers and
self-reflective programmers. Unlike retrieval-based methods, SelfEvolve obtains
the knowledge from input prompts and generates intermediate code based on the
generated knowledge. After that, SelfEvolve asks the LLM to act as an expert
programmer to perform debugging for the generated code. This is achieved by
receiving the error message from the interpreter, without requiring special
test cases for correctness verification. We evaluate SelfEvolve on three code
generation datasets, including DS-1000 for data science code, HumanEval for
software engineering code, and TransCoder for C++-to-Python translation. Our
empirical experiments show that SelfEvolve outperforms strong baselines by a
significant margin on all datasets. We also conduct exhaustive analytical
experiments to validate the effectiveness of the two stages of SelfEvolve, and
find that both are superior to other prompting-based methods. Further
scalability analysis demonstrates that SelfEvolve can be adapted to other more
advanced models, such as GPT-4, and bring consistent efficacy improvement.
"
Quantifying Language Models' Sensitivity to Spurious Features in Prompt  Design or: How I learned to start worrying about prompt formatting,Melanie Sclar,http://arxiv.org/pdf/2310.11324v1.pdf,2023-10-17,"['cs.cl', 'cs.ai', 'cs.lg']",2310.11324v1.pdf,"  As large language models (LLMs) are adopted as a fundamental component of
language technologies, it is crucial to accurately characterize their
performance. Because choices in prompt design can strongly influence model
behavior, this design process is critical in effectively using any modern
pre-trained generative language model. In this work, we focus on LLM
sensitivity to a quintessential class of meaning-preserving design choices:
prompt formatting. We find that several widely used open-source LLMs are
extremely sensitive to subtle changes in prompt formatting in few-shot
settings, with performance differences of up to 76 accuracy points when
evaluated using LLaMA-2-13B. Sensitivity remains even when increasing model
size, the number of few-shot examples, or performing instruction tuning. Our
analysis suggests that work evaluating LLMs with prompting-based methods would
benefit from reporting a range of performance across plausible prompt formats,
instead of the currently-standard practice of reporting performance on a single
format. We also show that format performance only weakly correlates between
models, which puts into question the methodological validity of comparing
models with an arbitrarily chosen, fixed prompt format. To facilitate
systematic analysis we propose FormatSpread, an algorithm that rapidly
evaluates a sampled set of plausible prompt formats for a given task, and
reports the interval of expected performance without accessing model weights.
Furthermore, we present a suite of analyses that characterize the nature of
this sensitivity, including exploring the influence of particular atomic
perturbations and the internal representation of particular formats.
"
GPT-3-driven pedagogical agents for training children's curious  question-asking skills,Rania Abdelghani,http://arxiv.org/pdf/2211.14228v6.pdf,2022-11-25,"['cs.cl', 'cs.hc']",2211.14228v6.pdf,"  In order to train children's ability to ask curiosity-driven questions,
previous research has explored designing specific exercises relying on
providing semantic and linguistic cues to help formulate such questions. But
despite showing pedagogical efficiency, this method is still limited as it
relies on generating the said cues by hand, which can be a very costly process.
In this context, we propose to leverage advances in the natural language
processing field (NLP) and investigate the efficiency of using a large language
model (LLM) for automating the production of the pedagogical content of a
curious question-asking (QA) training. We study generating the said content
using the ""prompt-based"" method that consists of explaining the task to the LLM
in natural text. We evaluate the output using human experts annotations and
comparisons with hand-generated content. Results indeed suggested the relevance
and usefulness of this content. We also conduct a field study in a primary school
(75 children aged 9-10), where we evaluate children's QA performance when
given this training. We compare three types of content: 1) hand-generated content
that proposes ""closed"" cues leading to predefined questions; 2) GPT-3-generated
content that proposes the same type of cues; 3) GPT-3-generated content that
proposes ""open"" cues leading to several possible questions. We see a similar QA
performance between the two ""closed"" trainings (showing the scalability of the
approach using GPT-3), and a better one for participants with the ""open""
training. These results suggest the efficiency of using LLMs to support
children in generating more curious questions, using a natural language
prompting approach that affords usability by teachers and other users who are
not specialists in AI techniques. Furthermore, results also show that open-ended
content may be more suitable for training curious question-asking skills.
"
Towards using Few-Shot Prompt Learning for Automating Model Completion,Meriem Ben Chaaben,http://arxiv.org/pdf/2212.03404v1.pdf,2022-12-07,"['cs.se', 'cs.cl']",2212.03404v1.pdf,"  We propose a simple yet novel approach to improve completion in domain
modeling activities. Our approach exploits the power of large language models
by using few-shot prompt learning without the need to train or fine-tune those
models with large datasets that are scarce in this field. We implemented our
approach and tested it on the completion of static and dynamic domain diagrams.
Our initial evaluation shows that such an approach is effective and can be
integrated in different ways during the modeling activities.
"
Are Prompt-based Models Clueless?,Pride Kavumba,http://arxiv.org/pdf/2205.09295v2.pdf,2022-05-19,['cs.cl'],2205.09295v2.pdf,"  Finetuning large pre-trained language models with a task-specific head has
advanced the state-of-the-art on many natural language understanding
benchmarks. However, models with a task-specific head require a lot of training
data, making them susceptible to learning and exploiting dataset-specific
superficial cues that do not generalize to other datasets. Prompting has
reduced the data requirement by reusing the language model head and formatting
the task input to match the pre-training objective. Therefore, it is expected
that few-shot prompt-based models do not exploit superficial cues. This paper
presents an empirical examination of whether few-shot prompt-based models also
exploit superficial cues. Analyzing few-shot prompt-based models on MNLI, SNLI,
HANS, and COPA has revealed that prompt-based models also exploit superficial
cues. While the models perform well on instances with superficial cues, they
often underperform or only marginally outperform random accuracy on instances
without superficial cues.
"
Decomposed Prompting for Machine Translation Between Related Languages  using Large Language Models,Ratish Puduppully,http://arxiv.org/pdf/2305.13085v2.pdf,2023-05-22,['cs.cl'],2305.13085v2.pdf,"  This study investigates machine translation between related languages i.e.,
languages within the same family that share linguistic characteristics such as
word order and lexical similarity. Machine translation through few-shot
prompting leverages a small set of translation pair examples to generate
translations for test sentences. This procedure requires the model to learn how
to generate translations while simultaneously ensuring that token ordering is
maintained to produce a fluent and accurate translation. We propose that for
related languages, the task of machine translation can be simplified by
leveraging the monotonic alignment characteristic of such languages. We
introduce DecoMT, a novel approach of few-shot prompting that decomposes the
translation process into a sequence of word chunk translations. Through
automatic and human evaluation conducted on multiple related language pairs
across various language families, we demonstrate that our proposed approach of
decomposed prompting surpasses multiple established few-shot baseline
approaches. For example, DecoMT outperforms the strong few-shot prompting BLOOM
model with an average improvement of 8 chrF++ scores across the examined
languages.
"
Multilingual Large Language Models Are Not (Yet) Code-Switchers,Ruochen Zhang,http://arxiv.org/pdf/2305.14235v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14235v2.pdf,"  Multilingual Large Language Models (LLMs) have recently shown great
capabilities in a wide range of tasks, exhibiting state-of-the-art performance
through zero-shot or few-shot prompting methods. While there have been
extensive studies on their abilities in monolingual tasks, the investigation of
their potential in the context of code-switching (CSW), the practice of
alternating languages within an utterance, remains relatively uncharted. In
this paper, we provide a comprehensive empirical analysis of various
multilingual LLMs, benchmarking their performance across four tasks: sentiment
analysis, machine translation, summarization and word-level language
identification. Our results indicate that despite multilingual LLMs exhibiting
promising outcomes in certain tasks using zero or few-shot prompting, they
still underperform in comparison to fine-tuned models of much smaller scales.
We argue that current ""multilingualism"" in LLMs does not inherently imply
proficiency with code-switching texts, calling for future research to bridge
this discrepancy.
"
"Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango",Aman Madaan,http://arxiv.org/pdf/2209.07686v2.pdf,2022-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2209.07686v2.pdf,"  The past decade has witnessed dramatic gains in natural language processing
and an unprecedented scaling of large language models. These developments have
been accelerated by the advent of few-shot techniques such as chain of thought
(CoT) prompting. Specifically, CoT pushes the performance of large language
models in a few-shot setup by augmenting the prompts with intermediate steps.
Despite impressive results across various tasks, the reasons behind their
success have not been explored. This work uses counterfactual prompting to
develop a deeper understanding of CoT-based few-shot prompting mechanisms in
large language models. We first systematically identify and define the key
components of a prompt: symbols, patterns, and text. Then, we devise and
conduct an exhaustive set of experiments across four different tasks, by
querying the model with counterfactual prompts where only one of these
components is altered. Our experiments across three models (PaLM, GPT-3, and
CODEX) reveal several surprising findings and bring into question the
conventional wisdom around few-shot prompting. First, the presence of factual
patterns in a prompt is practically immaterial to the success of CoT. Second,
our results conclude that the primary role of intermediate steps may not be to
facilitate learning how to solve a task. The intermediate steps are rather a
beacon for the model to realize what symbols to replicate in the output to form
a factual answer. Further, text imbues patterns with commonsense knowledge and
meaning. Our empirical and qualitative analysis reveals that a symbiotic
relationship between text and patterns explains the success of few-shot
prompting: text helps extract commonsense from the question to help patterns,
and patterns enforce task understanding and direct text generation.
"
Understanding How Model Size Affects Few-shot Instruction Prompting,Ayrton San Joaquin,http://arxiv.org/pdf/2212.01907v1.pdf,2022-12-04,"['cs.cl', 'cs.lg', 'stat.ml']",2212.01907v1.pdf,"  Large Language Models are affected by the phenomena of memorizing and
forgetting their training data. But how do these vary by model size? We work
towards this question by investigating how the model size affects the model's
ability to discriminate a word's meaning in a given context. We introduce a
dataset called DeltaWords, which evaluates a model's ability to follow
instructions to select a sentence which replaces the target word with its
antonym. We show a weak inverse scaling trend, where task accuracy degrades as
model size increases, under extremely few-shot prompting regimes. We show that
increasing the number of examples tends to disproportionately benefit larger
models over smaller ones.
"
Prompted LLMs as Chatbot Modules for Long Open-domain Conversation,Gibbeum Lee,http://arxiv.org/pdf/2305.04533v1.pdf,2023-05-08,"['cs.cl', 'cs.ai', 'cs.lg']",2305.04533v1.pdf,"  In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for
creating high-quality conversational agents without the need for fine-tuning.
Our method utilizes pre-trained large language models (LLMs) as individual
modules for long-term consistency and flexibility, by using techniques such as
few-shot prompting, chain-of-thought (CoT), and external memory. Our human
evaluation results show that MPC is on par with fine-tuned chatbot models in
open-domain conversations, making it an effective solution for creating
consistent and engaging chatbots.
"
Internet-augmented language models through few-shot prompting for  open-domain question answering,Angeliki Lazaridou,http://arxiv.org/pdf/2203.05115v2.pdf,2022-03-10,"['cs.cl', 'cs.lg']",2203.05115v2.pdf,"  In this work, we aim to capitalize on the unique few-shot capabilities of
large-scale language models (LSLMs) to overcome some of their challenges with
respect to grounding to factual and up-to-date information. Motivated by
semi-parametric language models (LMs), which ground their decisions in external
retrieved evidence, we use few-shot prompting to learn to condition LMs on
information returned from the web using Google Search, a broad and constantly
updated knowledge source. Our approach does not involve fine-tuning or learning
additional parameters, thus making it applicable to any LM, offering therefore
a strong baseline. Indeed, we find that LMs conditioned on the web surpass
performance of closed-book models of similar, or even larger, model sizes in
open-domain question answering. Finally, we find that increasing the
inference-time compute of models, achieved via using multiple retrieved
evidences to generate multiple answers followed by a reranking stage that uses
scores generated by the same LMs, leads to better performance and alleviates
lower performance of smaller few-shot LMs. All in all, our findings suggest
that it might be beneficial to slow down the race towards the biggest model and
instead shift attention towards finding more effective ways to use models,
including but not limited to, better prompting or increasing inference-time
compute.
"
Decomposed Prompting: A Modular Approach for Solving Complex Tasks,Tushar Khot,http://arxiv.org/pdf/2210.02406v2.pdf,2022-10-05,['cs.cl'],2210.02406v2.pdf,"  Few-shot prompting is a surprisingly powerful way to use Large Language
Models (LLMs) to solve various tasks. However, this approach struggles as the
task complexity increases or when the individual reasoning steps of the task
themselves are hard to learn, especially when embedded in more complex tasks.
To address this, we propose Decomposed Prompting, a new approach to solve
complex tasks by decomposing them (via prompting) into simpler sub-tasks that
can be delegated to a library of prompting-based LLMs dedicated to these
sub-tasks. This modular structure allows each prompt to be optimized for its
specific sub-task, further decomposed if necessary, and even easily replaced
with more effective prompts, trained models, or symbolic functions if desired.
We show that the flexibility and modularity of Decomposed Prompting allows it
to outperform prior work on few-shot prompting using GPT3. On symbolic
reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into
even simpler solvable sub-tasks. When the complexity comes from the input
length, we can recursively decompose the task into the same task but with
smaller inputs. We also evaluate our approach on textual multi-step reasoning
tasks: on long-context multi-hop QA task, we can more effectively teach the
sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA,
we can incorporate a symbolic information retrieval within our decomposition
framework, leading to improved performance on both tasks. Datasets, Code and
Prompts available at https://github.com/allenai/DecomP.
"
Language Model Crossover: Variation through Few-Shot Prompting,Elliot Meyerson,http://arxiv.org/pdf/2302.12170v2.pdf,2023-02-23,['cs.ne'],2302.12170v2.pdf,"  This paper pursues the insight that language models naturally enable an
intelligent variation operator similar in spirit to evolutionary crossover. In
particular, language models of sufficient scale demonstrate in-context
learning, i.e. they can learn from associations between a small number of input
patterns to generate outputs incorporating such associations (also called
few-shot prompting). This ability can be leveraged to form a simple but
powerful variation operator, i.e. to prompt a language model with a few
text-based genotypes (such as code, plain-text sentences, or equations), and to
parse its corresponding output as those genotypes' offspring. The promise of
such language model crossover (which is simple to implement and can leverage
many different open-source language models) is that it enables a simple
mechanism to evolve semantically-rich text representations (with few
domain-specific tweaks), and naturally benefits from current progress in
language models. Experiments in this paper highlight the versatility of
language-model crossover, through evolving binary bit-strings, sentences,
equations, text-to-image prompts, and Python code. The conclusion is that
language model crossover is a promising method for evolving genomes
representable as text.
"
Distilling Step-by-Step! Outperforming Larger Language Models with Less  Training Data and Smaller Model Sizes,Cheng-Yu Hsieh,http://arxiv.org/pdf/2305.02301v2.pdf,2023-05-03,"['cs.cl', 'cs.ai', 'cs.lg']",2305.02301v2.pdf,"  Deploying large language models (LLMs) is challenging because they are memory
inefficient and compute-intensive for practical applications. In reaction,
researchers train smaller task-specific models by either finetuning with human
labels or distilling using LLM-generated labels. However, finetuning and
distillation require large amounts of training data to achieve comparable
performance to LLMs. We introduce Distilling step-by-step, a new mechanism that
(a) trains smaller models that outperform LLMs, and (b) does so while requiring
less training data than finetuning or distillation. Our method
extracts LLM rationales as additional supervision for training small models
within a multi-task framework. We present three findings across 4 NLP
benchmarks: First, compared to both finetuning and distillation, our mechanism
achieves better performance with much fewer labeled/unlabeled training
examples. Second, compared to few-shot prompted LLMs, we achieve better
performance using substantially smaller model sizes. Third, we reduce both the
model size and the amount of data required to outperform LLMs; our finetuned
770M T5 model outperforms the few-shot prompted 540B PaLM model using only 80%
of available data on a benchmark, whereas standard finetuning of the same T5
model struggles to match it even when using 100% of the dataset. We release
the code at:
https://github.com/google-research/distilling-step-by-step .
"
Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning,Zhanming Jie,http://arxiv.org/pdf/2305.18170v2.pdf,2023-05-29,['cs.cl'],2305.18170v2.pdf,"  Chain-of-thought (CoT) prompting with large language models has proven
effective in numerous natural language processing tasks, but designing prompts
that generalize well to diverse problem types can be challenging, especially in
the context of math word problem (MWP) solving. Additionally, it is common to
have a large amount of training data that have a better diversity coverage but
CoT annotations are not available, which limits the use of supervised learning
techniques. To address these issues, we investigate two approaches to leverage
the training data in a few-shot prompting scenario: dynamic program prompting
and program distillation. Our approach is largely inspired by Gao et al.
(2022), who proposed replacing the CoT with programs as the intermediate
reasoning step. Such a prompting strategy allows us to accurately
verify the answer correctness through program execution in MWP solving. Our
dynamic program prompting involves annotating the training data by sampling
correct programs from a large language model, while program distillation
involves adapting a smaller model to the program-annotated training data. Our
experiments on three standard MWP datasets demonstrate the effectiveness of
these approaches, yielding significant improvements over previous baselines for
prompting and fine-tuning. Our results suggest that leveraging a large amount
of training data can improve the generalization ability of prompts and boost
the performance of fine-tuned small models in MWP solving.
"
Zero- and Few-Shot Prompting with LLMs: A Comparative Study with  Fine-tuned Models for Bangla Sentiment Analysis,Md. Arid Hasan,http://arxiv.org/pdf/2308.10783v1.pdf,2023-08-21,"['cs.cl', 'cs.lg', '68t50', 'i.2.7']",2308.10783v1.pdf,"  The rapid expansion of the digital world has propelled sentiment analysis
into a critical tool across diverse sectors such as marketing, politics,
customer service, and healthcare. While there have been significant
advancements in sentiment analysis for widely spoken languages, low-resource
languages, such as Bangla, remain largely under-researched due to resource
constraints. Furthermore, the recent unprecedented performance of Large
Language Models (LLMs) in various applications highlights the need to evaluate
them in the context of low-resource languages. In this study, we present a
sizeable manually annotated dataset encompassing 33,605 Bangla news tweets and
Facebook comments. We also investigate zero- and few-shot in-context learning
with several language models, including Flan-T5, GPT-4, and Bloomz, offering a
comparative analysis against fine-tuned models. Our findings suggest that
monolingual transformer-based models consistently outperform other models, even
in zero- and few-shot scenarios. To foster continued exploration, we intend to
make this dataset and our research tools publicly available to the broader
research community.
"
FOLIO: Natural Language Reasoning with First-Order Logic,Simeng Han,http://arxiv.org/pdf/2209.00840v1.pdf,2022-09-02,['cs.cl'],2209.00840v1.pdf,"  We present FOLIO, a human-annotated, open-domain, and logically complex and
diverse dataset for reasoning in natural language (NL), equipped with first
order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique
conclusions), each paired with one of 487 sets of premises which serve as rules
to be used to deductively reason for the validity of each conclusion. The
logical correctness of premises and conclusions is ensured by their parallel
FOL annotations, which are automatically verified by our FOL inference engine.
In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically
constitute a new NL-FOL translation dataset using FOL as the logical form. Our
experiments on FOLIO systematically evaluate the FOL reasoning ability of
supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and
few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For
NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that
one of the most capable Large Language Models (LLMs) publicly available, GPT-3
davinci, achieves only slightly better than random results with few-shot
prompting on a subset of FOLIO, and the model is especially poor at predicting
the correct truth values for False and Unknown conclusions. Our dataset and
code are available at https://github.com/Yale-LILY/FOLIO.
"
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them,Mirac Suzgun,http://arxiv.org/pdf/2210.09261v1.pdf,2022-10-17,"['cs.cl', 'cs.ai']",2210.09261v1.pdf,"  BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that
focuses on tasks believed to be beyond the capabilities of current language
models. Language models have already made good progress on this benchmark, with
the best model in the BIG-Bench paper outperforming average reported
human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But
on what tasks do language models fall short of average human-rater performance,
and are those tasks actually unsolvable by current language models?
  In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we
call BIG-Bench Hard (BBH). These are the tasks for which prior language model
evaluations did not outperform the average human-rater. We find that applying
chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the
average human-rater performance on 10 of the 23 tasks, and Codex
(code-davinci-002) to surpass the average human-rater performance on 17 of the
23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot
prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al.,
2022), substantially underestimates the best performance and capabilities of
language models, which is better captured via CoT prompting. As further
analysis, we explore the interaction between CoT and model scale on BBH,
finding that CoT enables emergent task performance on several BBH tasks with
otherwise flat scaling curves.
"
Mental-LLM: Leveraging Large Language Models for Mental Health  Prediction via Online Text Data,Xuhai Xu,http://arxiv.org/pdf/2307.14385v3.pdf,2023-07-26,"['cs.cl', '68u35', 'h.5.2; i.2.m']",2307.14385v3.pdf,"  Advances in large language models (LLMs) have empowered a variety of
applications. However, there is still a significant gap in research when it
comes to understanding and enhancing the capabilities of LLMs in the field of
mental health. In this work, we present the first comprehensive evaluation of
multiple LLMs, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4, on
various mental health prediction tasks via online text data. We conduct a broad
range of experiments, covering zero-shot prompting, few-shot prompting, and
instruction fine-tuning. The results indicate a promising yet limited
performance of LLMs with zero-shot and few-shot prompt designs for the mental
health tasks. More importantly, our experiments show that instruction
finetuning can significantly boost the performance of LLMs for all tasks
simultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5,
outperform the best prompt design of GPT-3.5 (25 and 15 times larger,
respectively) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150
times larger) by 4.8%.
They further perform on par with the state-of-the-art task-specific language
model. We also conduct an exploratory case study on LLMs' capability on the
mental health reasoning tasks, illustrating the promising capability of certain
models such as GPT-4. We summarize our findings into a set of action guidelines
for potential methods to enhance LLMs' capability for mental health tasks.
Meanwhile, we also emphasize the important limitations before achieving
deployability in real-world mental health settings, such as known racial and
gender bias. We highlight the important ethical risks accompanying this line of
research.
"
Prompt Programming for Large Language Models: Beyond the Few-Shot  Paradigm,Laria Reynolds,http://arxiv.org/pdf/2102.07350v1.pdf,2021-02-15,"['cs.cl', 'cs.ai']",2102.07350v1.pdf,"  Prevailing methods for mapping large generative language models to supervised
tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as
a case study, we show that 0-shot prompts can significantly outperform few-shot
prompts. We suggest that the function of few-shot examples in these cases is
better described as locating an already learned task rather than meta-learning.
This analysis motivates rethinking the role of prompts in controlling and
evaluating powerful language models. In this work, we discuss methods of prompt
programming, emphasizing the usefulness of considering prompts through the lens
of natural language. We explore techniques for exploiting the capacity of
narratives and cultural anchors to encode nuanced intentions and techniques for
encouraging deconstruction of a problem into components before producing a
verdict. Informed by this more encompassing theory of prompt programming, we
also introduce the idea of a metaprompt that seeds the model to generate its
own natural language prompts for a range of tasks. Finally, we discuss how
these more general methods of interacting with language models can be
incorporated into existing and future benchmarks and practical applications.
"
Fantastically Ordered Prompts and Where to Find Them: Overcoming  Few-Shot Prompt Order Sensitivity,Yao Lu,http://arxiv.org/pdf/2104.08786v2.pdf,2021-04-18,"['cs.cl', 'cs.ai']",2104.08786v2.pdf,"  When primed with only a handful of training samples, very large, pretrained
language models such as GPT-3 have shown competitive results when compared to
fully-supervised, fine-tuned, large, pretrained language models. We demonstrate
that the order in which the samples are provided can make the difference
between near state-of-the-art and random guess performance: essentially some
permutations are ""fantastic"" and some not. We analyse this phenomenon in
detail, establishing that: it is present across model sizes (even for the
largest current models), it is not related to a specific subset of samples, and
that a given good permutation for one model is not transferable to another.
While one could use a development set to determine which permutations are
performant, this would deviate from the true few-shot setting as it requires
additional annotated data. Instead, we use the generative nature of language
models to construct an artificial development set and based on entropy
statistics of the candidate permutations on this set, we identify performant
prompts. Our method yields a 13% relative improvement for GPT-family models
across eleven different established text classification tasks.
"
Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning,Prasetya Ajie Utama,http://arxiv.org/pdf/2109.04144v1.pdf,2021-09-09,"['cs.cl', 'cs.ai']",2109.04144v1.pdf,"  Recent prompt-based approaches allow pretrained language models to achieve
strong performances on few-shot finetuning by reformulating downstream tasks as
a language modeling problem. In this work, we demonstrate that, despite its
advantages on low data regimes, finetuned prompt-based models for sentence pair
classification tasks still suffer from a common pitfall of adopting inference
heuristics based on lexical overlap, e.g., models incorrectly assuming a
sentence pair is of the same meaning because they consist of the same set of
words. Interestingly, we find that this particular inference heuristic is
significantly less present in the zero-shot evaluation of the prompt-based
model, indicating how finetuning can be destructive to useful knowledge learned
during the pretraining. We then show that adding a regularization that
preserves pretraining weights is effective in mitigating this destructive
tendency of few-shot finetuning. Our evaluation on three datasets demonstrates
promising improvements on the three corresponding challenge datasets used to
diagnose the inference heuristics.
"
Towards Zero-Label Language Learning,Zirui Wang,http://arxiv.org/pdf/2109.09193v1.pdf,2021-09-19,"['cs.cl', 'cs.lg']",2109.09193v1.pdf,"  This paper explores zero-label learning in Natural Language Processing (NLP),
whereby no human-annotated data is used anywhere during training and models are
trained purely on synthetic data. At the core of our framework is a novel
approach for better leveraging the powerful pretrained language models.
Specifically, inspired by the recent success of few-shot inference on GPT-3, we
present a training data creation procedure named Unsupervised Data Generation
(UDG), which leverages few-shot prompts to synthesize high-quality training
data without real human annotations. Our method enables zero-label learning as
we train task-specific models solely on the synthetic data, yet we achieve
better or comparable results from strong baseline models trained on
human-labeled data. Furthermore, when mixed with labeled data, our approach
serves as a highly effective data augmentation procedure, achieving new
state-of-the-art results on the SuperGLUE benchmark.
"
P4E: Few-Shot Event Detection as Prompt-Guided Identification and  Localization,Sha Li,http://arxiv.org/pdf/2202.07615v3.pdf,2022-02-15,['cs.cl'],2202.07615v3.pdf,"  We propose P4E, an identify-and-localize event detection framework that
integrates the best of few-shot prompting and structured prediction. Our
framework decomposes event detection into an identification task and a
localization task. For the identification task, which we formulate as
multi-label classification, we leverage cloze-based prompting to align our
objective with the pre-training task of language models, allowing our model to
quickly adapt to new event types. We then employ an event type-agnostic
sequence labeling model to localize the event trigger conditioned on the
identification output. This heterogeneous model design allows P4E to quickly
learn new event types without sacrificing the ability to make structured
predictions. Our experiments demonstrate the effectiveness of our proposed
design, and P4E shows superior performance for few-shot event detection on
benchmark datasets FewEvent and MAVEN and comparable performance to SOTA for
fully-supervised event detection on ACE.
"
Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual  Style Transfer with Small Language Models,Mirac Suzgun,http://arxiv.org/pdf/2205.11503v1.pdf,2022-05-23,['cs.cl'],2205.11503v1.pdf,"  We propose a method for arbitrary textual style transfer (TST)--the task of
transforming a text into any given style--utilizing general-purpose pre-trained
language models. Our method, Prompt-and-Rerank, is based on a mathematical
formulation of the TST task, decomposing it into three constituent components:
textual similarity, target style strength, and fluency. Specifically, our
method first uses zero-shot or few-shot prompting to obtain a set of candidate
generations in the target style, and then re-ranks these candidates according
to a combination of the three components above. Empirically, our method enables
small pre-trained language models to perform on par with state-of-the-art
large-scale models while consuming two orders of magnitude less compute and
memory. Finally, we conduct a systematic investigation of the effect of model
size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on
style transfer quality across seven diverse textual style transfer datasets.
"
Bootstrapping Multilingual Semantic Parsers using Large Language Models,Abhijeet Awasthi,http://arxiv.org/pdf/2210.07313v2.pdf,2022-10-13,"['cs.cl', 'cs.lg']",2210.07313v2.pdf,"  Despite cross-lingual generalization demonstrated by pre-trained multilingual
models, the translate-train paradigm of transferring English datasets across
multiple languages remains a key mechanism for training task-specific
multilingual models. However, for many low-resource languages, the availability
of a reliable translation service entails significant amounts of costly
human-annotated translation pairs. Further, translation services may continue
to be brittle due to domain mismatch between task-specific input text and
general-purpose text used for training translation models. For multilingual
semantic parsing, we demonstrate the effectiveness and flexibility offered by
large language models (LLMs) for translating English datasets into several
languages via few-shot prompting. Through extensive comparisons on two public
datasets, MTOP and MASSIVE, spanning 50 languages and several domains, we show
that our method of translating data using LLMs outperforms a strong
translate-train baseline on 41 out of 50 languages. We study the key design
choices that enable more effective multilingual data translation via prompted
LLMs.
"
Prompting GPT-3 To Be Reliable,Chenglei Si,http://arxiv.org/pdf/2210.09150v2.pdf,2022-10-17,['cs.cl'],2210.09150v2.pdf,"  Large language models (LLMs) show impressive abilities via few-shot
prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use
in real-world language applications. However, the crucial problem of how to
improve the reliability of GPT-3 is still under-explored. While reliability is
a broad and vaguely defined term, we decompose reliability into four main
facets that correspond to the existing framework of ML safety and are
well-recognized to be important: generalizability, social biases, calibration,
and factuality. Our core contribution is to establish simple and effective
prompts that improve GPT-3's reliability as it: 1) generalizes
out-of-distribution, 2) balances demographic distribution and uses natural
language instructions to reduce social biases, 3) calibrates output
probabilities, and 4) updates the LLM's factual knowledge and reasoning chains.
With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised
models on all these facets. We release all processed datasets, evaluation
scripts, and model predictions. Our systematic empirical study not only sheds
new insights on the reliability of prompting LLMs, but more importantly, our
prompting strategies can help practitioners more reliably use LLMs like GPT-3.
"
Exploring The Landscape of Distributional Robustness for Question  Answering Models,Anas Awadalla,http://arxiv.org/pdf/2210.12517v1.pdf,2022-10-22,"['cs.cl', 'cs.lg']",2210.12517v1.pdf,"  We conduct a large empirical evaluation to investigate the landscape of
distributional robustness in question answering. Our investigation spans over
350 models and 16 question answering datasets, including a diverse set of
architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter
tuning, in-context learning, etc.). We find that, in many cases, model
variations do not affect robustness and in-distribution performance alone
determines out-of-distribution performance. Moreover, our findings indicate
that i) zero-shot and in-context learning methods are more robust to
distribution shifts than fully fine-tuned models; ii) few-shot prompt
fine-tuned models exhibit better robustness than few-shot fine-tuned span
prediction models; iii) parameter-efficient and robustness enhancing training
methods provide no significant robustness improvements. In addition, we
publicly release all evaluations to encourage researchers to further analyze
robustness trends for question answering models.
"
"""Covid vaccine is against Covid but Oxford vaccine is made at Oxford!""  Semantic Interpretation of Proper Noun Compounds",Keshav Kolluru,http://arxiv.org/pdf/2210.13039v1.pdf,2022-10-24,['cs.cl'],2210.13039v1.pdf,"  Proper noun compounds, e.g., ""Covid vaccine"", convey information in a
succinct manner (a ""Covid vaccine"" is a ""vaccine that immunizes against the
Covid disease""). These are commonly used in short-form domains, such as news
headlines, but are largely ignored in information-seeking applications. To
address this limitation, we release a new manually annotated dataset, ProNCI,
consisting of 22.5K proper noun compounds along with their free-form semantic
interpretations. ProNCI is 60 times larger than prior noun compound datasets
and also includes non-compositional examples, which have not been previously
explored. We experiment with various neural models for automatically generating
the semantic interpretations from proper noun compounds, ranging from few-shot
prompting to supervised learning, with varying degrees of knowledge about the
constituent nouns. We find that adding targeted knowledge, particularly about
the common noun, results in performance gains of up to 2.8%. Finally, we
integrate our model-generated interpretations with an existing Open IE system
and observe a 7.5% increase in yield at a precision of 85%. The dataset and
code are available at https://github.com/dair-iitd/pronci.
"
Prompting PaLM for Translation: Assessing Strategies and Performance,David Vilar,http://arxiv.org/pdf/2211.09102v3.pdf,2022-11-16,['cs.cl'],2211.09102v3.pdf,"  Large language models (LLMs) that have been trained on multilingual but not
parallel text exhibit a remarkable ability to translate between languages. We
probe this ability in an in-depth study of the pathways language model (PaLM),
which has demonstrated the strongest machine translation (MT) performance among
similarly-trained LLMs to date. We investigate various strategies for choosing
translation examples for few-shot prompting, concluding that example quality is
the most important factor. Using optimized prompts, we revisit previous
assessments of PaLM's MT capabilities with more recent test sets, modern MT
metrics, and human evaluation, and find that its performance, while impressive,
still lags that of state-of-the-art supervised systems. We conclude by
providing an analysis of PaLM's MT output which reveals some interesting
properties and prospects for future work.
"
PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained  Image-Language Models,Minghua Liu,http://arxiv.org/pdf/2212.01558v2.pdf,2022-12-03,"['cs.cv', 'cs.ro']",2212.01558v2.pdf,"  Generalizable 3D part segmentation is important but challenging in vision and
robotics. Training deep models via conventional supervised methods requires
large-scale 3D datasets with fine-grained part annotations, which are costly to
collect. This paper explores an alternative way for low-shot part segmentation
of 3D point clouds by leveraging a pretrained image-language model, GLIP, which
achieves superior performance on open-vocabulary 2D detection. We transfer the
rich knowledge from 2D to 3D through GLIP-based part detection on point cloud
rendering and a novel 2D-to-3D label lifting algorithm. We also utilize
multi-view 3D priors and few-shot prompt tuning to boost performance
significantly. Extensive evaluation on PartNet and PartNet-Mobility datasets
shows that our method enables excellent zero-shot 3D part segmentation. Our
few-shot version not only outperforms existing few-shot approaches by a large
margin but also achieves highly competitive results compared to the fully
supervised counterpart. Furthermore, we demonstrate that our method can be
directly applied to iPhone-scanned point clouds without significant domain
gaps.
"
Natural Language to Code Generation in Interactive Data Science  Notebooks,Pengcheng Yin,http://arxiv.org/pdf/2212.09248v1.pdf,2022-12-19,"['cs.cl', 'cs.se']",2212.09248v1.pdf,"  Computational notebooks, such as Jupyter notebooks, are interactive computing
environments that are ubiquitous among data scientists to perform data
wrangling and analytic tasks. To measure the performance of AI pair programmers
that automatically synthesize programs for those tasks given natural language
(NL) intents from users, we build ARCADE, a benchmark of 1082 code generation
problems using the pandas data analysis framework in data science notebooks.
ARCADE features multiple rounds of NL-to-code problems from the same notebook.
It requires a model to understand rich multi-modal contexts, such as existing
notebook cells and their execution states as well as previous turns of
interaction. To establish a strong baseline on this challenging task, we
develop PaChiNCo, a 62B code language model (LM) for Python computational
notebooks, which significantly outperforms public code LMs. Finally, we explore
few-shot prompting strategies to elicit better code with step-by-step
decomposition and NL explanation, showing the potential to improve the
diversity and explainability of model predictions.
"
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language,Mehran Kazemi,http://arxiv.org/pdf/2212.13894v2.pdf,2022-12-20,"['cs.ai', 'cs.lg']",2212.13894v2.pdf,"  Remarkable progress has been made on automated reasoning with natural text,
by using Language Models (LMs) and methods such as Chain-of-Thought and
Selection-Inference. These techniques search for proofs in the forward
direction from axioms to the conclusion, which suffers from a combinatorial
explosion of the search space, and thus high failure rates for problems
requiring longer chains of reasoning. The classical automated reasoning
literature has shown that reasoning in the backward direction (i.e. from the
intended conclusion to supporting axioms) is significantly more efficient at
proof-finding. Importing this intuition into the LM setting, we develop a
Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into
four sub-modules. These sub-modules are simply implemented by few-shot prompted
LM inference. We show that LAMBADA achieves sizable accuracy boosts over
state-of-the-art forward reasoning methods on challenging logical reasoning
datasets, particularly when deep and accurate proof chains are required.
"
Can GPT-3 Perform Statutory Reasoning?,Andrew Blair-Stanek,http://arxiv.org/pdf/2302.06100v2.pdf,2023-02-13,"['cs.cl', 'cs.ai']",2302.06100v2.pdf,"  Statutory reasoning is the task of reasoning with facts and statutes, which
are rules written in natural language by a legislature. It is a basic legal
skill. In this paper we explore the capabilities of the most capable GPT-3
model, text-davinci-003, on an established statutory-reasoning dataset called
SARA. We consider a variety of approaches, including dynamic few-shot
prompting, chain-of-thought prompting, and zero-shot prompting. While we
achieve results with GPT-3 that are better than the previous best published
results, we also identify several types of clear errors it makes. We
investigate why these errors happen. We discover that GPT-3 has imperfect prior
knowledge of the actual U.S. statutes on which SARA is based. More importantly,
we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen
during training. We find GPT-3 performs poorly at answering straightforward
questions about these simple synthetic statutes.
"
STREET: A Multi-Task Structured Reasoning and Explanation Benchmark,Danilo Ribeiro,http://arxiv.org/pdf/2302.06729v1.pdf,2023-02-13,"['cs.cl', 'cs.ai', 'i.2.7; i.2.6']",2302.06729v1.pdf,"  We introduce STREET, a unified multi-task and multi-domain natural language
reasoning and explanation benchmark. Unlike most existing question-answering
(QA) datasets, we expect models to not only answer questions, but also produce
step-by-step structured explanations describing how premises in the question
are used to produce intermediate conclusions that can prove the correctness of
a certain answer. We perform extensive evaluation with popular language models,
such as few-shot prompted GPT-3 and fine-tuned T5. We find that these models
still lag behind human performance when producing such structured reasoning
steps. We believe this work will provide a way for the community to better
train and test systems on multi-step reasoning and explanations in natural
language.
"
ADELT: Transpilation Between Deep Learning Frameworks,Linyuan Gong,http://arxiv.org/pdf/2303.03593v1.pdf,2023-03-07,"['cs.cl', 'cs.lg']",2303.03593v1.pdf,"  We propose Adversarial DEep Learning Transpiler (ADELT) for source-to-source
transpilation between deep learning frameworks. Unlike prior approaches, we
decouple the transpilation of code skeletons and the mapping of API keywords
(an API function name or a parameter name). ADELT transpiles code skeletons
using few-shot prompting on large language models. Based on contextual embeddings
extracted by a BERT for code, we train aligned API embeddings in a
domain-adversarial setup, upon which we generate a dictionary for keyword
translation. The model is trained on our unlabeled DL corpus from web crawl
data, without using any hand-crafted rules and parallel data. Our method
outperforms state-of-the-art transpilers on multiple transpilation pairs
including PyTorch-Keras and PyTorch-MXNet by 15.9pts and 12.0pts in exact match
scores respectively.
"
Query2doc: Query Expansion with Large Language Models,Liang Wang,http://arxiv.org/pdf/2303.07678v2.pdf,2023-03-14,"['cs.ir', 'cs.cl']",2303.07678v2.pdf,"  This paper introduces a simple yet effective query expansion approach,
denoted as query2doc, to improve both sparse and dense retrieval systems. The
proposed method first generates pseudo-documents by few-shot prompting large
language models (LLMs), and then expands the query with generated
pseudo-documents. LLMs are trained on web-scale text corpora and are adept at
knowledge memorization. The pseudo-documents from LLMs often contain highly
relevant information that can aid in query disambiguation and guide the
retrievers. Experimental results demonstrate that query2doc boosts the
performance of BM25 by 3% to 15% on ad-hoc IR datasets, such as MS-MARCO and
TREC DL, without any model fine-tuning. Furthermore, our method also benefits
state-of-the-art dense retrievers in terms of both in-domain and out-of-domain
results.
"
How to Design Translation Prompts for ChatGPT: An Empirical Study,Yuan Gao,http://arxiv.org/pdf/2304.02182v2.pdf,2023-04-05,['cs.cl'],2304.02182v2.pdf,"  The recently released ChatGPT has demonstrated surprising abilities in
natural language understanding and natural language generation. Machine
translation relies heavily on the abilities of language understanding and
generation. Thus, in this paper, we explore how to assist machine translation
with ChatGPT. We adopt several translation prompts on a wide range of
translations. Our experimental results show that ChatGPT with designed
translation prompts can achieve comparable or better performance over
commercial translation systems for high-resource language translations. We
further evaluate the translation quality using multiple references, and ChatGPT
achieves superior performance compared to commercial systems. We also conduct
experiments on domain-specific translations; the results show that
ChatGPT is able to comprehend the provided domain keywords and adjust
accordingly to output proper translations. Finally, we experiment with few-shot
prompting, which shows consistent improvements across different base prompts. Our work
provides empirical evidence that ChatGPT still has great potential in
translations.
"
Boosted Prompt Ensembles for Large Language Models,Silviu Pitis,http://arxiv.org/pdf/2304.05970v1.pdf,2023-04-12,"['cs.cl', 'cs.lg']",2304.05970v1.pdf,"  Methods such as chain-of-thought prompting and self-consistency have pushed
the frontier of language model reasoning performance with no additional
training. To further improve performance, we propose a prompt ensembling method
for large language models, which uses a small dataset to construct a set of few
shot prompts that together comprise a ``boosted prompt ensemble''. The few shot
examples for each prompt are chosen in a stepwise fashion to be ``hard''
examples on which the previous step's ensemble is uncertain. We show that this
outperforms single-prompt output-space ensembles and bagged prompt-space
ensembles on the GSM8k and AQuA datasets, among others. We propose both
train-time and test-time versions of boosted prompting that use different
levels of available annotation and conduct a detailed empirical study of our
algorithm.
"
Multi-Party Chat: Conversational Agents in Group Settings with Humans  and Models,Jimmy Wei,http://arxiv.org/pdf/2304.13835v3.pdf,2023-04-26,"['cs.cl', 'cs.lg']",2304.13835v3.pdf,"  Current dialogue research primarily studies pairwise (two-party)
conversations, and does not address the everyday setting where more than two
speakers converse together. In this work, we both collect and evaluate
multi-party conversations to study this more general case. We use the LIGHT
environment to construct grounded conversations, where each participant has an
assigned character to role-play. We thus evaluate the ability of language
models to act as one or more characters in such conversations. Models require
two skills that pairwise-trained models appear to lack: (1) being able to
decide when to talk; (2) producing coherent utterances grounded on multiple
characters. We compare models trained on our new dataset to existing
pairwise-trained dialogue models, as well as large language models with
few-shot prompting. We find that our new dataset, MultiLIGHT, which we will
publicly release, can help bring significant improvements in the group setting.
"
Transferring Procedural Knowledge across Commonsense Tasks,Yifan Jiang,http://arxiv.org/pdf/2304.13867v2.pdf,2023-04-26,['cs.cl'],2304.13867v2.pdf,"  Stories about everyday situations are an essential part of human
communication, motivating the need to develop AI agents that can reliably
understand these stories. Despite the long list of supervised methods for story
completion and procedural understanding, current AI has no mechanisms to
automatically track and explain procedures in unseen stories. To bridge this
gap, we study the ability of AI models to transfer procedural knowledge to
novel narrative tasks in a transparent manner. We design LEAP: a comprehensive
framework that integrates state-of-the-art modeling architectures, training
regimes, and augmentation strategies based on both natural and synthetic
stories. To address the lack of densely annotated training data, we devise a
robust automatic labeler based on few-shot prompting to enhance the augmented
data. Our experiments with in- and out-of-domain tasks reveal insights into the
interplay of different architectures, training regimes, and augmentation
strategies. LEAP's labeler has a clear positive impact on out-of-domain
datasets, while the resulting dense annotation provides native explainability.
"
Explainable Verbal Reasoner Plus (EVR+): A Natural Language Reasoning  Framework that Supports Diverse Compositional Reasoning,Zhengzhong Liang,http://arxiv.org/pdf/2305.00061v1.pdf,2023-04-28,"['cs.cl', 'cs.ai']",2305.00061v1.pdf,"  Language models have been successfully applied to a variety of reasoning
tasks in NLP, yet the language models still suffer from compositional
generalization. In this paper we present Explainable Verbal Reasoner Plus
(EVR+), a reasoning framework that enhances language models' compositional
reasoning ability by (1) allowing the model to explicitly generate and execute
symbolic operators, and (2) allowing the model to decompose a complex task into
several simpler ones in a flexible manner. Compared with its predecessor
Explainable Verbal Reasoner (EVR) and other previous approaches adopting
similar ideas, our framework supports more diverse types of reasoning such as
nested loops and different types of recursion. To evaluate our reasoning
framework, we build a synthetic dataset with five tasks that require
compositional reasoning. Results show that our reasoning framework can enhance
the language model's compositional generalization performance on the five
tasks, using a fine-tuned language model. We also discuss the possibility of,
and the challenges in, combining our reasoning framework with a few-shot
prompted language model.
"
Revisiting Relation Extraction in the era of Large Language Models,Somin Wadhwa,http://arxiv.org/pdf/2305.05003v1.pdf,2023-05-08,['cs.cl'],2305.05003v1.pdf,"  Relation extraction (RE) is the core NLP task of inferring semantic
relationships between entities from text. Standard supervised RE techniques
entail training modules to tag tokens comprising entity spans and then predict
the relationship between them. Recent work has instead treated the problem as a
\emph{sequence-to-sequence} task, linearizing relations between entities as
target strings to be generated conditioned on the input. Here we push the
limits of this approach, using larger language models (GPT-3 and Flan-T5 large)
than considered in prior work and evaluating their performance on standard RE
tasks under varying levels of supervision. We address issues inherent to
evaluating generative approaches to RE by doing human evaluations, in lieu of
relying on exact matching. Under this refined evaluation, we find that: (1)
Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly
equivalent to existing fully supervised models; (2) Flan-T5 is not as capable
in the few-shot setting, but supervising and fine-tuning it with
Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA
results. We release this model as a new baseline for RE tasks.
"
Generating medically-accurate summaries of patient-provider dialogue: A  multi-stage approach using large language models,Varun Nair,http://arxiv.org/pdf/2305.05982v1.pdf,2023-05-10,"['cs.cl', 'cs.ai', 'cs.lg']",2305.05982v1.pdf,"  A medical provider's summary of a patient visit serves several critical
purposes, including clinical decision-making, facilitating hand-offs between
providers, and as a reference for the patient. An effective summary is required
to be coherent and accurately capture all the medically relevant information in
the dialogue, despite the complexity of patient-generated language. Even minor
inaccuracies in visit summaries (for example, summarizing ""patient does not
have a fever"" when a fever is present) can be detrimental to the outcome of
care for the patient.
  This paper tackles the problem of medical conversation summarization by
discretizing the task into several smaller dialogue-understanding tasks that
are sequentially built upon. First, we identify medical entities and their
affirmations within the conversation to serve as building blocks. We study
dynamically constructing few-shot prompts for tasks by conditioning on relevant
patient information and use GPT-3 as the backbone for our experiments. We also
develop GPT-derived summarization metrics to measure performance against
reference summaries quantitatively. Both our human evaluation study and metrics
for medical correctness show that summaries generated using this approach are
clinically accurate and outperform the baseline approach of summarizing the
dialog in a zero-shot, single-prompt setting.
"
ZARA: Improving Few-Shot Self-Rationalization for Small Language Models,Wei-Lin Chen,http://arxiv.org/pdf/2305.07355v2.pdf,2023-05-12,['cs.cl'],2305.07355v2.pdf,"  Language models (LMs) that jointly generate end-task answers as well as
free-text rationales are known as self-rationalization models. Recent works
demonstrate great performance gain for self-rationalization by few-shot
prompting LMs with rationale-augmented exemplars. However, the ability to
benefit from explanations only emerges with large-scale LMs, which have poor
accessibility. In this work, we explore the less-studied setting of leveraging
explanations for small LMs to improve few-shot self-rationalization. We first
revisit the relationship between rationales and answers. Inspired by the
implicit mental process of how human beings assess explanations, we present a
novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to
automatically construct pseudo-parallel data for self-training by reducing the
problem of plausibility judgement to natural language inference. Experimental
results show ZARA achieves SOTA performance on the FEB benchmark, for both the
task accuracy and the explanation metric. In addition, we conduct human and
quantitative evaluation validating ZARA's ability to automatically identify
plausible and accurate rationale-answer pairs.
"
Natural Language Decomposition and Interpretation of Complex Utterances,Harsh Jhamtani,http://arxiv.org/pdf/2305.08677v1.pdf,2023-05-15,['cs.cl'],2305.08677v1.pdf,"  Natural language interfaces often require supervised data to translate user
requests into programs, database queries, or other structured intent
representations. During data collection, it can be difficult to anticipate and
formalize the full range of user needs -- for example, in a system designed to
handle simple requests (like $\textit{find my meetings tomorrow}$ or
$\textit{move my meeting with my manager to noon})$, users may also express
more elaborate requests (like $\textit{swap all my calls on Monday and
Tuesday}$). We introduce an approach for equipping a simple language-to-code
model to handle complex utterances via a process of hierarchical natural
language decomposition. Our approach uses a pre-trained language model to
decompose a complex utterance into a sequence of smaller natural language
steps, then interprets each step using the language-to-code model. To test our
approach, we collect and release DeCU -- a new NL-to-program benchmark to
evaluate Decomposition of Complex Utterances. Experiments show that the
proposed approach enables the interpretation of complex utterances with almost
no complex training data, while outperforming standard few-shot prompting
approaches.
"
Visualizing Linguistic Diversity of Text Datasets Synthesized by Large  Language Models,Emily Reif,http://arxiv.org/pdf/2305.11364v2.pdf,2023-05-19,"['cs.cl', 'cs.ai']",2305.11364v2.pdf,"  Large language models (LLMs) can be used to generate smaller, more refined
datasets via few-shot prompting for benchmarking, fine-tuning or other use
cases. However, understanding and evaluating these datasets is difficult, and
the failure modes of LLM-generated data are still not well understood.
Specifically, the data can be repetitive in surprising ways, not only
semantically but also syntactically and lexically. We present LinguisticLens, a
novel interactive visualization tool for making sense of and analyzing
syntactic diversity of LLM-generated datasets. LinguisticLens clusters text
along syntactic, lexical, and semantic axes. It supports hierarchical
visualization of a text dataset, allowing users to quickly scan for an overview
and inspect individual examples. The live demo is available at
shorturl.at/zHOUV.
"
Improved Compositional Generalization by Generating Demonstrations for  Meta-Learning,Sam Spilsbury,http://arxiv.org/pdf/2305.13092v1.pdf,2023-05-22,['cs.cl'],2305.13092v1.pdf,"  Meta-learning and few-shot prompting are viable methods to induce certain
types of compositional behaviour. However, these methods can be very sensitive
to the choice of support examples used. Choosing good supports from the
training data for a given test query is already a difficult problem, but in
some cases solving this may not even be enough. We consider a grounded language
learning problem (gSCAN) where good support examples for certain test splits
might not even exist in the training data, or would be infeasible to search
for. We design an agent which instead generates possible supports which are
relevant to the test query and current state of the world, then uses these
supports via meta-learning to solve the test query. We show substantially
improved performance on a previously unsolved compositional behaviour split
without a loss of performance on other splits. Further experiments show that in
this case, searching for relevant demonstrations even with an oracle function
is not sufficient to attain good performance when using meta-learning.
"
SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly  Generating Predictions and Natural Language Explanations,Jesus Solano,http://arxiv.org/pdf/2305.13235v2.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13235v2.pdf,"  Explaining the decisions of neural models is crucial for ensuring their
trustworthiness at deployment time. Using Natural Language Explanations (NLEs)
to justify a model's predictions has recently gained increasing interest.
However, this approach usually demands large datasets of human-written NLEs for
the ground-truth answers, which are expensive and potentially infeasible for
some applications. For models to generate high-quality NLEs when only a few
NLEs are available, the fine-tuning of Pre-trained Language Models (PLMs) in
conjunction with prompt-based learning has recently emerged. However, PLMs
typically have billions of parameters, making fine-tuning expensive. We propose
SparseFit, a sparse few-shot fine-tuning strategy that leverages discrete
prompts to jointly generate predictions and NLEs. We experiment with SparseFit
on the T5 model and four datasets and compare it against state-of-the-art
parameter-efficient fine-tuning techniques. We perform automatic and human
evaluations to assess the quality of the model-generated NLEs, finding that
fine-tuning only 6.8% of the model parameters leads to competitive results for
both the task performance and the quality of the NLEs.
"
Towards Legally Enforceable Hate Speech Detection for Public Forums,Chu Fei Luo,http://arxiv.org/pdf/2305.13677v2.pdf,2023-05-23,['cs.cl'],2305.13677v2.pdf,"  Hate speech causes widespread and deep-seated societal issues. Proper
enforcement of hate speech laws is key for protecting groups of people against
harmful and discriminatory language. However, determining what constitutes hate
speech is a complex task that is highly open to subjective interpretations.
Existing works do not align their systems with enforceable definitions of hate
speech, which can make their outputs inconsistent with the goals of regulators.
This research introduces a new perspective and task for enforceable hate speech
detection centred around legal definitions, and a dataset annotated on
violations of eleven possible definitions by legal experts. Given the challenge
of identifying clear, legally enforceable instances of hate speech, we augment
the dataset with expert-generated samples and an automatically mined challenge
set. We experiment with grounding the model decision in these definitions using
zero-shot and few-shot prompting. We then report results on several large
language models (LLMs). With this task definition, automatic hate speech
detection can be more closely aligned to enforceable laws, and hence assist in
more rigorous enforcement of legal protections against harmful speech in public
forums.
"
PEARL: Prompting Large Language Models to Plan and Execute Actions Over  Long Documents,Simeng Sun,http://arxiv.org/pdf/2305.14564v1.pdf,2023-05-23,['cs.cl'],2305.14564v1.pdf,"  Strategies such as chain-of-thought prompting improve the performance of
large language models (LLMs) on complex reasoning tasks by decomposing input
examples into intermediate steps. However, it remains unclear how to apply such
methods to reason over long input documents, in which both the decomposition
and the output of each intermediate step are non-trivial to obtain. In this
work, we propose PEARL, a prompting framework to improve reasoning over long
documents, which consists of three stages: action mining, plan formulation, and
plan execution. More specifically, given a question about a long document,
PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,
FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain
the answer. Each stage of PEARL is implemented via zero-shot or few-shot
prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate
PEARL on a challenging subset of the QuALITY dataset, which contains questions
that require complex reasoning over long narrative texts. PEARL outperforms
zero-shot and chain-of-thought prompting on this dataset, and ablation
experiments show that each stage of PEARL is critical to its performance.
Overall, PEARL is a first step towards leveraging LLMs to reason over long
documents.
"
Large Language Model Distillation Doesn't Need a Teacher,Ananya Harsh Jha,http://arxiv.org/pdf/2305.14864v1.pdf,2023-05-24,['cs.cl'],2305.14864v1.pdf,"  Knowledge distillation trains a smaller student model to match the output
distribution of a larger teacher to maximize the end-task performance under
computational constraints. However, existing literature on language model
distillation primarily focuses on compressing encoder-only models that are then
specialized by task-specific supervised finetuning. We need to rethink this
setup for more recent large language models with tens to hundreds of billions
of parameters. Task-specific finetuning is impractical at this scale, and model
performance is often measured using zero/few-shot prompting. Thus, in this
work, we advocate for task-agnostic zero-shot evaluated distillation for large
language models without access to end-task finetuning data. We propose a
teacher-free task-agnostic distillation method, which uses a truncated version
of the larger model for initialization, and continues pretraining this model
using a language modeling objective. Our teacher-free method shines in a
distillation regime where it is infeasible to fit both the student and teacher
into the GPU memory. Despite its simplicity, our method can effectively reduce
the model size by 50%, matching or outperforming the vanilla distillation
method on perplexity and accuracy on 13 zero-shot end-tasks while being 1.5x
more computationally efficient.
"
Revisiting non-English Text Simplification: A Unified Multilingual  Benchmark,Michael J. Ryan,http://arxiv.org/pdf/2305.15678v1.pdf,2023-05-25,"['cs.cl', 'cs.ai']",2305.15678v1.pdf,"  Recent advancements in high-quality, large-scale English resources have
pushed the frontier of English Automatic Text Simplification (ATS) research.
However, less work has been done on multilingual text simplification due to the
lack of a diverse evaluation benchmark that covers complex-simple sentence
pairs in many languages. This paper introduces the MultiSim benchmark, a
collection of 27 resources in 12 distinct languages containing over 1.7 million
complex-simple sentence pairs. This benchmark will encourage research in
developing more effective multilingual text simplification models and
evaluation metrics. Our experiments using MultiSim with pre-trained
multilingual language models reveal exciting performance improvements from
multilingual training in non-English settings. We observe strong performance
from Russian in zero-shot cross-lingual transfer to low-resource languages. We
further show that few-shot prompting with BLOOM-176b achieves comparable
quality to reference simplifications outperforming fine-tuned models in most
languages. We validate these findings through human evaluation.
"
Do GPTs Produce Less Literal Translations?,Vikas Raunak,http://arxiv.org/pdf/2305.16806v4.pdf,2023-05-26,"['cs.cl', 'cs.ai']",2305.16806v4.pdf,"  Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose
language models capable of addressing many natural language generation or
understanding tasks. On the task of Machine Translation (MT), multiple works
have investigated few-shot prompting mechanisms to elicit better translations
from LLMs. However, there has been relatively little investigation on how such
translations differ qualitatively from the translations generated by standard
Neural Machine Translation (NMT) models. In this work, we investigate these
differences in terms of the literalness of translations produced by the two
systems. Using literalness measures involving word alignment and monotonicity,
we find that translations out of English (E-X) from GPTs tend to be less
literal, while exhibiting similar or better scores on MT quality metrics. We
demonstrate that this finding is borne out in human evaluations as well. We
then show that these differences are especially pronounced when translating
sentences that contain idiomatic expressions.
"
Log Parsing: How Far Can ChatGPT Go?,Van-Hoang Le,http://arxiv.org/pdf/2306.01590v2.pdf,2023-06-02,"['cs.se', 'cs.ai']",2306.01590v2.pdf,"  Software logs play an essential role in ensuring the reliability and
maintainability of large-scale software systems, as they are often the sole
source of runtime information. Log parsing, which converts raw log messages
into structured data, is an important initial step towards downstream log
analytics. In recent studies, ChatGPT, the current cutting-edge large language
model (LLM), has been widely applied to a wide range of software engineering
tasks. However, its performance in automated log parsing remains unclear. In
this paper, we evaluate ChatGPT's ability to undertake log parsing by
addressing two research questions. (1) Can ChatGPT effectively parse logs? (2)
How does ChatGPT perform with different prompting methods? Our results show
that ChatGPT can achieve promising results for log parsing with appropriate
prompts, especially with few-shot prompting. Based on our findings, we outline
several challenges and opportunities for ChatGPT-based log parsing.
"
Large Language Model Augmented Narrative Driven Recommendations,Sheshera Mysore,http://arxiv.org/pdf/2306.02250v2.pdf,2023-06-04,"['cs.ir', 'cs.cl']",2306.02250v2.pdf,"  Narrative-driven recommendation (NDR) presents an information access problem
where users solicit recommendations with verbose descriptions of their
preferences and context, for example, travelers soliciting recommendations for
points of interest while describing their likes/dislikes and travel
circumstances. These requests are increasingly important with the rise of
natural language-based conversational interfaces for search and recommendation
systems. However, NDR lacks abundant training data for models, and current
platforms commonly do not support these requests. Fortunately, classical
user-item interaction datasets contain rich textual data, e.g., reviews, which
often describe user preferences and context - this may be used to bootstrap
training for NDR models. In this work, we explore using large language models
(LLMs) for data augmentation to train NDR models. We use LLMs for authoring
synthetic narrative queries from user-item interactions with few-shot prompting
and train retrieval models for NDR on synthetic queries and user-item
interaction data. Our experiments demonstrate that this is an effective
strategy for training small-parameter retrieval models that outperform other
retrieval and LLM baselines for narrative-driven recommendation.
"
Enhancing In-Context Learning with Answer Feedback for Multi-Span  Question Answering,Zixian Huang,http://arxiv.org/pdf/2306.04508v1.pdf,2023-06-07,"['cs.cl', 'cs.ai']",2306.04508v1.pdf,"  Whereas the recent emergence of large language models (LLMs) like ChatGPT has
exhibited impressive general performance, it still has a large gap with
fully-supervised models on specific tasks such as multi-span question
answering. Previous research found that in-context learning is an effective
approach to exploiting LLMs, by using a small amount of task-related labeled data as
demonstration examples to construct a few-shot prompt for answering new
questions. A popular implementation is to concatenate a few questions and their
correct answers through simple templates, informing LLM of the desired output.
In this paper, we propose a novel way of employing labeled data such that it
also informs LLM of some undesired output, by extending demonstration examples
with feedback about answers predicted by an off-the-shelf model, e.g., correct,
incorrect, or incomplete. Experiments on three multi-span question answering
datasets as well as a keyphrase extraction dataset show that our new prompting
strategy consistently improves LLM's in-context learning performance.
"
Product Information Extraction using ChatGPT,Alexander Brinkmann,http://arxiv.org/pdf/2306.14921v1.pdf,2023-06-23,"['cs.cl', 'cs.ir']",2306.14921v1.pdf,"  Structured product data in the form of attribute/value pairs is the
foundation of many e-commerce applications such as faceted product search,
product comparison, and product recommendation. Product offers often only
contain textual descriptions of the product attributes in the form of titles or
free text. Hence, extracting attribute/value pairs from textual product
descriptions is an essential enabler for e-commerce applications. In order to
excel, state-of-the-art product information extraction methods require large
quantities of task-specific training data. The methods also struggle with
generalizing to out-of-distribution attributes and attribute values that were
not a part of the training data. Due to being pre-trained on huge amounts of
text as well as due to emergent effects resulting from the model size, Large
Language Models like ChatGPT have the potential to address both of these
shortcomings. This paper explores the potential of ChatGPT for extracting
attribute/value pairs from product descriptions. We experiment with different
zero-shot and few-shot prompt designs. Our results show that ChatGPT achieves a
performance similar to a pre-trained language model but requires much smaller
amounts of training data and computation for fine-tuning.
"
SummQA at MEDIQA-Chat 2023:In-Context Learning with GPT-4 for Medical  Summarization,Yash Mathur,http://arxiv.org/pdf/2306.17384v1.pdf,2023-06-30,['cs.cl'],2306.17384v1.pdf,"  Medical dialogue summarization is challenging due to the unstructured nature
of medical conversations, the use of medical terminology in gold summaries, and
the need to identify key information across multiple symptom sets. We present a
novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA
2023 Shared Task. Our approach for section-wise summarization (Task A) is a
two-stage process of selecting semantically similar dialogues and using the
top-k similar dialogues as in-context examples for GPT-4. For full-note
summarization (Task B), we use a similar solution with k=1. We achieved 3rd
place in Task A (2nd among all teams), 4th place in Task B Division Wise
Summarization (2nd among all teams), 15th place in Task A Section Header
Classification (9th among all teams), and 8th place among all teams in Task B.
Our results highlight the effectiveness of few-shot prompting for this task,
though we also identify several weaknesses of prompting-based approaches. We
compare GPT-4 performance with several finetuned baselines. We find that GPT-4
summaries are more abstractive and shorter. We make our code publicly
available.
"
Building Cooperative Embodied Agents Modularly with Large Language  Models,Hongxin Zhang,http://arxiv.org/pdf/2307.02485v1.pdf,2023-07-05,"['cs.ai', 'cs.cl', 'cs.cv']",2307.02485v1.pdf,"  Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.
"
MultiQG-TI: Towards Question Generation from Multi-modal Sources,Zichao Wang,http://arxiv.org/pdf/2307.04643v1.pdf,2023-07-07,"['cs.cl', 'cs.ai']",2307.04643v1.pdf,"  We study the new problem of automatic question generation (QG) from
multi-modal sources containing images and texts, significantly expanding the
scope of most of the existing work that focuses exclusively on QG from only
textual sources. We propose a simple solution for our new problem, called
MultiQG-TI, which enables a text-only question generator to process visual
input in addition to textual input. Specifically, we leverage an image-to-text
model and an optical character recognition model to obtain the textual
description of the image and extract any texts in the image, respectively, and
then feed them together with the input texts to the question generator. We only
fine-tune the question generator while keeping the other components fixed. On
the challenging ScienceQA dataset, we demonstrate that MultiQG-TI significantly
outperforms ChatGPT with few-shot prompting, despite having hundreds of times
fewer trainable parameters. Additional analyses empirically confirm the necessity of
both visual and textual signals for QG and show the impact of various modeling
choices.
"
Why Is Prompt Tuning for Vision-Language Models Robust to Noisy Labels?,Cheng-En Wu,http://arxiv.org/pdf/2307.11978v1.pdf,2023-07-22,"['cs.cv', 'cs.ai', 'cs.lg']",2307.11978v1.pdf,"  Vision-language models such as CLIP learn a generic text-image embedding from
large-scale training data. A vision-language model can be adapted to a new
classification task through few-shot prompt tuning. We find that such a prompt
tuning process is highly robust to label noises. This intrigues us to study the
key reasons contributing to the robustness of the prompt tuning paradigm. We
conducted extensive experiments to explore this property and find the key
factors are: 1) the fixed classname tokens provide a strong regularization to
the optimization of the model, reducing gradients induced by the noisy samples;
2) the powerful pre-trained image-text embedding that is learned from diverse
and generic web data provides strong prior knowledge for image classification.
Further, we demonstrate that noisy zero-shot predictions from CLIP can be used
to tune its own prompt, significantly enhancing prediction accuracy in the
unsupervised setting. The code is available at https://github.com/CEWu/PTNL.
"
Analyzing Chain-of-Thought Prompting in Large Language Models via  Gradient-based Feature Attributions,Skyler Wu,http://arxiv.org/pdf/2307.13339v1.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13339v1.pdf,"  Chain-of-thought (CoT) prompting has been shown to empirically improve the
accuracy of large language models (LLMs) on various question answering tasks.
While understanding why CoT prompting is effective is crucial to ensuring that
this phenomenon is a consequence of desired model behavior, little work has
addressed this; nonetheless, such an understanding is a critical prerequisite
for responsible model deployment. We address this question by leveraging
gradient-based feature attribution methods which produce saliency scores that
capture the influence of input tokens on model output. Specifically, we probe
several open-source LLMs to investigate whether CoT prompting affects the
relative importances they assign to particular input tokens. Our results
indicate that while CoT prompting does not increase the magnitude of saliency
scores attributed to semantically relevant tokens in the prompt compared to
standard few-shot prompting, it increases the robustness of saliency scores to
question perturbations and variations in model output.
"
Low-Parameter Federated Learning with Large Language Models,Jingang Jiang,http://arxiv.org/pdf/2307.13896v1.pdf,2023-07-26,['cs.dc'],2307.13896v1.pdf,"  We study few-shot Natural Language Understanding (NLU) tasks with Large
Language Models (LLMs) in federated learning (FL) scenarios. It is a
challenging task due to limited labeled data and communication capacities in
FL, especially with mobile devices. Recent studies show LLMs can be prompted to
perform few-shot NLU tasks like sentiment analysis and arithmetic reasoning.
However, the huge sizes of LLMs result in high computation and communication
costs, making classical FL schemes impractical. To address these challenges, we
propose Low-Parameter Federated Learning (LP-FL). LP-FL combines few-shot
prompt learning from LLMs with efficient communication and federating
techniques. Our approach enables federated clients to assign soft labels to
unlabeled data using gradually learned knowledge from the global model. Through
iterative soft-label assigning, we continually expand the labeled set during
the FL process. Additionally, to reduce computation and communication costs,
LP-FL utilizes the Low-Rank Adaptation (LoRA) technique for compact learnable
parameter construction, efficient local model fine-tuning, and affordable
global model federation. LP-FL consistently outperforms Full-Parameter
Federated Learning (FP-FL) in sentiment analysis tasks across various FL
settings. Its resistance to overfitting allows LP-FL to equal or surpass
centralized training in few-shot scenarios.
"
Large Language Model Prompt Chaining for Long Legal Document  Classification,Dietrich Trautmann,http://arxiv.org/pdf/2308.04138v1.pdf,2023-08-08,['cs.cl'],2308.04138v1.pdf,"  Prompting is used to guide or steer a language model in generating an
appropriate response that is consistent with the desired outcome. Chaining is a
strategy used to decompose complex tasks into smaller, manageable components.
In this study, we utilize prompt chaining for extensive legal document
classification tasks, which present difficulties due to their intricate
domain-specific language and considerable length. Our approach begins with the
creation of a concise summary of the original document, followed by a semantic
search for related exemplar texts and their corresponding annotations from a
training corpus. Finally, we prompt for the task-specific label to assign,
leveraging in-context learning from the few-shot prompt. We
demonstrate that through prompt chaining, we can not only enhance the
performance over zero-shot, but also surpass the micro-F1 score achieved by
larger models, such as ChatGPT zero-shot, using smaller models.
"
FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for  Large Language Models,Liwen Zhang,http://arxiv.org/pdf/2308.09975v1.pdf,2023-08-19,['cs.cl'],2308.09975v1.pdf,"  Large language models (LLMs) have demonstrated exceptional performance in
various natural language processing tasks, yet their efficacy in more
challenging and domain-specific tasks remains largely unexplored. This paper
presents FinEval, a benchmark specifically designed for the financial domain
knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice
questions covering Finance, Economy, Accounting, and Certificate. It includes
4,661 questions spanning 34 different academic subjects. To ensure a
comprehensive model performance evaluation, FinEval employs a range of prompt
types, including zero-shot and few-shot prompts, as well as answer-only and
chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs
on FinEval, the results show that only GPT-4 achieved an accuracy close to 70%
in different prompt settings, indicating significant growth potential for LLMs
in the financial domain knowledge. Our work offers a more comprehensive
financial knowledge evaluation benchmark, utilizing data from mock exams and
covering a wide range of evaluated LLMs.
"
Diversity Measures: Domain-Independent Proxies for Failure in Language  Model Queries,Noel Ngu,http://arxiv.org/pdf/2308.11189v1.pdf,2023-08-22,"['cs.cl', 'cs.ai', 'cs.lg']",2308.11189v1.pdf,"  Error prediction in large language models often relies on domain-specific
information. In this paper, we present measures for quantification of error in
the response of a large language model based on the diversity of responses to a
given prompt - hence independent of the underlying application. We describe how
three such measures - based on entropy, Gini impurity, and centroid distance -
can be employed. We perform a suite of experiments on multiple datasets and
temperature settings to demonstrate that these measures strongly correlate with
the probability of failure. Additionally, we present empirical results
demonstrating how these measures can be applied to few-shot prompting,
chain-of-thought reasoning, and error detection.
"
Evaluating Large Language Models on Graphs: Performance Insights and  Comparative Analysis,Chang Liu,http://arxiv.org/pdf/2308.11224v2.pdf,2023-08-22,"['cs.ai', 'cs.cl']",2308.11224v2.pdf,"  Large Language Models (LLMs) have garnered considerable interest within both
academia and industry. Yet, the application of LLMs to graph data remains
under-explored. In this study, we evaluate the capabilities of four LLMs in
addressing several analytical problems with graph data. We employ four distinct
evaluation metrics: Comprehension, Correctness, Fidelity, and Rectification.
Our results show that: 1) LLMs effectively comprehend graph data in natural
language and reason with graph topology. 2) GPT models can generate logical and
coherent results, outperforming alternatives in correctness. 3) All examined
LLMs face challenges in structural reasoning, with techniques like zero-shot
chain-of-thought and few-shot prompting showing diminished efficacy. 4) GPT
models often produce erroneous answers in multi-answer tasks, raising concerns
in fidelity. 5) GPT models exhibit elevated confidence in their outputs,
potentially hindering their rectification capacities. Notably, GPT-4 has
demonstrated the capacity to rectify responses from GPT-3.5-turbo and its own
previous iterations. The code is available at:
https://github.com/Ayame1006/LLMtoGraph.
"
Prompt2Model: Generating Deployable Models from Natural Language  Instructions,Vijay Viswanathan,http://arxiv.org/pdf/2308.12261v1.pdf,2023-08-23,['cs.cl'],2308.12261v1.pdf,"  Large language models (LLMs) enable system builders today to create competent
NLP systems through prompting, where they only need to describe the task in
natural language and provide a few examples. However, in other ways, LLMs are a
step backward from traditional special-purpose NLP models; they require
extensive computational resources for deployment and can be gated behind APIs.
In this paper, we propose Prompt2Model, a general-purpose method that takes a
natural language task description like the prompts provided to LLMs, and uses
it to train a special-purpose model that is conducive to deployment. This is
done through a multi-step process of retrieval of existing datasets and
pretrained models, dataset generation using LLMs, and supervised fine-tuning on
these retrieved and generated datasets. Over three tasks, we demonstrate that
given the same few-shot prompt as input, Prompt2Model trains models that
outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20%
while being up to 700 times smaller. We also show that this data can be used to
obtain reliable estimates of model performance, enabling model
developers to assess model reliability before deployment. Prompt2Model is
available open-source at https://github.com/neulab/prompt2model.
"
Prompt a Robot to Walk with Large Language Models,Yen-Jen Wang,http://arxiv.org/pdf/2309.09969v1.pdf,2023-09-18,"['cs.ro', 'cs.lg', 'cs.sy', 'eess.sy']",2309.09969v1.pdf,"  Large language models (LLMs) pre-trained on vast internet-scale data have
showcased remarkable capabilities across diverse domains. Recently, there has
been escalating interest in deploying LLMs for robotics, aiming to harness the
power of foundation models in real-world settings. However, this approach faces
significant challenges, particularly in grounding these models in the physical
world and in generating dynamic robot motions. To address these issues, we
introduce a novel paradigm in which we use few-shot prompts collected from the
physical environment, enabling the LLM to autoregressively generate low-level
control commands for robots without task-specific fine-tuning. Experiments
across various robots and environments validate that our method can effectively
prompt a robot to walk. We thus illustrate how LLMs can proficiently function
as low-level feedback controllers for dynamic motion control even in
high-dimensional robotic systems. The project website and source code can be
found at: https://prompt2walk.github.io/ .
"
SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language  Models,Shyam Sundar Kannan,http://arxiv.org/pdf/2309.10062v1.pdf,2023-09-18,['cs.ro'],2309.10062v1.pdf,"  In this work, we introduce SMART-LLM, an innovative framework designed for
embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task
Planning using Large Language Models (LLMs), harnesses the power of LLMs to
convert high-level task instructions provided as input into a multi-robot task
plan. It accomplishes this by executing a series of stages, including task
decomposition, coalition formation, and task allocation, all guided by
programmatic LLM prompts within the few-shot prompting paradigm. We create a
benchmark dataset designed for validating the multi-robot task planning
problem, encompassing four distinct categories of high-level instructions that
vary in task complexity. Our evaluation experiments span both simulation and
real-world scenarios, demonstrating that the proposed model can achieve
promising results for generating multi-robot task plans. The experimental
videos, code, and datasets from the work can be found at
https://sites.google.com/view/smart-llm/.
"
EchoPrompt: Instructing the Model to Rephrase Queries for Improved  In-context Learning,Rajasekhar Reddy Mekala,http://arxiv.org/pdf/2309.10687v2.pdf,2023-09-16,['cs.cl'],2309.10687v2.pdf,"  Language models are achieving impressive performance on various tasks by
aggressively adopting inference-time prompting techniques, such as zero-shot
and few-shot prompting. In this work, we introduce EchoPrompt, a simple yet
effective approach that prompts the model to rephrase its queries before
answering them. EchoPrompt is adapted for both zero-shot and few-shot
in-context learning with standard and chain-of-thought prompting. Experimental
results show that EchoPrompt yields substantial improvements across all these
settings for four families of causal language models. These improvements are
observed across various numerical reasoning (e.g. GSM8K, SVAMP), reading
comprehension (e.g. DROP), and logical reasoning (e.g. Coin Flipping) tasks. On
average, EchoPrompt improves the Zero-shot-CoT performance of code-davinci-002
by 5% in numerical tasks and 13% in reading comprehension tasks. We investigate
the factors contributing to EchoPrompt's effectiveness through ablation
studies, which reveal that both the original query and the model-generated
rephrased version are instrumental in its performance gains. Our empirical
results indicate that EchoPrompt is an effective technique that enhances
in-context learning performance. We recommend incorporating EchoPrompt into
various baseline prompting strategies to achieve performance boosts.
"
Self-Explanation Prompting Improves Dialogue Understanding in Large  Language Models,Haoyu Gao,http://arxiv.org/pdf/2309.12940v1.pdf,2023-09-22,"['cs.cl', 'cs.ai']",2309.12940v1.pdf,"  Task-oriented dialogue (TOD) systems facilitate users in executing various
activities via multi-turn dialogues, but Large Language Models (LLMs) often
struggle to comprehend these intricate contexts. In this study, we propose a
novel ""Self-Explanation"" prompting strategy to enhance the comprehension
abilities of LLMs in multi-turn dialogues. This task-agnostic approach requires
the model to analyze each dialogue utterance before task execution, thereby
improving performance across various dialogue-centric tasks. Experimental
results from six benchmark datasets confirm that our method consistently
outperforms other zero-shot prompts and matches or exceeds the efficacy of
few-shot prompts, demonstrating its potential as a powerful tool in enhancing
LLMs' comprehension in complex dialogue tasks.
"
Language Models as Knowledge Bases for Visual Word Sense Disambiguation,Anastasia Kritharoula,http://arxiv.org/pdf/2310.01960v1.pdf,2023-10-03,"['cs.cl', 'cs.ai']",2310.01960v1.pdf,"  Visual Word Sense Disambiguation (VWSD) is a novel challenging task that lies
between linguistic sense disambiguation and fine-grained multimodal retrieval.
The recent advancements in the development of visiolinguistic (VL) transformers
suggest some off-the-shelf implementations with encouraging results, which,
however, we argue can be further improved. To this end, we propose some
knowledge-enhancement techniques towards improving the retrieval performance of
VL transformers via the usage of Large Language Models (LLMs) as Knowledge
Bases. More specifically, knowledge stored in LLMs is retrieved with the help
of appropriate prompts in a zero-shot manner, achieving performance
advancements. Moreover, we convert VWSD to a purely textual question-answering
(QA) problem by considering generated image captions as multiple-choice
candidate answers. Zero-shot and few-shot prompting strategies are leveraged to
explore the potential of such a transformation, while Chain-of-Thought (CoT)
prompting in the zero-shot setting is able to reveal the internal reasoning
steps an LLM follows to select the appropriate candidate. In total, our
presented approach is the first one to analyze the merits of exploiting
knowledge stored in LLMs in different ways to solve VWSD.
"
Can Large Language Models be Good Path Planners? A Benchmark and  Investigation on Spatial-temporal Reasoning,Mohamed Aghzal,http://arxiv.org/pdf/2310.03249v1.pdf,2023-10-05,['cs.cl'],2310.03249v1.pdf,"  Large language models (LLMs) have achieved remarkable success across a wide
spectrum of tasks; however, they still face limitations in scenarios that
demand long-term planning and spatial reasoning. To facilitate this line of
research, in this work, we propose a new benchmark, termed $\textbf{P}$ath
$\textbf{P}$lanning from $\textbf{N}$atural $\textbf{L}$anguage
($\textbf{PPNL}$). Our benchmark evaluates LLMs' spatial-temporal reasoning by
formulating ''path planning'' tasks that require an LLM to navigate to target
locations while avoiding obstacles and adhering to constraints. Leveraging this
benchmark, we systematically investigate LLMs including GPT-4 via different
few-shot prompting methodologies and BART and T5 of various sizes via
fine-tuning. Our experimental results show the promise of few-shot GPT-4 in
spatial reasoning when it is prompted to reason and act in an interleaved
manner, although it still struggles with long-term temporal reasoning. In contrast,
while fine-tuned LLMs achieved impressive results on in-distribution reasoning
tasks, they struggled to generalize to larger environments or environments with
more obstacles.
"
Towards Informative Few-Shot Prompt with Maximum Information Gain for  In-Context Learning,Hongfu Liu,http://arxiv.org/pdf/2310.08923v1.pdf,2023-10-13,['cs.cl'],2310.08923v1.pdf,"  Large Language models (LLMs) possess the capability to engage In-context
Learning (ICL) by leveraging a few demonstrations pertaining to a new
downstream task as conditions. However, this particular learning paradigm
suffers from high instability stemming from substantial variances induced by
factors such as the input distribution of selected examples, their ordering,
and prompt formats. In this work, we demonstrate that even when all these
factors are held constant, the random selection of examples still results in
high variance. Consequently, we aim to explore the informativeness of data
examples by quantifying the Information Gain (IG) obtained in prediction after
observing a given example candidate. Then we propose to sample those with
maximum IG. Additionally, we identify the presence of template bias, which can
lead to unfair evaluations of IG during the sampling process. To mitigate this
bias, we introduce a Calibration Before Sampling strategy. The experimental
results illustrate that our proposed method can yield an average relative
improvement of 14.3% across six classification tasks using three LLMs.
"
Ecologically Valid Explanations for Label Variation in NLI,Nan-Jiang Jiang,http://arxiv.org/pdf/2310.13850v1.pdf,2023-10-20,['cs.cl'],2310.13850v1.pdf,"  Human label variation, or annotation disagreement, exists in many natural
language processing (NLP) tasks, including natural language inference (NLI). To
gain direct evidence of how NLI label variation arises, we build LiveNLI, an
English dataset of 1,415 ecologically valid explanations (annotators explain
the NLI labels they chose) for 122 MNLI items (at least 10 explanations per
item). The LiveNLI explanations confirm that people can systematically vary on
their interpretation and highlight within-label variation: annotators sometimes
choose the same label for different reasons. This suggests that explanations
are crucial for navigating label interpretations in general. We few-shot prompt
large language models to generate explanations, but the results are
inconsistent: the models sometimes produce valid and informative explanations,
but they also generate implausible ones that do not support the label,
highlighting directions for improvement.
"
API-Assisted Code Generation for Question Answering on Varied Table  Structures,Yihan Cao,http://arxiv.org/pdf/2310.14687v1.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.14687v1.pdf,"  A persistent challenge to table question answering (TableQA) by generating
executable programs has been adapting to varied table structures, typically
requiring domain-specific logical forms. In response, this paper introduces a
unified TableQA framework that: (1) provides a unified representation for
structured tables as multi-index Pandas data frames, (2) uses Python as a
powerful querying language, and (3) uses few-shot prompting to translate NL
questions into Python programs, which are executable on Pandas data frames.
Furthermore, to answer complex relational questions with extended program
functionality and external knowledge, our framework allows customized APIs that
Python programs can call. We experiment with four TableQA datasets that involve
tables of different structures -- relational, multi-table, and hierarchical
matrix shapes -- and achieve prominent improvements over past state-of-the-art
systems. In ablation studies, we (1) show benefits from our multi-index
representation and APIs over baselines that use only an LLM, and (2)
demonstrate that our approach is modular and can incorporate additional APIs.
"
Tree of Clarifications: Answering Ambiguous Questions with  Retrieval-Augmented Large Language Models,Gangwoo Kim,http://arxiv.org/pdf/2310.14696v1.pdf,2023-10-23,['cs.cl'],2310.14696v1.pdf,"  Questions in open-domain question answering are often ambiguous, allowing
multiple interpretations. One approach to handling them is to identify all
possible interpretations of the ambiguous question (AQ) and to generate a
long-form answer addressing them all, as suggested by Stelmakh et al. (2022).
While it provides a comprehensive response without bothering the user for
clarification, considering multiple dimensions of ambiguity and gathering
corresponding knowledge remains a challenge. To cope with the challenge, we
propose a novel framework, Tree of Clarifications (ToC): It recursively
constructs a tree of disambiguations for the AQ -- via few-shot prompting
leveraging external knowledge -- and uses it to generate a long-form answer.
ToC outperforms existing baselines on ASQA in a few-shot setup across the
metrics, while surpassing fully-supervised baselines trained on the whole
training set in terms of Disambig-F1 and Disambig-ROUGE. Code is available at
https://github.com/gankim/tree-of-clarifications.
"
Dissecting In-Context Learning of Translations in GPTs,Vikas Raunak,http://arxiv.org/pdf/2310.15987v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.15987v1.pdf,"  Most of the recent work in leveraging Large Language Models (LLMs) such as
GPT-3 for Machine Translation (MT) has focused on selecting the few-shot
samples for prompting. In this work, we try to better understand the role of
demonstration attributes for the in-context learning of translations through
perturbations of high-quality, in-domain demonstrations. We find that
asymmetric perturbation of the source-target mappings yields vastly different
results. We show that the perturbation of the source side has surprisingly
little impact, while target perturbation can drastically reduce translation
quality, suggesting that it is the output text distribution that provides the
most important learning signal during in-context learning of translations. We
propose a method named Zero-Shot-Context to add this signal automatically in
Zero-Shot prompting. We demonstrate that it improves upon the zero-shot
translation performance of GPT-3, even making it competitive with few-shot
prompted translations.
"
Extraction of Atypical Aspects from Customer Reviews: Datasets and  Experiments with Language Models,Smita Nannaware,http://arxiv.org/pdf/2311.02702v1.pdf,2023-11-05,"['cs.cl', 'cs.ai']",2311.02702v1.pdf,"  A restaurant dinner may become a memorable experience due to an unexpected
aspect enjoyed by the customer, such as an origami-making station in the
waiting area. If aspects that are atypical for a restaurant experience were
known in advance, they could be leveraged to make recommendations that have the
potential to engender serendipitous experiences, further increasing user
satisfaction. Although relatively rare, whenever encountered, atypical aspects
often end up being mentioned in reviews due to their memorable quality.
Correspondingly, in this paper we introduce the task of detecting atypical
aspects in customer reviews. To facilitate the development of extraction
models, we manually annotate benchmark datasets of reviews in three domains -
restaurants, hotels, and hair salons, which we use to evaluate a number of
language models, ranging from fine-tuning the instruction-based text-to-text
transformer Flan-T5 to zero-shot and few-shot prompting of GPT-3.5.
"
SQLPrompt: In-Context Text-to-SQL with Minimal Labeled Data,Ruoxi Sun,http://arxiv.org/pdf/2311.02883v1.pdf,2023-11-06,['cs.cl'],2311.02883v1.pdf,"  Text-to-SQL aims to automate the process of generating SQL queries on a
database from natural language text. In this work, we propose ""SQLPrompt"",
tailored to improve the few-shot prompting capabilities of Text-to-SQL for
Large Language Models (LLMs). Our methods include innovative prompt design,
an execution-based consistency decoding strategy, which selects the SQL with the
most consistent execution outcome among the SQL proposals, and a method that
aims to improve performance by diversifying the SQL proposals during
consistency selection with different prompt designs (""MixPrompt"") and
foundation models (""MixLLMs""). We show that \emph{SQLPrompt} outperforms
previous approaches for in-context learning with few labeled data by a large
margin, closing the gap with finetuning state-of-the-art with thousands of
labeled data.
"
OLaLa: Ontology Matching with Large Language Models,Sven Hertling,http://arxiv.org/pdf/2311.03837v1.pdf,2023-11-07,"['cs.ir', 'cs.cl']",2311.03837v1.pdf,"  Ontology (and more generally: Knowledge Graph) Matching is a challenging task
where information in natural language is one of the most important signals to
process. With the rise of Large Language Models, it is possible to incorporate
this knowledge in a better way into the matching pipeline. A number of
decisions still need to be taken, e.g., how to generate a prompt that is useful
to the model, how information in the KG can be formulated in prompts, which
Large Language Model to choose, how to provide existing correspondences to the
model, how to generate candidates, etc. In this paper, we present a prototype
that explores these questions by applying zero-shot and few-shot prompting with
multiple open Large Language Models to different tasks of the Ontology
Alignment Evaluation Initiative (OAEI). We show that with only a handful of
examples and a well-designed prompt, it is possible to achieve results that are
on par with supervised matching systems, which use a much larger portion of the
ground truth.
"
Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation  for Open-Domain Dialogue,Lena Reed,http://arxiv.org/pdf/2110.08094v2.pdf,2021-10-15,['cs.cl'],2110.08094v2.pdf,"  One challenge with open-domain dialogue systems is the need to produce
truthful, high-quality responses on any topic. We aim to improve the quality
and coverage of Athena, an Alexa Prize dialogue system. We experiment with
few-shot prompt-based learning, comparing GPT-Neo to Jurassic-1, for the
movies, music, TV, sports, and video game domains, both within and
cross-domain, with different prompt set sizes (2, 3, 10), formats, and meaning
representations consisting of either sets of WikiData KG triples, or dialogue
acts. Our evaluation uses BLEURT and human metrics, and shows that with 10-shot
prompting, Athena-Jurassic's performance is significantly better for coherence
and semantic accuracy. Experiments with 2-shot cross-domain prompts result in
a huge performance drop for Athena-GPT-Neo, whose semantic accuracy falls to
0.41, and whose untrue hallucination rate increases to 12%. Experiments with
dialogue acts for video games show that with 10-shot prompting, both models
learn to control dialogue acts, but Athena-Jurassic has significantly higher
coherence, and only 4% untrue hallucinations. Our results suggest that
Athena-Jurassic produces high enough quality outputs to be useful in live
systems with real users. To our knowledge, these are the first results
demonstrating that few-shot semantic prompt-based learning can create NLGs that
generalize to new domains, and produce high-quality, semantically-controlled,
conversational responses directly from meaning representations.
"
Code as Policies: Language Model Programs for Embodied Control,Jacky Liang,http://arxiv.org/pdf/2209.07753v4.pdf,2022-09-16,['cs.ro'],2209.07753v4.pdf,"  Large language models (LLMs) trained on code completion have been shown to be
capable of synthesizing simple Python programs from docstrings [1]. We find
that these code-writing LLMs can be re-purposed to write robot policy code,
given natural language commands. Specifically, policy code can express
functions or feedback loops that process perception outputs (e.g., from object
detectors [2], [3]) and parameterize control primitive APIs. When provided as
input several example language commands (formatted as comments) followed by
corresponding policy code (via few-shot prompting), LLMs can take in new
commands and autonomously re-compose API calls to generate new policy code
respectively. By chaining classic logic structures and referencing third-party
libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way
can write robot policies that (i) exhibit spatial-geometric reasoning, (ii)
generalize to new instructions, and (iii) prescribe precise values (e.g.,
velocities) to ambiguous descriptions (""faster"") depending on context (i.e.,
behavioral commonsense). This paper presents code as policies: a robot-centric
formulation of language model generated programs (LMPs) that can represent
reactive policies (e.g., impedance controllers), as well as waypoint-based
policies (vision-based pick and place, trajectory-based control), demonstrated
across multiple real robot platforms. Central to our approach is prompting
hierarchical code-gen (recursively defining undefined functions), which can
write more complex code and also improves state-of-the-art to solve 39.8% of
problems on the HumanEval [1] benchmark. Code and videos are available at
https://code-as-policies.github.io
"
Spotlight: Mobile UI Understanding using Vision-Language Models with a  Focus,Gang Li,http://arxiv.org/pdf/2209.14927v4.pdf,2022-09-29,"['cs.cv', 'cs.hc', 'cs.lg']",2209.14927v4.pdf,"  Mobile UI understanding is important for enabling various interaction tasks
such as UI automation and accessibility. Previous mobile UI modeling often
depends on the view hierarchy information of a screen, which directly provides
the structural data of the UI, with the hope to bypass challenging tasks of
visual modeling from screen pixels. However, view hierarchies are not always
available, and are often corrupted with missing object descriptions or
misaligned structure information. As a result, although the use of view
hierarchies can offer short-term gains, it may ultimately hinder the
applicability and performance of the model. In this paper, we propose
Spotlight, a vision-only approach for mobile UI understanding. Specifically, we
enhance a vision-language model that only takes the screenshot of the UI and a
region of interest on the screen -- the focus -- as the input. This general
architecture of Spotlight is easily scalable and capable of performing a range
of UI modeling tasks. Our experiments show that our model establishes SoTA
results on several representative UI tasks and outperforms previous methods
that use both screenshots and view hierarchies as inputs. Furthermore, we
explore multi-task learning and few-shot prompting capacities of the proposed
models, demonstrating promising results in the multi-task learning direction.
"
Grounding Language with Visual Affordances over Unstructured Data,Oier Mees,http://arxiv.org/pdf/2210.01911v3.pdf,2022-10-04,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.lg']",2210.01911v3.pdf,"  Recent works have shown that Large Language Models (LLMs) can be applied to
ground natural language to a wide variety of robot skills. However, in
practice, learning multi-task, language-conditioned robotic skills typically
requires large-scale data collection and frequent human intervention to reset
the environment or to help correct the current policies. In this work, we
propose a novel approach to efficiently learn general-purpose
language-conditioned robot skills from unstructured, offline and reset-free
data in the real world by exploiting a self-supervised visuo-lingual affordance
model, which requires annotating as little as 1% of the total data with
language. We evaluate our method in extensive experiments both in simulated and
real-world robotic tasks, achieving state-of-the-art performance on the
challenging CALVIN benchmark and learning over 25 distinct visuomotor
manipulation tasks with a single policy in the real world. We find that when
paired with LLMs to break down abstract natural language instructions into
subgoals via few-shot prompting, our method is capable of completing
long-horizon, multi-tier tasks in the real world, while requiring an order of
magnitude less data than previous approaches. Code and videos are available at
http://hulc2.cs.uni-freiburg.de
"
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for  Vision-Language Few-Shot Prompting,Oscar Mañas,http://arxiv.org/pdf/2210.07179v2.pdf,2022-10-13,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2210.07179v2.pdf,"  Large pre-trained models have proved to be remarkable zero- and
(prompt-based) few-shot learners in unimodal vision and language tasks. We
propose MAPL, a simple and parameter-efficient method that reuses frozen
pre-trained unimodal models and leverages their strong generalization
capabilities in multimodal vision-language (VL) settings. MAPL learns a
lightweight mapping between the representation spaces of unimodal models using
aligned image-text data, and can generalize to unseen VL tasks from just a few
in-context examples. The small number of trainable parameters makes MAPL
effective at low-data and in-domain learning. Moreover, MAPL's modularity
enables easy extension to other pre-trained models. Extensive experiments on
several visual question answering and image captioning benchmarks show that
MAPL achieves superior or competitive performance compared to similar methods
while training orders of magnitude fewer parameters. MAPL can be trained in
just a few hours using modest computational resources and public datasets. We
release our code and pre-trained model weights at
https://github.com/mair-lab/mapl.
"
Model ensemble instead of prompt fusion: a sample-specific knowledge  transfer method for few-shot prompt tuning,Xiangyu Peng,http://arxiv.org/pdf/2210.12587v3.pdf,2022-10-23,['cs.cl'],2210.12587v3.pdf,"  Prompt tuning approaches, which learn task-specific soft prompts for a
downstream task conditioning on frozen pre-trained models, have attracted
growing interest due to their parameter efficiency. With large language models
and sufficient training data, prompt tuning performs comparably to full-model
tuning. However, with limited training samples in few-shot settings, prompt
tuning fails to match the performance of full-model fine-tuning. In this work,
we focus on improving the few-shot performance of prompt tuning by transferring
knowledge from soft prompts of source tasks. Recognizing the good
generalization capabilities of ensemble methods in low-data regime, we first
experiment and show that a simple ensemble of model predictions based on
different source prompts, outperforms existing multi-prompt knowledge transfer
approaches such as source prompt fusion in the few-shot setting. Motivated by
this observation, we further investigate model ensembles and propose
Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the
contribution of each source model for each target sample separately when
ensembling source model outputs. In this way, SESoM inherits the superior
generalization of model ensemble approaches and simultaneously captures the
sample-specific competence of each source prompt. We conduct experiments across
a diverse set of eight NLP tasks using models of different scales (T5-{base,
large, XL}) and find that SESoM consistently outperforms the existing models of
the same as well as larger parametric scale by a large margin.
"
Are Hard Examples also Harder to Explain? A Study with Human and  Model-Generated Explanations,Swarnadeep Saha,http://arxiv.org/pdf/2211.07517v1.pdf,2022-11-14,"['cs.cl', 'cs.ai']",2211.07517v1.pdf,"  Recent work on explainable NLP has shown that few-shot prompting can enable
large pretrained language models (LLMs) to generate grammatical and factual
natural language explanations for data labels. In this work, we study the
connection between explainability and sample hardness by investigating the
following research question - ""Are LLMs and humans equally good at explaining
data labels for both easy and hard samples?"" We answer this question by first
collecting human-written explanations in the form of generalizable commonsense
rules on the task of Winograd Schema Challenge (Winogrande dataset). We compare
these explanations with those generated by GPT-3 while varying the hardness of
the test samples as well as the in-context samples. We observe that (1) GPT-3
explanations are as grammatical as human explanations regardless of the
hardness of the test samples, (2) for easy examples, GPT-3 generates highly
supportive explanations but human explanations are more generalizable, and (3)
for hard examples, human explanations are significantly better than GPT-3
explanations both in terms of label-supportiveness and generalizability
judgements. We also find that hardness of the in-context examples impacts the
quality of GPT-3 explanations. Finally, we show that the supportiveness and
generalizability aspects of human explanations are also impacted by sample
hardness, although by a much smaller margin than models. Supporting code and
data are available at https://github.com/swarnaHub/ExplanationHardness
"
Crowd Score: A Method for the Evaluation of Jokes using Large Language  Model AI Voters as Judges,Fabricio Goes,http://arxiv.org/pdf/2212.11214v1.pdf,2022-12-21,['cs.ai'],2212.11214v1.pdf,"  This paper presents the Crowd Score, a novel method to assess the funniness
of jokes using large language models (LLMs) as AI judges. Our method relies on
inducing different personalities into the LLM and aggregating the votes of the
AI judges into a single score to rate jokes. We validate the votes using an
auditing technique that checks if the explanation for a particular vote is
reasonable using the LLM. We tested our methodology on 52 jokes in a crowd of
four AI voters with different humour types: affiliative, self-enhancing,
aggressive and self-defeating. Our results show that few-shot prompting leads
to better results than zero-shot for the voting question. Personality induction
showed that aggressive and self-defeating voters are significantly more
inclined than the affiliative and self-enhancing voters to find jokes from a
set of aggressive/self-defeating jokes funny. The Crowd Score follows the
same trend as human judges by assigning higher scores to jokes that are also
considered funnier by human judges. We believe that our methodology could be
applied to other creative domains such as story, poetry, slogans, etc. It could
both help the adoption of a flexible and accurate standard approach to comparing
different work in the CC community under a common metric and, by minimizing
human participation in assessing creative artefacts, accelerate the prototyping
of creative artefacts and reduce the cost of hiring human participants to rate
them.
"
CodeLMSec Benchmark: Systematically Evaluating and Finding Security  Vulnerabilities in Black-Box Code Language Models,Hossein Hajipour,http://arxiv.org/pdf/2302.04012v2.pdf,2023-02-08,"['cs.cr', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.se']",2302.04012v2.pdf,"  Large language models (LLMs) for automatic code generation have achieved
breakthroughs in several programming tasks. Their advances in competition-level
programming problems have made them an essential pillar of AI-assisted pair
programming, and tools such as GitHub Copilot have emerged as part of the daily
programming workflow used by millions of developers. The training data for
these models is usually collected from the Internet (e.g., from open-source
repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these
vulnerabilities and propagate them during the code generation procedure. While
these models have been extensively assessed for their ability to produce
functionally correct programs, there remains a lack of comprehensive
investigations and benchmarks addressing the security aspects of these models.
  In this work, we propose a method to systematically study the security issues
of code language models to assess their susceptibility to generating vulnerable
code. To this end, we introduce the first approach to automatically find
generated code that contains vulnerabilities in black-box code generation
models. To achieve this, we present an approach to approximate inversion of the
black-box code generation models based on few-shot prompting. We evaluate the
effectiveness of our approach by examining code language models in generating
high-risk security weaknesses. Furthermore, we establish a collection of
diverse non-secure prompts for various vulnerability scenarios using our
method. This dataset forms a benchmark for evaluating and comparing the
security weaknesses in code language models.
"
ART: Automatic multi-step reasoning and tool-use for large language  models,Bhargavi Paranjape,http://arxiv.org/pdf/2303.09014v1.pdf,2023-03-16,['cs.cl'],2303.09014v1.pdf,"  Large language models (LLMs) can perform complex reasoning in few- and
zero-shot settings by generating intermediate chain of thought (CoT) reasoning
steps. Further, each reasoning step can rely on external tools to support
computation beyond the core LLM capabilities (e.g. search/running code). Prior
work on CoT prompting and tool use typically requires hand-crafting
task-specific demonstrations and carefully scripted interleaving of model
generations with tool use. We introduce Automatic Reasoning and Tool-use (ART),
a framework that uses frozen LLMs to automatically generate intermediate
reasoning steps as a program. Given a new task to solve, ART selects
demonstrations of multi-step reasoning and tool use from a task library. At
test time, ART seamlessly pauses generation whenever external tools are called,
and integrates their output before resuming generation. ART achieves a
substantial improvement over few-shot prompting and automatic CoT on unseen
tasks in the BigBench and MMLU benchmarks, and matches performance of
hand-crafted CoT prompts on a majority of these tasks. ART is also extensible,
and makes it easy for humans to improve performance by correcting errors in
task-specific programs or incorporating new tools, which we demonstrate by
drastically improving performance on select tasks with minimal human
intervention.
"
Fairness-guided Few-shot Prompting for Large Language Models,Huan Ma,http://arxiv.org/pdf/2303.13217v3.pdf,2023-03-23,"['cs.cl', 'cs.ai']",2303.13217v3.pdf,"  Large language models have demonstrated surprising ability to perform
in-context learning, i.e., these models can be directly applied to solve
numerous downstream tasks by conditioning on a prompt constructed by a few
input-output examples. However, prior research has shown that in-context
learning can suffer from high instability due to variations in training
examples, example order, and prompt formats. Therefore, the construction of an
appropriate prompt is essential for improving the performance of in-context
learning. In this paper, we revisit this problem from the view of predictive
bias. Specifically, we introduce a metric to evaluate the predictive bias of a
fixed prompt against labels or given attributes. Then we empirically show
that prompts with higher bias always lead to unsatisfactory predictive quality.
Based on this observation, we propose a novel search strategy based on the
greedy search to identify the near-optimal prompt for improving the performance
of in-context learning. We perform comprehensive experiments with
state-of-the-art mainstream models such as GPT-3 on various downstream tasks.
Our results indicate that our method can enhance the model's in-context
learning performance in an effective and interpretable manner.
"
Is ChatGPT a Good Recommender? A Preliminary Study,Junling Liu,http://arxiv.org/pdf/2304.10149v3.pdf,2023-04-20,['cs.ir'],2304.10149v3.pdf,"  Recommendation systems have witnessed significant advancements and have been
widely used over the past decades. However, most traditional recommendation
methods are task-specific and therefore lack efficient generalization ability.
Recently, the emergence of ChatGPT has significantly advanced NLP tasks by
enhancing the capabilities of conversational models. Nonetheless, the
application of ChatGPT in the recommendation domain has not been thoroughly
investigated. In this paper, we employ ChatGPT as a general-purpose
recommendation model to explore its potential for transferring extensive
linguistic and world knowledge acquired from large-scale corpora to
recommendation scenarios. Specifically, we design a set of prompts and evaluate
ChatGPT's performance on five recommendation scenarios. Unlike traditional
recommendation methods, we do not fine-tune ChatGPT during the entire
evaluation process, relying only on the prompts themselves to convert
recommendation tasks into natural language tasks. Further, we explore the use
of few-shot prompting to inject interaction information that contains users'
potential interests to help ChatGPT better understand user needs and interests.
Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT
has achieved promising results in certain tasks and is capable of reaching the
baseline level in others. We conduct human evaluations on two
explainability-oriented tasks to more accurately evaluate the quality of
contents generated by different models. The human evaluations show that ChatGPT
can truly understand the provided information and generate clearer and more
reasonable results. We hope that our study can inspire researchers to further
explore the potential of language models like ChatGPT to improve recommendation
performance and contribute to the advancement of the recommendation systems
field.
"
Language Models Don't Always Say What They Think: Unfaithful  Explanations in Chain-of-Thought Prompting,Miles Turpin,http://arxiv.org/pdf/2305.04388v1.pdf,2023-05-07,"['cs.cl', 'cs.ai']",2305.04388v1.pdf,"  Large Language Models (LLMs) can achieve strong performance on many tasks by
producing step-by-step reasoning before giving a final output, often referred
to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT
explanations as the LLM's process for solving a task. However, we find that CoT
explanations can systematically misrepresent the true reason for a model's
prediction. We demonstrate that CoT explanations can be heavily influenced by
adding biasing features to model inputs -- e.g., by reordering the
multiple-choice options in a few-shot prompt to make the answer always ""(A)"" --
which models systematically fail to mention in their explanations. When we bias
models toward incorrect answers, they frequently generate CoT explanations
supporting those answers. This causes accuracy to drop by as much as 36% on a
suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI
and Claude 1.0 from Anthropic. On a social-bias task, model explanations
justify giving answers in line with stereotypes without mentioning the
influence of these social biases. Our findings indicate that CoT explanations
can be plausible yet misleading, which risks increasing our trust in LLMs
without guaranteeing their safety. CoT is promising for explainability, but our
results highlight the need for targeted efforts to evaluate and improve
explanation faithfulness.
"
Skill-Based Few-Shot Selection for In-Context Learning,Shengnan An,http://arxiv.org/pdf/2305.14210v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14210v2.pdf,"  In-context learning is the paradigm that adapts large language models to
downstream tasks by providing a few examples. Few-shot selection -- selecting
appropriate examples for each test instance separately -- is important for
in-context learning. In this paper, we propose Skill-KNN, a skill-based
few-shot selection method for in-context learning. The key advantages of
Skill-KNN include: (1) it addresses the problem that existing methods based on
pre-trained embeddings can be easily biased by surface natural language
features that are not important for the target task; (2) it does not require
training or fine-tuning of any models, making it suitable for frequently
expanding or changing example banks. The key insight is to optimize the inputs
fed into the embedding model, rather than tuning the model itself. Technically,
Skill-KNN generates skill-based descriptions for each test case and candidate
example via few-shot prompting in a pre-processing step, thus
eliminating unimportant surface features. Experimental results across five
cross-domain semantic parsing datasets and six backbone models show that
Skill-KNN significantly outperforms existing methods.
"
USB: A Unified Summarization Benchmark Across Tasks and Domains,Kundan Krishna,http://arxiv.org/pdf/2305.14296v1.pdf,2023-05-23,"['cs.cl', 'cs.lg']",2305.14296v1.pdf,"  An abundance of datasets exist for training and evaluating models on the task
of summary generation. However, these datasets are often derived heuristically,
and lack sufficient annotations to support research into all aspects of
summarization, such as evidence extraction and controllable summarization. We
introduce a benchmark comprising 8 tasks that require multi-dimensional
understanding of summarization, e.g., surfacing evidence for a summary,
assessing its correctness, and gauging its relevance to different topics. We
compare various methods on this benchmark and discover that on multiple tasks,
moderately-sized fine-tuned models consistently outperform much larger few-shot
prompted language models. For factuality related tasks, we also evaluate
existing heuristics to create training data and find that training on them
performs worse than training on $20\times$ less human-labeled data. Our
benchmark consists of data from 6 different domains, allowing us to study
cross-domain performance of trained models. We find that for some tasks, the
amount of training data matters more than the domain where it comes from, while
for other tasks training specifically on data from the target domain, even if
limited, is more beneficial. Our work fulfills the need for a well-annotated
summarization benchmark with diverse tasks, and provides useful insights about
the impact of the quality, size and domain of training data.
"
Self-Polish: Enhance Reasoning in Large Language Models via Problem  Refinement,Zhiheng Xi,http://arxiv.org/pdf/2305.14497v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14497v1.pdf,"  Prompting methods such as Chain-of-Thought (CoT) have shed new light on
enhancing the reasoning capabilities of large language models, and researchers
have extensively explored the generation process of rationales and answers.
However, they have overlooked the potential challenges posed by the poor
quality of reasoning problems, which may influence the reasoning performance
significantly. In this work, we propose Self-Polish (SP), a novel method that
facilitates models' problem-solving process by prompting them to
progressively refine the given problems to be more comprehensible and solvable.
Specifically, the method teaches models to eliminate irrelevant information,
rearrange the logical structure, and organize local conditions into new ones
in parallel. SP is orthogonal to all other prompting methods, making it
convenient to integrate with state-of-the-art techniques for further
improvement. We conduct thorough experiments on five benchmarks to illustrate
the effectiveness of the proposed method. For example, with Text-davinci-003,
our method boosts the performance of standard few-shot prompting by $8.0\%$ on
GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by
$6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively. Furthermore, our method
also showcases impressive performance on robustness evaluation.
"
SciFix: Outperforming GPT3 on Scientific Factual Error Correction,Dhananjay Ashok,http://arxiv.org/pdf/2305.14707v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14707v2.pdf,"  Due to the prohibitively high cost of creating error correction datasets,
most Factual Claim Correction methods rely on a powerful verification model to
guide the correction process. This leads to a significant drop in performance
in domains like scientific claims, where good verification models do not always
exist. In this work, we introduce SciFix, a scientific claim correction system
that does not require a verifier but can outperform existing methods by a
considerable margin -- achieving correction accuracy of 84% on the SciFact
dataset, 77% on SciFact-Open and 72% on the CovidFact dataset, compared to next
best accuracies of 7%, 5%, and 15% on the same datasets respectively. Our
method leverages the power of prompting with LLMs during training to create a
richly annotated dataset that can be used for fully supervised training and
regularization. We additionally use a claim-aware decoding procedure to improve
the quality of corrected claims. Our method outperforms the very LLM that was
used to generate the annotated dataset -- with Few-Shot Prompting on GPT3.5
achieving 58%, 61%, and 64% on the respective datasets, a consistently lower
correction accuracy, despite using nearly 800 times as many parameters as our
model.
"
LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and  Unlabeled Image Collections,M. Jehanzeb Mirza,http://arxiv.org/pdf/2305.18287v2.pdf,2023-05-29,"['cs.cv', 'cs.cl']",2305.18287v2.pdf,"  Recently, large-scale pre-trained Vision and Language (VL) models have set a
new state-of-the-art (SOTA) in zero-shot visual classification enabling
open-vocabulary recognition of a potentially unlimited set of categories defined
as simple language prompts. However, despite these great advances, the
performance of these zero-shot classifiers still falls short of the results of
dedicated (closed category set) classifiers trained with supervised
fine-tuning. In this paper we show, for the first time, how to reduce this gap
without any labels and without any paired VL data, using an unlabeled image
collection and a set of texts auto-generated using a Large Language Model (LLM)
describing the categories of interest and effectively substituting labeled
visual instances of those categories. Using our label-free approach, we are
able to attain significant performance improvements over the zero-shot
performance of the base VL model and other contemporary methods and baselines
on a wide variety of datasets, demonstrating absolute improvement of up to
11.7% (3.8% on average) in the label-free setting. Moreover, despite our
approach being label-free, we observe 1.3% average gains over leading few-shot
prompting baselines that do use 5-shot supervision.
"
"Better patching using LLM prompting, via Self-Consistency",Toufique Ahmed,http://arxiv.org/pdf/2306.00108v2.pdf,2023-05-31,"['cs.se', 'cs.lg']",2306.00108v2.pdf,"  Large Language models (LLMs) can be induced to solve non-trivial problems
with ""few-shot"" prompts including illustrative problem-solution examples. Now
if the few-shots also include ""chain of thought"" (CoT) explanations, which are
of the form problem-explanation-solution, LLMs will generate an ""explained""
solution, and perform even better. Recently an exciting, substantially better
technique, self-consistency [1] (S-C) has emerged, based on the intuition that
there are many plausible explanations for the right solution; when the LLM is
sampled repeatedly to generate a pool of explanation-solution pairs, for a
given problem, the most frequently occurring solutions in the pool (ignoring
the explanations) tend to be even more likely to be correct! Unfortunately, the
use of this highly-performant S-C (or even CoT) approach in software
engineering settings is hampered by the lack of explanations; most software
datasets lack explanations. In this paper, we describe an application of the
S-C approach to program repair, using the commit log on the fix as the
explanation, only in the illustrative few-shots. We achieve state-of-the-art
results, beating previous approaches to prompting-based program repair, on the
MODIT dataset; we also find evidence suggesting that the correct commit
messages are helping the LLM learn to produce better patches.
"
Large Language Models as Tax Attorneys: A Case Study in Legal  Capabilities Emergence,John J. Nay,http://arxiv.org/pdf/2306.07075v1.pdf,2023-06-12,"['cs.cl', 'cs.ai', 'cs.cy']",2306.07075v1.pdf,"  Better understanding of Large Language Models' (LLMs) legal analysis
abilities can contribute to improving the efficiency of legal services,
governing artificial intelligence, and leveraging LLMs to identify
inconsistencies in law. This paper explores LLM capabilities in applying tax
law. We choose this area of law because it has a structure that allows us to
set up automated validation pipelines across thousands of examples, requires
logical reasoning and maths skills, and enables us to test LLM capabilities in
a manner relevant to real-world economic lives of citizens and companies. Our
experiments demonstrate emerging legal understanding capabilities, with
improved performance in each subsequent OpenAI model release. We experiment
with retrieving and utilising the relevant legal authority to assess the impact
of providing additional legal context to LLMs. Few-shot prompting, presenting
examples of question-answer pairs, is also found to significantly enhance the
performance of the most advanced model, GPT-4. The findings indicate that LLMs,
particularly when combined with prompting enhancements and the correct legal
texts, can perform at high levels of accuracy but not yet at expert tax lawyer
levels. As LLMs continue to advance, their ability to reason about law
autonomously could have significant implications for the legal profession and
AI governance.
"
DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks,Caixin Kang,http://arxiv.org/pdf/2306.09124v2.pdf,2023-06-15,"['cs.cv', 'cs.ai', 'cs.cr', 'cs.lg']",2306.09124v2.pdf,"  Adversarial attacks, particularly patch attacks, pose significant threats to
the robustness and reliability of deep learning models. Developing reliable
defenses against patch attacks is crucial for real-world applications, yet
current research in this area is not satisfactory. In this paper, we propose
DIFFender, a novel defense method that leverages a text-guided diffusion model
to defend against adversarial patches. DIFFender includes two main stages:
patch localization and patch restoration. In the localization stage, we find
and exploit an intriguing property of the diffusion model to effectively
identify the locations of adversarial patches. In the restoration stage, we
employ the diffusion model to reconstruct the adversarial regions in the images
while preserving the integrity of the visual content. Importantly, these two
stages are carefully guided by a unified diffusion model, thus we can utilize
the close interaction between them to improve the whole defense performance.
Moreover, we propose a few-shot prompt-tuning algorithm to fine-tune the
diffusion model, enabling the pre-trained diffusion model to easily adapt to
the defense task. We conduct extensive experiments on the image classification
and face recognition tasks, demonstrating that our proposed method exhibits
superior robustness under strong adaptive attacks and generalizes well across
various scenarios, diverse classifiers, and multiple patch attack methods.
"
Teaching Arithmetic to Small Transformers,Nayoung Lee,http://arxiv.org/pdf/2307.03381v1.pdf,2023-07-07,['cs.lg'],2307.03381v1.pdf,"  Large language models like GPT-4 exhibit emergent capabilities across
general-purpose tasks, such as basic arithmetic, when trained on extensive text
data, even though these tasks are not explicitly encoded by the unsupervised,
next-token prediction objective. This study investigates how small
transformers, trained from random initialization, can efficiently learn
arithmetic operations such as addition, multiplication, and elementary
functions like square root, using the next-token prediction objective. We first
demonstrate that conventional training data is not the most effective for
arithmetic learning, and simple formatting changes can significantly improve
accuracy. This leads to sharp phase transitions as a function of training data
scale, which, in some cases, can be explained through connections to low-rank
matrix completion. Building on prior work, we then train on chain-of-thought
style data that includes intermediate step results. Even in the complete
absence of pretraining, this approach significantly and simultaneously improves
accuracy, sample complexity, and convergence speed. We also study the interplay
between arithmetic and text data during training and examine the effects of
few-shot prompting, pretraining, and model scale. Additionally, we discuss
length generalization challenges. Our work highlights the importance of
high-quality, instructive data that considers the particular characteristics of
the next-word prediction objective for rapidly eliciting arithmetic
capabilities.
"
Controllable Generation of Dialogue Acts for Dialogue Systems via  Few-Shot Response Generation and Ranking,Angela Ramirez,http://arxiv.org/pdf/2307.14440v1.pdf,2023-07-26,['cs.cl'],2307.14440v1.pdf,"  Dialogue systems need to produce responses that realize multiple types of
dialogue acts (DAs) with high semantic fidelity. In the past, natural language
generators (NLGs) for dialogue were trained on large parallel corpora that map
from a domain-specific DA and its semantic attributes to an output utterance.
Recent work shows that pretrained large language models (LLMs) offer new
possibilities for controllable NLG using prompt-based learning. Here we develop
a novel few-shot overgenerate-and-rank approach that achieves the controlled
generation of DAs. We compare eight few-shot prompt styles that include a novel
method of generating from textual pseudo-references using a textual style
transfer approach. We develop six automatic ranking functions that identify
outputs with both the correct DA and high semantic accuracy at generation time.
We test our approach on three domains and four LLMs. To our knowledge, this is
the first work on NLG for dialogue that automatically ranks outputs using both
DA and attribute accuracy. For completeness, we compare our results to
fine-tuned few-shot models trained with 5 to 100 instances per DA. Our results
show that several prompt settings achieve perfect DA accuracy, and near perfect
semantic accuracy (99.81%) and perform better than few-shot fine-tuning.
"
Contextual Biasing of Named-Entities with Large Language Models,Chuanneng Sun,http://arxiv.org/pdf/2309.00723v2.pdf,2023-09-01,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as', '68t10', 'i.2.7']",2309.00723v2.pdf,"  This paper studies contextual biasing with Large Language Models (LLMs),
where during second-pass rescoring additional contextual information is
provided to an LLM to boost Automatic Speech Recognition (ASR) performance. We
propose to leverage prompts for an LLM, without fine-tuning, during rescoring;
these prompts incorporate a biasing list and few-shot examples to serve as additional
information when calculating the score for the hypothesis. In addition to
few-shot prompt learning, we propose multi-task training of the LLM to predict
both the entity class and the next token. To improve the efficiency for
contextual biasing and to avoid exceeding LLMs' maximum sequence lengths, we
propose dynamic prompting, where we select the most likely class using the
class tag prediction, and only use entities in this class as contexts for next
token prediction. Word Error Rate (WER) evaluation is performed on i) an
internal calling, messaging, and dictation dataset, and ii) the SLUE-Voxpopuli
dataset. Results indicate that biasing lists and few-shot examples can achieve
17.8% and 9.6% relative improvement compared to first pass ASR, and that
multi-task training and dynamic prompting can achieve 20.0% and 11.3% relative
WER improvement, respectively.
"
MindAgent: Emergent Gaming Interaction,Ran Gong,http://arxiv.org/pdf/2309.09971v2.pdf,2023-09-18,"['cs.ai', 'cs.hc', 'cs.ma']",2309.09971v2.pdf,"  Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community still
lacks benchmarks for building general multi-agent collaboration
infrastructure that encompasses both LLM and human-NPC collaboration. In this
work, we propose a novel infrastructure - MindAgent - to evaluate the emergent
planning and coordination capabilities for gaming interaction. In particular, our
infrastructure leverages an existing gaming framework to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via appropriate instructions without fine-tuning, and iii) establish in-context
learning on few-shot prompts with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatches multi-agent collaboration
tasks and supervises multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with a new automatic metric, CoS, for calculating
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted to the broader existing Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora.
"
DSPy: Compiling Declarative Language Model Calls into Self-Improving  Pipelines,Omar Khattab,http://arxiv.org/pdf/2310.03714v1.pdf,2023-10-05,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2310.03714v1.pdf,"  The ML community is rapidly exploring techniques for prompting language
models (LMs) and for stacking them into pipelines that solve complex tasks.
Unfortunately, existing LM pipelines are typically implemented using hard-coded
""prompt templates"", i.e. lengthy strings discovered via trial and error. Toward
a more systematic approach for developing and optimizing LM pipelines, we
introduce DSPy, a programming model that abstracts LM pipelines as text
transformation graphs, i.e. imperative computational graphs where LMs are
invoked through declarative modules. DSPy modules are parameterized, meaning
they can learn (by creating and collecting demonstrations) how to apply
compositions of prompting, finetuning, augmentation, and reasoning techniques.
We design a compiler that will optimize any DSPy pipeline to maximize a given
metric. We conduct two case studies, showing that succinct DSPy programs can
express and optimize sophisticated LM pipelines that reason about math word
problems, tackle multi-hop retrieval, answer complex questions, and control
agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and
llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot
prompting (generally by over 25% and 65%, respectively) and pipelines with
expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top
of that, DSPy programs compiled to open and relatively small LMs like
770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely
on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at
https://github.com/stanfordnlp/dspy
"
InterroLang: Exploring NLP Models and Datasets through Dialogue-based  Explanations,Nils Feldhus,http://arxiv.org/pdf/2310.05592v2.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.hc']",2310.05592v2.pdf,"  While recently developed NLP explainability methods let us open the black box
in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is
an interactive tool offering a conversational interface. Such a dialogue system
can help users explore datasets and models with explanations in a
contextualized manner, e.g. via clarification or follow-up questions, and
through a natural language interface. We adapt the conversational explanation
framework TalkToModel (Slack et al., 2022) to the NLP domain, add new
NLP-specific operations such as free-text rationalization, and illustrate its
generalizability on three NLP tasks (dialogue act classification, question
answering, hate speech detection). To recognize user queries for explanations,
we evaluate fine-tuned and few-shot prompting models and implement a novel
Adapter-based approach. We then conduct two user studies on (1) the perceived
correctness and helpfulness of the dialogues, and (2) the simulatability, i.e.
how objectively helpful dialogical explanations are for humans in figuring out
the model's predicted label when it's not shown. We found rationalization and
feature attribution were helpful in explaining the model behavior. Moreover,
users could more reliably predict the model outcome based on an explanation
dialogue rather than one-off explanations.
"
FireAct: Toward Language Agent Fine-tuning,Baian Chen,http://arxiv.org/pdf/2310.05915v1.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05915v1.pdf,"  Recent efforts have augmented language models (LMs) with external tools or
environments, leading to the development of language agents that can reason and
act. However, most of these agents rely on few-shot prompting techniques with
off-the-shelf LMs. In this paper, we investigate and argue for the overlooked
direction of fine-tuning LMs to obtain language agents. Using a setup of
question answering (QA) with a Google search API, we explore a variety of base
LMs, prompting methods, fine-tuning data, and QA tasks, and find language
agents are consistently improved after fine-tuning their backbone LMs. For
example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4
leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct,
a novel approach to fine-tuning LMs with trajectories from multiple tasks and
prompting methods, and show having more diverse fine-tuning data can further
improve agents. Along with other findings regarding scaling effects,
robustness, generalization, efficiency and cost, our work establishes
comprehensive benefits of fine-tuning LMs for agents, and provides an initial
set of experimental designs, insights, as well as open questions toward
language agent fine-tuning.
"
Steering Large Language Models for Machine Translation with Finetuning  and In-Context Learning,Duarte M. Alves,http://arxiv.org/pdf/2310.13448v1.pdf,2023-10-20,['cs.cl'],2310.13448v1.pdf,"  Large language models (LLMs) are a promising avenue for machine translation
(MT). However, current LLM-based MT systems are brittle: their effectiveness
highly depends on the choice of few-shot examples and they often require extra
post-processing due to overgeneration. Alternatives such as finetuning on
translation instructions are computationally expensive and may weaken
in-context learning capabilities, due to overspecialization. In this paper, we
provide a closer look at this problem. We start by showing that adapter-based
finetuning with LoRA matches the performance of traditional finetuning while
reducing the number of training parameters by a factor of 50. This method also
outperforms few-shot prompting and eliminates the need for post-processing or
in-context examples. However, we show that finetuning generally degrades
few-shot performance, hindering adaptation capabilities. Finally, to obtain the
best of both worlds, we propose a simple approach that incorporates few-shot
examples during finetuning. Experiments on 10 language pairs show that our
proposed approach recovers the original few-shot capabilities while keeping the
added benefits of finetuning.
"
On Bilingual Lexicon Induction with Large Language Models,Yaoyiran Li,http://arxiv.org/pdf/2310.13995v1.pdf,2023-10-21,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2310.13995v1.pdf,"  Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that
still, to a large extent, relies on calculating cross-lingual word
representations. Inspired by the global paradigm shift in NLP towards Large
Language Models (LLMs), we examine the potential of the latest generation of
LLMs for the development of bilingual lexicons. We ask the following research
question: Is it possible to prompt and fine-tune multilingual LLMs (mLLMs) for
BLI, and how does this approach compare against and complement current BLI
approaches? To this end, we systematically study 1) zero-shot prompting for
unsupervised BLI and 2) few-shot in-context prompting with a set of seed
translation pairs, both without any LLM fine-tuning, as well as 3) standard
BLI-oriented fine-tuning of smaller LLMs. We experiment with 18 open-source
text-to-text mLLMs of different sizes (from 0.3B to 13B parameters) on two
standard BLI benchmarks covering a range of typologically diverse languages.
Our work is the first to demonstrate strong BLI capabilities of text-to-text
mLLMs. The results reveal that few-shot prompting with in-context examples from
nearest neighbours achieves the best performance, establishing new
state-of-the-art BLI scores for many language pairs. We also conduct a series
of in-depth analyses and ablation studies, providing more insights on BLI with
(m)LLMs, also along with their limitations.
"
An Early Evaluation of GPT-4V(ision),Yang Wu,http://arxiv.org/pdf/2310.16534v1.pdf,2023-10-25,"['cs.cl', 'cs.cv']",2310.16534v1.pdf,"  In this paper, we evaluate different abilities of GPT-4V including visual
understanding, language understanding, visual puzzle solving, and understanding
of other modalities such as depth, thermal, video, and audio. To estimate
GPT-4V's performance, we manually construct 656 test instances and carefully
evaluate the results of GPT-4V. The highlights of our findings are as follows:
(1) GPT-4V exhibits impressive performance on English visual-centric benchmarks
but fails to recognize simple Chinese texts in the images; (2) GPT-4V shows
inconsistent refusal behavior when answering questions related to sensitive
traits such as gender, race, and age; (3) GPT-4V obtains worse results than
GPT-4 (API) on language understanding tasks including general language
understanding benchmarks and visual commonsense knowledge evaluation
benchmarks; (4) Few-shot prompting can improve GPT-4V's performance on both
visual understanding and language understanding; (5) GPT-4V struggles to find
the nuances between two similar images and solve the easy math picture puzzles;
(6) GPT-4V shows non-trivial performance on the tasks of similar modalities to
image, such as video and thermal. Our experimental results reveal the ability
and limitations of GPT-4V and we hope our paper can provide some insights into
the application and research of GPT-4V.
"
"""You Are An Expert Linguistic Annotator"": Limits of LLMs as Analyzers of  Abstract Meaning Representation",Allyson Ettinger,http://arxiv.org/pdf/2310.17793v1.pdf,2023-10-26,"['cs.cl', 'cs.ai']",2310.17793v1.pdf,"  Large language models (LLMs) show amazing proficiency and fluency in the use
of language. Does this mean that they have also acquired insightful linguistic
knowledge about the language, to an extent that they can serve as an ""expert
linguistic annotator""? In this paper, we examine the successes and limitations
of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning
structure, focusing on the Abstract Meaning Representation (AMR; Banarescu et
al. 2013) parsing formalism, which provides rich graphical representations of
sentence meaning structure while abstracting away from surface forms. We
compare models' analysis of this semantic structure across two settings: 1)
direct production of AMR parses based on zero- and few-shot prompts, and 2)
indirect partial reconstruction of AMR via metalinguistic natural language
queries (e.g., ""Identify the primary event of this sentence, and the predicate
corresponding to that event.""). Across these settings, we find that models can
reliably reproduce the basic format of AMR, and can often capture core event,
argument, and modifier structure -- however, model outputs are prone to
frequent and major errors, and holistic analysis of parse acceptability shows
that even with few-shot demonstrations, models have virtually 0% success in
producing fully accurate parses. Eliciting natural language responses produces
similar patterns of errors. Overall, our findings indicate that these models
out-of-the-box can capture aspects of semantic structure, but there remain key
limitations in their ability to support fully accurate semantic analyses or
parses.
"
Style-Aware Radiology Report Generation with RadGraph and Few-Shot  Prompting,Benjamin Yan,http://arxiv.org/pdf/2310.17811v2.pdf,2023-10-26,"['cs.ai', 'cs.cl']",2310.17811v2.pdf,"  Automatically generated reports from medical images promise to improve the
workflow of radiologists. Existing methods consider an image-to-report modeling
task by directly generating a fully-fledged report from an image. However, this
conflates the content of the report (e.g., findings and their attributes) with
its style (e.g., format and choice of words), which can lead to clinically
inaccurate reports. To address this, we propose a two-step approach for
radiology report generation. First, we extract the content from an image; then,
we verbalize the extracted content into a report that matches the style of a
specific radiologist. For this, we leverage RadGraph -- a graph representation
of reports -- together with large language models (LLMs). In our quantitative
evaluations, we find that our approach leads to beneficial performance. Our
human evaluation with clinical raters highlights that the AI-generated reports
are indistinguishably tailored to the style of individual radiologists despite
leveraging only a few examples as context.
"
Multi-lingual Evaluation of Code Generation Models,Ben Athiwaratkun,http://arxiv.org/pdf/2210.14868v3.pdf,2022-10-26,"['cs.lg', 'cs.cl']",2210.14868v3.pdf,"  We present new benchmarks for evaluating code generation models: MBXP,
Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming
languages and are generated using a scalable conversion framework that
transpiles prompts and test cases from the original Python datasets into the
corresponding data in the target language. Using these benchmarks, we are able
to assess the performance of code generation models in a multi-lingual fashion,
and discover the generalization ability of language models on out-of-domain
languages, the advantages of multi-lingual models over mono-lingual ones, the ability of
few-shot prompting to teach the model new languages, and zero-shot translation
abilities even in mono-lingual settings. Furthermore, we use our code
generation model to perform large-scale bootstrapping to obtain synthetic
canonical solutions in several languages, which can be used for other
code-related evaluations such as code insertion, robustness, or summarization
tasks. Overall, our benchmarks represent a significant step towards a deeper
understanding of language models' code generation abilities. We publicly
release our code and datasets at https://github.com/amazon-research/mxeval.
"
PAL: Program-aided Language Models,Luyu Gao,http://arxiv.org/pdf/2211.10435v2.pdf,2022-11-18,"['cs.cl', 'cs.ai']",2211.10435v2.pdf,"  Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
examples at test time (""few-shot prompting""). Much of this success can be
attributed to prompting methods such as ""chain-of-thought"", which employ LLMs
for both understanding the problem description by decomposing it into steps, as
well as solving each step of the problem. While LLMs seem to be adept at this
sort of step-by-step decomposition, LLMs often make logical and arithmetic
mistakes in the solution part, even when the problem is decomposed correctly.
In this paper, we present Program-Aided Language models (PAL): a novel approach
that uses the LLM to read natural language problems and generate programs as
the intermediate reasoning steps, but offloads the solution step to a runtime
such as a Python interpreter. With PAL, decomposing the natural language
problem into runnable steps remains the only learning task for the LLM, while
solving is delegated to the interpreter. We demonstrate this synergy between a
neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and
algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all
these natural language reasoning tasks, generating code using an LLM and
reasoning using a Python interpreter leads to more accurate results than much
larger models. For example, PAL using Codex achieves state-of-the-art few-shot
accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/ .
"
Learning Performance-Improving Code Edits,Alexander Shypula,http://arxiv.org/pdf/2302.07867v4.pdf,2023-02-15,"['cs.se', 'cs.ai', 'cs.lg', 'cs.pf']",2302.07867v4.pdf,"  With the waning of Moore's law, optimizing program performance has become a
major focus of software research. However, high-level optimizations such as API
and algorithm changes remain elusive due to the difficulty of understanding the
semantics of code. Simultaneously, pretrained large language models (LLMs) have
demonstrated strong capabilities at solving a wide range of programming tasks.
To that end, we introduce a framework for adapting LLMs to high-level program
optimization. First, we curate a dataset of performance-improving edits made by
human programmers, comprising over 77K competitive C++ programming submission pairs,
accompanied by extensive unit tests. A major challenge is the significant
variability of measuring performance on commodity hardware, which can lead to
spurious ""improvements"". To isolate and reliably evaluate the impact of program
optimizations, we design an environment based on the gem5 full system
simulator, the de facto simulator used in academia and industry. Next, we
propose a broad range of adaptation strategies for code optimization; for
prompting, these include retrieval-based few-shot prompting and
chain-of-thought, and for finetuning, these include performance-conditioned
generation and synthetic data augmentation based on self-play. A combination of
these techniques achieves an average speedup of 5.65X on CodeLlama-13B and
6.86X on GPT-3.5, surpassing the best human performance (4.06X). We find our
proposed performance-conditioned generation is particularly effective at
improving performance as well as increasing the fraction of optimized programs.
"
Large Language Models for User Interest Journeys,Konstantina Christakopoulou,http://arxiv.org/pdf/2305.15498v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.ir']",2305.15498v1.pdf,"  Large language models (LLMs) have shown impressive capabilities in natural
language understanding and generation. Their potential for deeper user
understanding and improved personalized user experience on recommendation
platforms is, however, largely untapped. This paper aims to address this gap.
Recommender systems today capture users' interests through encoding their
historical activities on the platforms. The generated user representations are
hard to examine or interpret. On the other hand, if we were to ask people about
interests they pursue in their life, they might talk about their hobbies, like
I just started learning the ukulele, or their relaxation routines, e.g., I like
to watch Saturday Night Live, or I want to plant a vertical garden. We argue,
and demonstrate through extensive experiments, that LLMs as foundation models
can reason through user activities, and describe their interests in nuanced and
interesting ways, similar to how a human would.
  We define interest journeys as the persistent and overarching user interests,
in other words, the non-transient ones. These are the interests that we believe
will benefit most from the nuanced and personalized descriptions. We introduce
a framework in which we first perform personalized extraction of interest
journeys, and then summarize the extracted journeys via LLMs, using techniques
like few-shot prompting, prompt-tuning and fine-tuning. Together, our results
in prompting LLMs to name extracted user journeys in a large-scale industrial
platform demonstrate great potential of these models in providing deeper, more
interpretable, and controllable user understanding. We believe LLM powered user
understanding can be a stepping stone to entirely new user experiences on
recommendation platforms that are journey-aware, assistive, and enabling
frictionless conversation down the line.
"
Passive learning of active causal strategies in agents and language  models,Andrew Kyle Lampinen,http://arxiv.org/pdf/2305.16183v2.pdf,2023-05-25,"['cs.lg', 'cs.ai', 'cs.cl']",2305.16183v2.pdf,"  What can be learned about causality and experimentation from passive data?
This question is salient given recent successes of passively-trained language
models in interactive domains such as tool use. Passive learning is inherently
limited. However, we show that purely passive learning can in fact allow an
agent to learn generalizable strategies for determining and using causal
structures, as long as the agent can intervene at test time. We formally
illustrate that learning a strategy of first experimenting, then seeking goals,
can allow generalization from passive learning in principle. We then show
empirically that agents trained via imitation on expert data can indeed
generalize at test time to infer and use causal links which are never present
in the training data; these agents can also generalize experimentation
strategies to novel variable sets never observed in training. We then show that
strategies for causal intervention and exploitation can be generalized from
passive data even in a more complex environment with high-dimensional
observations, with the support of natural language explanations. Explanations
can even allow passive learners to generalize out-of-distribution from
perfectly-confounded training data. Finally, we show that language models,
trained only on passive next-word prediction, can generalize causal
intervention strategies from a few-shot prompt containing examples of
experimentation, together with explanations and reasoning. These results
highlight the surprising power of passive learning of active causal strategies,
and may help to understand the behaviors and capabilities of language models.
"
Tool Documentation Enables Zero-Shot Tool-Usage with Large Language  Models,Cheng-Yu Hsieh,http://arxiv.org/pdf/2308.00675v1.pdf,2023-08-01,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2308.00675v1.pdf,"  Today, large language models (LLMs) are taught to use new tools by providing
a few demonstrations of the tool's usage. Unfortunately, demonstrations are
hard to acquire, and can result in undesirable biased usage if the wrong
demonstration is chosen. Even in the rare scenario that demonstrations are
readily available, there is no principled selection protocol to determine how
many and which ones to provide. As tasks grow more complex, the selection
search grows combinatorially and invariably becomes intractable. Our work
provides an alternative to demonstrations: tool documentation. We advocate the
use of tool documentation, descriptions for the individual tool usage, over
demonstrations. We substantiate our claim through three main empirical findings
on 6 tasks across both vision and language modalities. First, on existing
benchmarks, zero-shot prompts with only tool documentation are sufficient for
eliciting proper tool usage, achieving performance on par with few-shot
prompts. Second, on a newly collected realistic tool-use dataset with hundreds
of available tool APIs, we show that tool documentation is significantly more
valuable than demonstrations, with zero-shot documentation significantly
outperforming few-shot without documentation. Third, we highlight the benefits
of tool documentation by tackling image generation and video tracking using
just-released unseen state-of-the-art models as tools. Finally, we highlight
the possibility of using tool documentation to automatically enable new
applications: by using nothing more than the documentation of GroundingDino,
Stable Diffusion, XMem, and SAM, LLMs can re-invent the functionalities of the
just-released Grounded-SAM and Track Anything models.
"
MathAttack: Attacking Large Language Models Towards Math Solving Ability,Zihao Zhou,http://arxiv.org/pdf/2309.01686v1.pdf,2023-09-04,['cs.cl'],2309.01686v1.pdf,"  With the boom of Large Language Models (LLMs), the research of solving Math
Word Problem (MWP) has recently made great progress. However, there are few
studies to examine the security of LLMs in math solving ability. Instead of
attacking prompts in the use of LLMs, we propose a MathAttack model to attack
MWP samples which are closer to the essence of security in solving math
problems. Compared to traditional text adversarial attack, it is essential to
preserve the mathematical logic of the original MWPs during the attack. To this
end, we propose logical entity recognition to identify logical entities, which
are then frozen. Subsequently, the remaining text is attacked by adopting a
word-level attacker. Furthermore, we propose a new dataset RobustMath to
evaluate the robustness of LLMs in math solving ability. Extensive experiments
on our RobustMath and two other math benchmark datasets, GSM8K and MultiArith,
show that MathAttack could effectively attack the math solving ability of LLMs.
In the experiments, we observe that (1) Our adversarial samples from
higher-accuracy LLMs are also effective for attacking LLMs with lower accuracy
(e.g., transfer from larger to smaller-size LLMs, or from few-shot to zero-shot
prompts); (2) Complex MWPs (such as more solving steps, longer text, more
numbers) are more vulnerable to attack; (3) We can improve the robustness of
LLMs by using our adversarial samples in few-shot prompts. Finally, we hope our
practice and observation can serve as an important attempt towards enhancing
the robustness of LLMs in math solving ability. We will release our code and
dataset.
"
MentaLLaMA: Interpretable Mental Health Analysis on Social Media with  Large Language Models,Kailai Yang,http://arxiv.org/pdf/2309.13567v2.pdf,2023-09-24,['cs.cl'],2309.13567v2.pdf,"  With the development of web technology, social media texts are becoming a
rich source for automatic mental health analysis. As traditional discriminative
methods bear the problem of low interpretability, the recent large language
models have been explored for interpretable mental health analysis on social
media, which aims to provide detailed explanations along with predictions. The
results show that ChatGPT can generate approaching-human explanations for its
correct classifications. However, LLMs still achieve unsatisfactory
classification performance in a zero-shot/few-shot manner. Domain-specific
finetuning is an effective solution, but faces two challenges: 1) a lack of
high-quality training data, and 2) the absence of open-source LLMs for interpretable
mental health analysis that would lower the finetuning cost. To alleviate these
problems, we build the first multi-task and multi-source interpretable mental
health instruction (IMHI) dataset on social media, with 105K data samples. The
raw social media data are collected from 10 existing sources covering 8 mental
health analysis tasks. We use expert-written few-shot prompts and collected
labels to prompt ChatGPT and obtain explanations from its responses. To ensure
the reliability of the explanations, we perform strict automatic and human
evaluations on the correctness, consistency, and quality of generated data.
Based on the IMHI dataset and LLaMA2 foundation models, we train MentalLLaMA,
the first open-source LLM series for interpretable mental health analysis with
instruction-following capability. We also evaluate the performance of
MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, where their
correctness for making predictions and the quality of explanations are
examined. The results show that MentalLLaMA approaches state-of-the-art
discriminative methods in correctness and generates high-quality explanations.
"
FreshLLMs: Refreshing Large Language Models with Search Engine  Augmentation,Tu Vu,http://arxiv.org/pdf/2310.03214v1.pdf,2023-10-05,['cs.cl'],2310.03214v1.pdf,"  Most large language models (LLMs) are trained once and never updated; thus,
they lack the ability to dynamically adapt to our ever-changing world. In this
work, we perform a detailed study of the factuality of LLM-generated text in
the context of answering questions that test current world knowledge.
Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a
diverse range of question and answer types, including questions that require
fast-changing world knowledge as well as questions with false premises that
need to be debunked. We benchmark a diverse array of both closed and
open-source LLMs under a two-mode evaluation procedure that allows us to
measure both correctness and hallucination. Through human evaluations involving
more than 50K judgments, we shed light on limitations of these models and
demonstrate significant room for improvement: for instance, all models
(regardless of model size) struggle on questions that involve fast-changing
knowledge and false premises. Motivated by these results, we present
FreshPrompt, a simple few-shot prompting method that substantially boosts the
performance of an LLM on FreshQA by incorporating relevant and up-to-date
information retrieved from a search engine into the prompt. Our experiments
show that FreshPrompt outperforms both competing search engine-augmented
prompting methods such as Self-Ask (Press et al., 2022) as well as commercial
systems such as Perplexity.AI. Further analysis of FreshPrompt reveals that
both the number of retrieved pieces of evidence and their order play a key role in
influencing the correctness of LLM-generated answers. Additionally, instructing
the LLM to generate concise and direct answers helps reduce hallucination
compared to encouraging more verbose answers. To facilitate future work, we
release FreshQA at github.com/freshllms/freshqa and commit to updating it at
regular intervals.
"
A Comprehensive Survey on Pretrained Foundation Models: A History from  BERT to ChatGPT,Ce Zhou,http://arxiv.org/pdf/2302.09419v3.pdf,2023-02-18,"['cs.ai', 'cs.cl', 'cs.lg']",2302.09419v3.pdf,"  Pretrained Foundation Models (PFMs) are regarded as the foundation for
various downstream tasks with different data modalities. A PFM (e.g., BERT,
ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable
parameter initialization for a wide range of downstream applications. BERT
learns bidirectional encoder representations from Transformers, which are
trained on large datasets as contextual language models. Similarly, the
generative pretrained transformer (GPT) method employs Transformers as the
feature extractor and is trained using an autoregressive paradigm on large
datasets. Recently, ChatGPT shows promising success on large language models,
which applies an autoregressive language model with zero-shot or few-shot
prompting. The remarkable achievements of PFM have brought significant
breakthroughs to various fields of AI. Numerous studies have proposed different
methods, raising the demand for an updated survey. This study provides a
comprehensive review of recent research advancements, challenges, and
opportunities for PFMs in text, image, graph, as well as other data modalities.
The review covers the basic components and existing pretraining methods used in
natural language processing, computer vision, and graph learning. Additionally,
it explores advanced PFMs used for different data modalities and unified PFMs
that consider data quality and quantity. The review also discusses research
related to the fundamentals of PFMs, such as model efficiency and compression,
security, and privacy. Finally, the study provides key implications, future
research directions, challenges, and open problems in the field of PFMs.
Overall, this survey aims to shed light on the research of the PFMs on
scalability, security, logical reasoning ability, cross-domain learning
ability, and the user-friendly interactive ability for artificial general
intelligence.
"
Short Answer Grading Using One-shot Prompting and Text Similarity  Scoring Model,Su-Youn Yoon,http://arxiv.org/pdf/2305.18638v1.pdf,2023-05-29,"['cs.cl', 'i.2.7']",2305.18638v1.pdf,"  In this study, we developed an automated short answer grading (ASAG) model
that provided both analytic scores and final holistic scores. Short answer
items typically consist of multiple sub-questions, and providing an analytic
score and the text span relevant to each sub-question can increase the
interpretability of the automated scores. Furthermore, they can be used to
generate actionable feedback for students. Despite these advantages, most
studies have focused on predicting only holistic scores due to the difficulty
in constructing datasets with manual annotations. To address this difficulty, we
used large language model (LLM)-based one-shot prompting and a text similarity
scoring model with domain adaptation using a small manually annotated dataset.
The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a
subset of the publicly available ASAG dataset. The model achieved a substantial
improvement over the majority baseline.
"
DePlot: One-shot visual language reasoning by plot-to-table translation,Fangyu Liu,http://arxiv.org/pdf/2212.10505v2.pdf,2022-12-20,"['cs.cl', 'cs.ai', 'cs.cv']",2212.10505v2.pdf,"  Visual language such as charts and plots is ubiquitous in the human world.
Comprehending plots and charts requires strong reasoning skills. Prior
state-of-the-art (SOTA) models require at least tens of thousands of training
examples and their reasoning capabilities are still much limited, especially on
complex human-written queries. This paper presents the first one-shot solution
to visual language reasoning. We decompose the challenge of visual language
reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over
the translated text. The key in this method is a modality conversion module,
named as DePlot, which translates the image of a plot or chart to a linearized
table. The output of DePlot can then be directly used to prompt a pretrained
large language model (LLM), exploiting the few-shot reasoning capabilities of
LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing
unified task formats and metrics, and train DePlot end-to-end on this task.
DePlot can then be used off-the-shelf together with LLMs in a plug-and-play
fashion. Compared with a SOTA model finetuned on more than 28k data points,
DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over
finetuned SOTA on human-written queries from the task of chart QA.
"
CHAI-DT: A Framework for Prompting Conversational Generative AI Agents  to Actively Participate in Co-Creation,Brandon Harwood,http://arxiv.org/pdf/2305.03852v1.pdf,2023-05-05,"['cs.hc', 'cs.ai']",2305.03852v1.pdf,"  This paper explores the potential for utilizing generative AI models in
group-focused co-creative frameworks to enhance problem solving and ideation in
business innovation and co-creation contexts, and proposes a novel prompting
technique for conversational generative AI agents which employ methods inspired
by traditional 'human-to-human' facilitation and instruction to enable active
contribution to Design Thinking, a co-creative framework. Through experiments
using this prompting technique, we gather evidence that conversational
generative transformers (i.e. ChatGPT) have the capability to contribute
context-specific, useful, and creative input into Design Thinking activities.
We also discuss the potential benefits, limitations, and risks associated with
using generative AI models in co-creative ideation and provide recommendations
for future research.
"
AceCoder: Utilizing Existing Code to Enhance Code Generation,Jia Li,http://arxiv.org/pdf/2303.17780v3.pdf,2023-03-31,"['cs.se', 'cs.ai']",2303.17780v3.pdf,"  Large Language Models (LLMs) have shown great success in code generation.
LLMs take a prompt as input and output code. A key question is how to
make prompts (i.e., Prompting Techniques). Existing prompting techniques are
designed for natural language generation and have low accuracy in code
generation.
  In this paper, we propose a new prompting technique named AceCoder. Our
motivation is that code generation meets two unique challenges (i.e.,
requirement understanding and code implementation). AceCoder contains two novel
mechanisms (i.e., guided code generation and example retrieval) to solve these
challenges. (1) Guided code generation asks LLMs first to analyze requirements
and output an intermediate preliminary (e.g., test cases). The preliminary is
used to clarify requirements and tell LLMs ""what to write"". (2) Example
retrieval selects similar programs as examples in prompts, which provide lots
of relevant content (e.g., algorithms, APIs) and teach LLMs ""how to write"". We
apply AceCoder to three LLMs (e.g., Codex) and evaluate it on three public
benchmarks using the Pass@k. Results show that AceCoder can significantly
improve the performance of LLMs on code generation. (1) In terms of Pass@1,
AceCoder outperforms the state-of-the-art baseline by up to 56.4% in MBPP,
70.7% in MBJP, and 88.4% in MBJSP. (2) AceCoder is effective in LLMs with
different sizes (i.e., 6B to 13B) and different languages (i.e., Python, Java,
and JavaScript). (3) Human evaluation shows human developers prefer programs
from AceCoder.
"
Compositional Semantic Parsing with Large Language Models,Andrew Drozdov,http://arxiv.org/pdf/2209.15003v2.pdf,2022-09-29,"['cs.cl', 'cs.ai']",2209.15003v2.pdf,"  Humans can reason compositionally when presented with new tasks. Previous
research shows that appropriate prompting techniques enable large language
models (LLMs) to solve artificial compositional generalization tasks such as
SCAN. In this work, we identify additional challenges in more realistic
semantic parsing tasks with larger vocabulary and refine these prompting
techniques to address them. Our best method is based on least-to-most
prompting: it decomposes the problem using prompting-based syntactic parsing,
then uses this decomposition to select appropriate exemplars and to
sequentially generate the semantic parse. This method allows us to set a new
state of the art for CFQ while requiring only 1% of the training data used by
traditional approaches. Due to the general nature of our approach, we expect
similar efforts will lead to new results in other tasks and domains, especially
for knowledge-intensive applications.
"
EvEntS ReaLM: Event Reasoning of Entity States via Language Models,Evangelia Spiliopoulou,http://arxiv.org/pdf/2211.05392v1.pdf,2022-11-10,['cs.cl'],2211.05392v1.pdf,"  This paper investigates models of event implications. Specifically, how well
models predict entity state-changes, by targeting their understanding of
physical attributes. Nominally, Large Language models (LLM) have been exposed
to procedural knowledge about how objects interact, yet our benchmarking shows
they fail to reason about the world. Conversely, we also demonstrate that
existing approaches often misrepresent the surprising abilities of LLMs via
improper task encodings and that proper model prompting can dramatically
improve performance of reported baseline results across multiple tasks. In
particular, our results indicate that our prompting technique is especially
useful for unseen attributes (out-of-domain) or when only limited data is
available.
"
GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4,Tom Kocmi,http://arxiv.org/pdf/2310.13988v1.pdf,2023-10-21,['cs.cl'],2310.13988v1.pdf,"  This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to
detect translation quality errors, specifically for the quality estimation
setting without the need for human reference translations. Based on the power
of large language models (LLM), GEMBA-MQM employs a fixed three-shot prompting
technique, querying the GPT-4 model to mark error quality spans. Compared to
previous works, our method has language-agnostic prompts, thus avoiding the
need for manual prompt preparation for new languages.
  While preliminary results indicate that GEMBA-MQM achieves state-of-the-art
accuracy for system ranking, we advise caution when using it in academic works
to demonstrate improvements over other methods due to its dependence on the
proprietary, black-box GPT model.
"
Utilizing Language Models for Energy Load Forecasting,Hao Xue,http://arxiv.org/pdf/2310.17788v1.pdf,2023-10-26,"['cs.ai', 'cs.cl']",2310.17788v1.pdf,"  Energy load forecasting plays a crucial role in optimizing resource
allocation and managing energy consumption in buildings and cities. In this
paper, we propose a novel approach that leverages language models for energy
load forecasting. We employ prompting techniques to convert energy consumption
data into descriptive sentences, enabling fine-tuning of language models. By
adopting an autoregressive generating approach, our proposed method enables
predictions of various horizons of future energy load consumption. Through
extensive experiments on real-world datasets, we demonstrate the effectiveness
and accuracy of our proposed method. Our results indicate that utilizing
language models for energy load forecasting holds promise for enhancing energy
efficiency and facilitating intelligent decision-making in energy systems.
"
Eliciting Topic Hierarchies from Large Language Models,Grace Li,http://arxiv.org/pdf/2310.19275v1.pdf,2023-10-30,['cs.hc'],2310.19275v1.pdf,"  Finding topics to write about can be a mentally demanding process. However,
topic hierarchies can help writers explore topics of varying levels of
specificity. In this paper, we use large language models (LLMs) to help
construct topic hierarchies. Although LLMs have access to such knowledge, it
can be difficult to elicit due to issues of specificity, scope, and repetition.
We designed and tested three different prompting techniques to find one that
maximized accuracy. We found that prepending the general topic area to a prompt
yielded the most accurate results with 85% accuracy. We discuss applications of
this research including STEM writing, education, and content creation.
"
Structured Chain-of-Thought Prompting for Code Generation,Jia Li,http://arxiv.org/pdf/2305.06599v3.pdf,2023-05-11,"['cs.se', 'cs.cl']",2305.06599v3.pdf,"  Large Language Models (LLMs) (e.g., ChatGPT) have shown impressive
performance in code generation. LLMs take prompts as inputs, and
Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique.
CoT prompting asks LLMs first to generate CoTs (i.e., intermediate natural
language reasoning steps) and then output the code. However, CoT prompting is
designed for natural language generation and has low accuracy in code
generation.
  In this paper, we propose Structured CoTs (SCoTs) and present a novel
prompting technique for code generation, named SCoT prompting. Our motivation
is that source code contains rich structural information and any code can be
composed of three program structures (i.e., sequence, branch, and loop
structures). Intuitively, structured intermediate reasoning steps make for
structured source code. Thus, we ask LLMs to use program structures to build
CoTs, obtaining SCoTs. Then, LLMs generate the final code based on SCoTs.
Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to think
about how to solve requirements from the perspective of source code, further
improving the performance of LLMs in code generation. We apply SCoT prompting to two LLMs
(i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval,
MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline
- CoT prompting by up to 13.79% in Pass@1. (2) Human evaluation shows human
developers prefer programs from SCoT prompting. (3) SCoT prompting is robust to
examples and achieves substantial improvements.
"
The Impact of AI in Physics Education: A Comprehensive Review from GCSE  to University Levels,Will Yeadon,http://arxiv.org/pdf/2309.05163v1.pdf,2023-09-10,['physics.ed-ph'],2309.05163v1.pdf,"  With the rapid evolution of Artificial Intelligence (AI), its potential
implications for higher education have become a focal point of interest. This
study delves into the capabilities of AI in Physics Education and offers
actionable AI policy recommendations. Using a Large Language Model (LLM), we
assessed its ability to answer 1337 Physics exam questions spanning GCSE,
A-Level, and Introductory University curricula. We employed various AI
prompting techniques: Zero Shot, In Context Learning, and Confirmatory
Checking, which merges Chain of Thought reasoning with Reflection. The AI's
proficiency varied across academic levels: it scored an average of 83.4% on
GCSE, 63.8% on A-Level, and 37.4% on university-level questions, with an
overall average of 59.9% using the most effective prompting technique. In a
separate test, the LLM's accuracy on 5000 mathematical operations was found to
decrease as the number of digits increased. Furthermore, when evaluated as a
marking tool, the LLM's concordance with human markers averaged at 50.8%, with
notable inaccuracies in marking straightforward questions, like
multiple-choice. Given these results, our recommendations underscore caution:
while current LLMs can consistently perform well on Physics questions at
earlier educational stages, their efficacy diminishes with advanced content and
complex calculations. LLM outputs often showcase novel methods not in the
syllabus, excessive verbosity, and miscalculations in basic arithmetic. This
suggests that at university, there's no substantial threat from LLMs for
non-invigilated Physics questions. However, given the LLMs' considerable
proficiency in writing Physics essays and coding abilities, non-invigilated
examinations of these skills in Physics are highly vulnerable to automated
completion by LLMs. This vulnerability also extends to Physics questions
pitched at lower academic levels.
"
HELP ME THINK: A Simple Prompting Strategy for Non-experts to Create  Customized Content with Models,Swaroop Mishra,http://arxiv.org/pdf/2208.08232v2.pdf,2022-08-17,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.hc', 'cs.lg']",2208.08232v2.pdf,"  Controlling the text generated by language models and customizing the content
has been a long-standing challenge. Existing prompting techniques proposed in
pursuit of providing control are task-specific and lack generality; this
provides overwhelming choices for non-expert users to find a suitable method
for their task. The effort associated with those techniques, such as in writing
examples, explanations, instructions, etc. further limits their adoption among
non-expert users. In this paper, we propose a simple prompting strategy HELP ME
THINK where we encourage GPT3 to help non-expert users by asking a set of
relevant questions and leveraging user answers to execute the task. We
demonstrate the efficacy of our technique HELP ME THINK on a variety of tasks.
Specifically, we focus on tasks that are hard for average humans and require
significant thinking to perform. We hope our work will encourage the
development of unconventional ways to harness the power of large language
models.
"
Enabling Conversational Interaction with Mobile UI using Large Language  Models,Bryan Wang,http://arxiv.org/pdf/2209.08655v2.pdf,2022-09-18,"['cs.hc', 'cs.ai']",2209.08655v2.pdf,"  Conversational agents show the promise to allow users to interact with mobile
devices using language. However, to perform diverse UI tasks with natural
language, developers typically need to create separate datasets and models for
each specific task, which is expensive and effort-consuming. Recently,
pre-trained large language models (LLMs) have been shown capable of
generalizing to various downstream tasks when prompted with a handful of
examples from the target task. This paper investigates the feasibility of
enabling versatile conversational interactions with mobile UIs using a single
LLM. We designed prompting techniques to adapt an LLM to mobile UIs. We
experimented with four important modeling tasks that address various scenarios
in conversational interaction. Our method achieved competitive performance on
these challenging tasks without requiring dedicated datasets and training,
offering a lightweight and generalizable approach to enable language-based
mobile interaction.
"
Teaching Algorithmic Reasoning via In-context Learning,Hattie Zhou,http://arxiv.org/pdf/2211.09066v1.pdf,2022-11-15,"['cs.lg', 'cs.ai', 'cs.cl']",2211.09066v1.pdf,"  Large language models (LLMs) have shown increasing in-context learning
capabilities through scaling up model and data size. Despite this progress,
LLMs are still unable to solve algorithmic reasoning problems. While providing
a rationale with the final answer has led to further improvements in multi-step
reasoning problems, Anil et al. 2022 showed that even simple algorithmic
reasoning tasks such as parity are far from solved. In this work, we identify
and study four key stages for successfully teaching algorithmic reasoning to
LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills
simultaneously (skill accumulation), (3) teaching how to combine skills (skill
composition) and (4) teaching how to use skills as tools. We show that it is
possible to teach algorithmic reasoning to LLMs via in-context learning, which
we refer to as algorithmic prompting. We evaluate our approach on a variety of
arithmetic and quantitative reasoning tasks, and demonstrate significant boosts
in performance over existing prompting techniques. In particular, for long
parity, addition, multiplication and subtraction, we achieve an error reduction
of approximately 10x, 9x, 5x and 2x respectively compared to the best available
baselines.
"
Understanding Stereotypes in Language Models: Towards Robust Measurement  and Zero-Shot Debiasing,Justus Mattern,http://arxiv.org/pdf/2212.10678v1.pdf,2022-12-20,"['cs.cl', 'cs.lg']",2212.10678v1.pdf,"  Generated texts from large pretrained language models have been shown to
exhibit a variety of harmful, human-like biases about various demographics.
These findings prompted large efforts aiming to understand and measure such
effects, with the goal of providing benchmarks that can guide the development
of techniques mitigating these stereotypical associations. However, as recent
research has pointed out, the current benchmarks lack a robust experimental
setup, consequently hindering the inference of meaningful conclusions from
their evaluation metrics. In this paper, we extend these arguments and
demonstrate that existing techniques and benchmarks aiming to measure
stereotypes tend to be inaccurate and consist of a high degree of experimental
noise that severely limits the knowledge we can gain from benchmarking language
models based on them. Accordingly, we propose a new framework for robustly
measuring and quantifying biases exhibited by generative language models.
Finally, we use this framework to investigate GPT-3's occupational gender bias
and propose prompting techniques for mitigating these biases without the need
for fine-tuning.
"
Image To Tree with Recursive Prompting,James Batten,http://arxiv.org/pdf/2301.00447v1.pdf,2023-01-01,"['cs.cv', 'cs.lg']",2301.00447v1.pdf,"  Extracting complex structures from grid-based data is a common key step in
automated medical image analysis. The conventional solution to recovering
tree-structured geometries typically involves computing the minimal cost path
through intermediate representations derived from segmentation masks. However,
this methodology has significant limitations in the context of projective
imaging of tree-structured 3D anatomical data such as coronary arteries, since
there are often overlapping branches in the 2D projection. In this work, we
propose a novel approach to predicting tree connectivity structure which
reformulates the task as an optimization problem over individual steps of a
recursive process. We design and train a two-stage model which leverages the
UNet and Transformer architectures and introduces an image-based prompting
technique. Our proposed method achieves compelling results on a pair of
synthetic datasets, and outperforms a shortest-path baseline.
"
Large Language Models Can Be Easily Distracted by Irrelevant Context,Freda Shi,http://arxiv.org/pdf/2302.00093v3.pdf,2023-01-31,"['cs.cl', 'cs.ai']",2302.00093v3.pdf,"  Large language models have achieved impressive performance on various natural
language processing tasks. However, so far they have been evaluated primarily
on benchmarks where all information in the input context is relevant for
solving the task. In this work, we investigate the distractibility of large
language models, i.e., how the model problem-solving accuracy can be influenced
by irrelevant context. In particular, we introduce Grade-School Math with
Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant
information in the problem description. We use this benchmark to measure the
distractibility of cutting-edge prompting techniques for large language models,
and find that the model performance is dramatically decreased when irrelevant
information is included. We also identify several approaches for mitigating
this deficiency, such as decoding with self-consistency and adding to the
prompt an instruction that tells the language model to ignore the irrelevant
information.
"
Synthetic Prompting: Generating Chain-of-Thought Demonstrations for  Large Language Models,Zhihong Shao,http://arxiv.org/pdf/2302.00618v1.pdf,2023-02-01,['cs.cl'],2302.00618v1.pdf,"  Large language models can perform various reasoning tasks by using
chain-of-thought prompting, which guides them to find answers through
step-by-step demonstrations. However, the quality of the prompts depends on the
demonstrations given to the models, and creating many of them by hand is
costly. We introduce Synthetic prompting, a method that leverages a few
handcrafted examples to prompt the model to generate more examples by itself,
and selects effective demonstrations to elicit better reasoning. Our method
alternates between a backward and forward process to generate new examples. The
backward process generates a question that matches a sampled reasoning chain, so
that the question is solvable and clear. The forward process produces a more
detailed reasoning chain for the question, improving the quality of the
example. We evaluate our method on numerical, symbolic, and algorithmic
reasoning tasks, and show that it outperforms existing prompting techniques.
"
Language-Specific Representation of Emotion-Concept Knowledge Causally  Supports Emotion Inference,Ming Li,http://arxiv.org/pdf/2302.09582v4.pdf,2023-02-19,"['cs.ai', 'cs.cl']",2302.09582v4.pdf,"  Understanding how language supports emotion inference remains a topic of
debate in emotion science. The present study investigated whether
language-derived emotion-concept knowledge would causally support emotion
inference by manipulating the language-specific knowledge representations in
large language models. Using the prompt technique, 14 attributes of emotion
concepts were found to be represented by distinct artificial neuron
populations. By manipulating these attribute-related neurons, the majority of
the emotion inference tasks showed performance deterioration compared to random
manipulations. The attribute-specific performance deterioration was related to
the importance of different attributes in human mental space. Our findings
provide causal evidence in support of a language-based mechanism for emotion
inference and highlight the contributions of emotion-concept knowledge.
"
MathPrompter: Mathematical Reasoning using Large Language Models,Shima Imani,http://arxiv.org/pdf/2303.05398v1.pdf,2023-03-04,"['cs.cl', 'cs.ai']",2303.05398v1.pdf,"  Large Language Models (LLMs) have limited performance when solving arithmetic
reasoning tasks and often provide incorrect answers. Unlike natural language
understanding, math problems typically have a single correct answer, making the
task of generating accurate solutions more challenging for LLMs. To the best of
our knowledge, no LLMs indicate their level of confidence in their responses,
which fuels a trust deficit in these models and impedes their adoption. To
address this deficiency, we propose `MathPrompter', a technique that improves
the performance of LLMs on arithmetic problems along with increased reliability
of the predictions. MathPrompter uses the Zero-shot
chain-of-thought prompting technique to generate multiple Algebraic expressions
or Python functions to solve the same math problem in different ways and
thereby raise the confidence level in the output results. This is in contrast
to other prompt based CoT methods, where there is no check on the validity of
the intermediate steps followed. Our technique improves over state-of-the-art
on the MultiArith dataset ($78.7\%\rightarrow92.5\%$) evaluated using 175B
parameter GPT-based LLM.
"
Zero-shot Temporal Relation Extraction with ChatGPT,Chenhan Yuan,http://arxiv.org/pdf/2304.05454v1.pdf,2023-04-11,"['cs.cl', 'cs.ai']",2304.05454v1.pdf,"  The goal of temporal relation extraction is to infer the temporal relation
between two events in the document. Supervised models are dominant in this
task. In this work, we investigate ChatGPT's ability on zero-shot temporal
relation extraction. We designed three different prompt techniques to break
down the task and evaluate ChatGPT. Our experiments show that ChatGPT's
performance has a large gap with that of supervised methods and can heavily
rely on the design of prompts. We further demonstrate that ChatGPT correctly
infers more of the small relation classes than supervised methods. The current
shortcomings of ChatGPT on temporal relation extraction are also discussed in
this paper. We found that ChatGPT cannot maintain consistency during temporal
inference and fails at long-dependency temporal inference.
"
An Empirical Study on the Robustness of the Segment Anything Model (SAM),Yuqing Wang,http://arxiv.org/pdf/2305.06422v2.pdf,2023-05-10,['cs.cv'],2305.06422v2.pdf,"  The Segment Anything Model (SAM) is a foundation model for general image
segmentation. Although it exhibits impressive performance predominantly on
natural images, understanding its robustness against various image
perturbations and domains is critical for real-world applications where such
challenges frequently arise. In this study we conduct a comprehensive
robustness investigation of SAM under diverse real-world conditions. Our
experiments encompass a wide range of image perturbations. Our experimental
results demonstrate that SAM's performance generally declines under perturbed
images, with varying degrees of vulnerability across different perturbations.
By customizing prompting techniques and leveraging domain knowledge based on
the unique characteristics of each dataset, the model's resilience to these
perturbations can be enhanced, addressing dataset-specific challenges. This
work sheds light on the limitations and strengths of SAM in real-world
applications, promoting the development of more robust and versatile image
segmentation solutions.
"
SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim  Verification on Scientific Tables,Xinyuan Lu,http://arxiv.org/pdf/2305.13186v3.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13186v3.pdf,"  Current scientific fact-checking benchmarks exhibit several shortcomings,
such as biases arising from crowd-sourced claims and an over-reliance on
text-based evidence. We present SCITAB, a challenging evaluation dataset
consisting of 1.2K expert-verified scientific claims that 1) originate from
authentic scientific publications and 2) require compositional reasoning for
verification. The claims are paired with evidence-containing scientific tables
annotated with labels. Through extensive evaluations, we demonstrate that
SCITAB poses a significant challenge to state-of-the-art models, including
table-based pretraining models and large language models. All models except
GPT-4 achieved performance barely above random guessing. Popular prompting
techniques, such as Chain-of-Thought, do not achieve significant performance gains on
SCITAB. Our analysis uncovers several unique challenges posed by SCITAB,
including table grounding, claim ambiguity, and compositional reasoning. Our
codes and data are publicly available at https://github.com/XinyuanLu00/SciTab.
"
Unraveling ChatGPT: A Critical Analysis of AI-Generated Goal-Oriented  Dialogues and Annotations,Tiziano Labruna,http://arxiv.org/pdf/2305.14556v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14556v1.pdf,"  Large pre-trained language models have exhibited unprecedented capabilities
in producing high-quality text via prompting techniques. This fact introduces
new possibilities for data collection and annotation, particularly in
situations where such data is scarce, complex to gather, expensive, or even
sensitive. In this paper, we explore the potential of these models to generate
and annotate goal-oriented dialogues, and conduct an in-depth analysis to
evaluate their quality. Our experiments employ ChatGPT, and encompass three
categories of goal-oriented dialogues (task-oriented, collaborative, and
explanatory), two generation modes (interactive and one-shot), and two
languages (English and Italian). Based on extensive human-based evaluations, we
demonstrate that the quality of generated dialogues and annotations is on par
with those generated by humans.
"
StudentEval: A Benchmark of Student-Written Prompts for Large Language  Models of Code,Hannah McLean Babe,http://arxiv.org/pdf/2306.04556v1.pdf,2023-06-07,"['cs.lg', 'cs.hc', 'cs.se']",2306.04556v1.pdf,"  Code LLMs are being rapidly deployed and there is evidence that they can make
professional programmers more productive. Current benchmarks for code
generation measure whether models generate correct programs given an expert
prompt. In this paper, we present a new benchmark containing multiple prompts
per problem, written by a specific population of non-expert prompters:
beginning programmers. StudentEval contains 1,749 prompts for 48 problems,
written by 80 students who have only completed one semester of Python
programming. Our students wrote these prompts while working interactively with
a Code LLM, and we observed very mixed success rates. We use StudentEval to
evaluate 5 Code LLMs and find that StudentEval is a better discriminator of
model performance than existing benchmarks. We analyze the prompts and find
significant variation in students' prompting techniques. We also find that
nondeterministic LLM sampling could mislead students into thinking that their
prompts are more (or less) effective than they actually are, which has
implications for how to teach with Code LLMs.
"
Knowledge-Prompted Estimator: A Novel Approach to Explainable Machine  Translation Assessment,Hao Yang,http://arxiv.org/pdf/2306.07486v1.pdf,2023-06-13,['cs.cl'],2306.07486v1.pdf,"  Cross-lingual Machine Translation (MT) quality estimation plays a crucial
role in evaluating translation performance. GEMBA, the first MT quality
assessment metric based on Large Language Models (LLMs), employs one-step
prompting to achieve state-of-the-art (SOTA) in system-level MT quality
estimation; however, it lacks segment-level analysis. In contrast,
Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering
improved reasoning and explainability. In this paper, we introduce
Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three
one-step prompting techniques, including perplexity, token-level similarity,
and sentence-level similarity. This method attains enhanced performance for
segment-level estimation compared with previous deep learning models and
one-step prompting approaches. Furthermore, supplementary experiments on
word-level visualized alignment demonstrate that our KPE method significantly
improves token alignment compared with earlier models and provides better
interpretability for MT quality estimation. Code will be released upon
publication.
"
Questioning the Survey Responses of Large Language Models,Ricardo Dominguez-Olmedo,http://arxiv.org/pdf/2306.07951v2.pdf,2023-06-13,['cs.cl'],2306.07951v2.pdf,"  As large language models increase in capability, researchers have started to
conduct surveys of all kinds on these models with varying scientific
motivations. In this work, we examine what we can learn from language models'
survey responses on the basis of the well-established American Community Survey
(ACS) by the U.S. Census Bureau. Using a de-facto standard multiple-choice
prompting technique and evaluating 40 different language models, hundreds of
thousands of times each on questions from the ACS, we systematically establish
two dominant patterns. First, models have significant position and labeling
biases, for example, towards survey responses labeled with the letter ""A"".
Second, when adjusting for labeling biases through randomized answer ordering,
models across the board trend towards uniformly random survey responses. In
fact, binary classifiers can almost perfectly differentiate between models'
responses to the ACS and the responses of the US census. Taken together, our
findings suggest caution in treating survey responses from language models as
equivalent to those of human populations at present time.
"
Investigating Prompting Techniques for Zero- and Few-Shot Visual  Question Answering,Rabiul Awal,http://arxiv.org/pdf/2306.09996v1.pdf,2023-06-16,"['cs.cv', 'cs.cl']",2306.09996v1.pdf,"  Visual question answering (VQA) is a challenging task that requires the
ability to comprehend and reason with visual information. While recent
vision-language models have made strides, they continue to struggle with
zero-shot VQA, particularly in handling complex compositional questions and
adapting to new domains i.e. knowledge-based reasoning. This paper explores the
use of various prompting strategies, focusing on the BLIP2 model, to enhance
zero-shot VQA performance. We conduct a comprehensive investigation across
several VQA datasets, examining the effectiveness of different question
templates, the role of few-shot exemplars, the impact of chain-of-thought (CoT)
reasoning, and the benefits of incorporating image captions as additional
visual cues. Despite the varied outcomes, our findings demonstrate that
carefully designed question templates and the integration of additional visual
cues, like image captions, can contribute to improved VQA performance,
especially when used in conjunction with few-shot examples. However, we also
identify a limitation in the use of chain-of-thought rationalization, which
negatively affects VQA accuracy. Our study thus provides critical insights into
the potential of prompting for improving zero-shot VQA performance.
"
Extracting Multi-valued Relations from Language Models,Sneha Singhania,http://arxiv.org/pdf/2307.03122v2.pdf,2023-07-06,['cs.cl'],2307.03122v2.pdf,"  The widespread usage of latent language representations via pre-trained
language models (LMs) suggests that they are a promising source of structured
knowledge. However, existing methods focus only on a single object per
subject-relation pair, even though often multiple objects are correct. To
overcome this limitation, we analyze these representations for their potential
to yield materialized multi-object relational knowledge. We formulate the
problem as a rank-then-select task. For ranking candidate objects, we evaluate
existing prompting techniques and propose new ones incorporating domain
knowledge. Among the selection methods, we find that choosing objects with a
likelihood above a learned relation-specific threshold gives a 49.5% F1 score.
Our results highlight the difficulty of employing LMs for the multi-valued
slot-filling task and pave the way for further research on extracting
relational knowledge from latent language representations.
"
Prompts Should not be Seen as Secrets: Systematically Measuring Prompt  Extraction Attack Success,Yiming Zhang,http://arxiv.org/pdf/2307.06865v1.pdf,2023-07-13,"['cs.cl', 'cs.ai']",2307.06865v1.pdf,"  The generations of large language models are commonly controlled through
prompting techniques, where a user's query to the model is prefixed with a
prompt that aims to guide the model's behaviour on the query. The prompts used
by companies to guide their models are often treated as secrets, to be hidden
from the user making the query. They have even been treated as commodities to
be bought and sold. However, there has been anecdotal evidence showing that the
prompts can be extracted by a user even when they are kept secret. In this
paper, we present a framework for systematically measuring the success of
prompt extraction attacks. In experiments with multiple sources of prompts and
multiple underlying language models, we find that simple text-based attacks can
in fact reveal prompts with high probability.
"
Leveraging Large Language Models to Generate Answer Set Programs,Adam Ishay,http://arxiv.org/pdf/2307.07699v1.pdf,2023-07-15,"['cs.ai', 'cs.cl', 'cs.sc']",2307.07699v1.pdf,"  Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated
exceptional performance in various natural language processing tasks and have
shown the ability to solve certain reasoning problems. However, their reasoning
capabilities are limited and relatively shallow, despite the application of
various prompting techniques. In contrast, formal logic is adept at handling
complex reasoning, but translating natural language descriptions into formal
logic is a challenging task that non-experts struggle with. This paper proposes
a neuro-symbolic method that combines the strengths of large language models
and answer set programming. Specifically, we employ an LLM to transform natural
language descriptions of logic puzzles into answer set programs. We carefully
design prompts for an LLM to convert natural language descriptions into answer
set programs in a step-by-step manner. Surprisingly, with just a few in-context
learning examples, LLMs can generate reasonably complex answer set programs.
The majority of errors made are relatively simple and can be easily corrected
by humans, thus enabling LLMs to effectively assist in the creation of answer
set programs.
"
Fixing Rust Compilation Errors using LLMs,Pantazis Deligiannis,http://arxiv.org/pdf/2308.05177v1.pdf,2023-08-09,"['cs.se', 'cs.pl']",2308.05177v1.pdf,"  The Rust programming language, with its safety guarantees, has established
itself as a viable choice for low-level systems programming language over the
traditional, unsafe alternatives like C/C++. These guarantees come from a
strong ownership-based type system, as well as primitive support for features
like closures, pattern matching, etc., that make the code more concise and
amenable to reasoning. These unique Rust features also pose a steep learning
curve for programmers.
  This paper presents a tool called RustAssistant that leverages the emergent
capabilities of Large Language Models (LLMs) to automatically suggest fixes for
Rust compilation errors. RustAssistant uses a careful combination of prompting
techniques as well as iteration with an LLM to deliver high accuracy of fixes.
RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on
real-world compilation errors in popular open-source Rust repositories. We plan
to release our dataset of Rust compilation errors to enable further research.
"
The Devil is in the Errors: Leveraging Large Language Models for  Fine-grained Machine Translation Evaluation,Patrick Fernandes,http://arxiv.org/pdf/2308.07286v1.pdf,2023-08-14,"['cs.cl', 'cs.lg']",2308.07286v1.pdf,"  Automatic evaluation of machine translation (MT) is a critical tool driving
the rapid iterative development of MT systems. While considerable progress has
been made on estimating a single scalar quality score, current metrics lack the
informativeness of more detailed schemes that annotate individual errors, such
as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap
by proposing AutoMQM, a prompting technique which leverages the reasoning and
in-context learning capabilities of large language models (LLMs) and asks them
to identify and categorize errors in translations. We start by evaluating
recent LLMs, such as PaLM and PaLM-2, through simple score prediction
prompting, and we study the impact of labeled data through in-context learning
and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that
it improves performance compared to just prompting for scores (with
particularly large gains for larger models) while providing interpretability
through error spans that align with human annotations.
"
Boosting Logical Reasoning in Large Language Models through a New  Framework: The Graph of Thought,Bin Lei,http://arxiv.org/pdf/2308.08614v1.pdf,2023-08-16,"['cs.lg', 'cs.ai', 'cs.cl']",2308.08614v1.pdf,"  Recent advancements in large-scale models, such as GPT-4, have showcased
remarkable capabilities in addressing standard queries. However, when facing
complex problems that require multi-step logical reasoning, their accuracy
dramatically decreases. Current research has explored the realm of
\textit{prompt engineering} to bolster the inferential capacities of these
models. Our paper unveils a pioneering prompting technique, dubbed
\textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating
challenges: the 24-point game, resolution of high-degree polynomial equations,
and derivation of formulas for recursive sequences, our method outperformed
GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each
respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA)
prompting method, \textit{Tree of Thought (ToT)}, our approach registered an
average accuracy boost of $23\%$, $24\%$, and $15\%$.
"
DevGPT: Studying Developer-ChatGPT Conversations,Tao Xiao,http://arxiv.org/pdf/2309.03914v1.pdf,2023-08-31,['cs.se'],2309.03914v1.pdf,"  The emergence of large language models (LLMs) such as ChatGPT has disrupted
the landscape of software development. Many studies are investigating the
quality of responses generated by ChatGPT, the efficacy of various prompting
techniques, and its comparative performance in programming contests, to name a
few examples. Yet, we know very little about how ChatGPT is actually used by
software developers. What questions do developers present to ChatGPT? What are
the dynamics of these interactions? What is the backdrop against which these
conversations are held, and how do the conversations feedback into the
artifacts of their work? To close this gap, we introduce DevGPT, a curated
dataset which encompasses 17,913 prompts and ChatGPT's responses including
11,751 code snippets, coupled with the corresponding software development
artifacts -- ranging from source code, commits, issues, pull requests, to
discussions and Hacker News threads -- to enable the analysis of the context
and implications of these developer interactions with ChatGPT.
"
Generative Speech Recognition Error Correction with Large Language  Models and Task-Activating Prompting,Chao-Han Huck Yang,http://arxiv.org/pdf/2309.15649v2.pdf,2023-09-27,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2309.15649v2.pdf,"  We explore the ability of large language models (LLMs) to act as speech
recognition post-processors that perform rescoring and error correction. Our
first focus is on instruction prompting to let LLMs perform these tasks without
fine-tuning, for which we evaluate different prompting schemes, both zero- and
few-shot in-context learning, and a novel task activation prompting method that
combines causal instructions and demonstrations to increase its context window.
Next, we show that rescoring only by in-context learning with frozen LLMs
achieves results that are competitive with rescoring by domain-tuned LMs, using
a pretrained first-pass recognition system and rescoring output on two
out-of-domain tasks (ATIS and WSJ). By combining prompting techniques with
fine-tuning we achieve error rates below the N-best oracle level, showcasing
the generalization power of the LLMs.
"
UPAR: A Kantian-Inspired Prompting Framework for Enhancing Large  Language Model Capabilities,Hejia Geng,http://arxiv.org/pdf/2310.01441v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.01441v1.pdf,"  Large Language Models (LLMs) have demonstrated impressive inferential
capabilities, with numerous research endeavors devoted to enhancing this
capacity through prompting. Despite these efforts, a unified epistemological
foundation is still conspicuously absent. Drawing inspiration from Kant's a
priori philosophy, we propose the UPAR prompting framework, designed to emulate
the structure of human cognition within LLMs. The UPAR framework is delineated
into four phases: ""Understand"", ""Plan"", ""Act"", and ""Reflect"", enabling the
extraction of structured information from complex contexts, prior planning of
solutions, execution according to plan, and self-reflection. This structure
significantly augments the explainability and accuracy of LLM inference,
producing a human-understandable and inspectable inferential trajectory.
Furthermore, our work offers an epistemological foundation for existing
prompting techniques, allowing for a possible systematic integration of these
methods. With GPT-4, our approach elevates the accuracy from the CoT baseline of
22.92% to 58.33% in a challenging subset of GSM8K, and from 67.91% to 75.40% in
the causal judgment task.
"
Take a Step Back: Evoking Reasoning via Abstraction in Large Language  Models,Huaixiu Steven Zheng,http://arxiv.org/pdf/2310.06117v1.pdf,2023-10-09,"['cs.lg', 'cs.ai', 'cs.cl']",2310.06117v1.pdf,"  We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
instances containing specific details. Using the concepts and principles to
guide the reasoning steps, LLMs significantly improve their abilities in
following a correct reasoning path towards the solution. We conduct experiments
of Step-Back Prompting with PaLM-2L models and observe substantial performance
gains on a wide range of challenging reasoning-intensive tasks including STEM,
Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting
improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%,
TimeQA by 27%, and MuSiQue by 7%.
"
POSQA: Probe the World Models of LLMs with Size Comparisons,Chang Shu,http://arxiv.org/pdf/2310.13394v1.pdf,2023-10-20,"['cs.cl', 'cs.ai', 'cs.cy']",2310.13394v1.pdf,"  Embodied language comprehension emphasizes that language understanding is not
solely a matter of mental processing in the brain but also involves
interactions with the physical and social environment. With the explosive
growth of Large Language Models (LLMs) and their already ubiquitous presence in
our daily lives, it is becoming increasingly necessary to verify their
real-world understanding. Inspired by cognitive theories, we propose POSQA: a
Physical Object Size Question Answering dataset with simple size comparison
questions to examine the extremity and analyze the potential mechanisms of the
embodied comprehension of the latest LLMs.
  We show that even the largest LLMs today perform poorly under the zero-shot
setting. We then push their limits with advanced prompting techniques and
external knowledge augmentation. Furthermore, we investigate whether their
real-world comprehension primarily derives from contextual information or
internal weights and analyse the impact of prompt formats and report bias of
different objects. Our results show that real-world understanding that LLMs
shaped from textual data can be vulnerable to deception and confusion by the
surface form of prompts, which makes it less aligned with human behaviours.
"
MuSR: Testing the Limits of Chain-of-thought with Multistep Soft  Reasoning,Zayne Sprague,http://arxiv.org/pdf/2310.16049v1.pdf,2023-10-24,['cs.cl'],2310.16049v1.pdf,"  While large language models (LLMs) equipped with techniques like
chain-of-thought prompting have demonstrated impressive capabilities, they
still fall short in their ability to reason robustly in complex settings.
However, evaluating LLM reasoning is challenging because system capabilities
continue to grow while benchmark datasets for tasks like logical deduction have
remained static. We introduce MuSR, a dataset for evaluating language models on
multistep soft reasoning tasks specified in a natural language narrative. This
dataset has two crucial features. First, it is created through a novel
neurosymbolic synthetic-to-natural generation algorithm, enabling the
construction of complex reasoning instances that challenge GPT-4 (e.g., murder
mysteries roughly 1000 words in length) and which can be scaled further as more
capable LLMs are released. Second, our dataset instances are free text
narratives corresponding to real-world domains of reasoning; this makes it
simultaneously much more challenging than other synthetically-crafted
benchmarks while remaining realistic and tractable for human annotators to
solve with high accuracy. We evaluate a range of LLMs and prompting techniques
on this dataset and characterize the gaps that remain for techniques like
chain-of-thought to perform robust reasoning.
"
"Supercharging academic writing with generative AI: framework,  techniques, and caveats",Zhicheng Lin,http://arxiv.org/pdf/2310.17143v1.pdf,2023-10-26,"['cs.cy', 'cs.cl']",2310.17143v1.pdf,"  Academic writing is an indispensable yet laborious part of the research
enterprise. This Perspective maps out principles and methods for using
generative artificial intelligence (AI), specifically large language models
(LLMs), to elevate the quality and efficiency of academic writing. We introduce
a human-AI collaborative framework that delineates the rationale (why), process
(how), and nature (what) of AI engagement in writing. The framework pinpoints
both short-term and long-term reasons for engagement and their underlying
mechanisms (e.g., cognitive offloading and imaginative stimulation). It reveals
the role of AI throughout the writing process, conceptualized through a
two-stage model for human-AI collaborative writing, and the nature of AI
assistance in writing, represented through a model of writing-assistance types
and levels. Building on this framework, we describe effective prompting
techniques for incorporating AI into the writing routine (outlining, drafting,
and editing) as well as strategies for maintaining rigorous scholarship,
adhering to varied journal policies, and avoiding overreliance on AI.
Ultimately, the prudent integration of AI into academic writing can ease the
communication burden, empower authors, accelerate discovery, and promote
diversity in science.
"
Little Giants: Exploring the Potential of Small LLMs as Evaluation  Metrics in Summarization in the Eval4NLP 2023 Shared Task,Neema Kotonya,http://arxiv.org/pdf/2311.00686v1.pdf,2023-11-01,['cs.cl'],2311.00686v1.pdf,"  This paper describes and analyzes our participation in the 2023 Eval4NLP
shared task, which focuses on assessing the effectiveness of prompt-based
techniques to empower Large Language Models to handle the task of quality
estimation, particularly in the context of evaluating machine translations and
summaries. We conducted systematic experiments with various prompting
techniques, including standard prompting, prompts informed by annotator
instructions, and innovative chain-of-thought prompting. In addition, we
integrated these approaches with zero-shot and one-shot learning methods to
maximize the efficacy of our evaluation procedures. Our work reveals that
combining these approaches using a ""small"", open source model (orca_mini_v3_7B)
yields competitive results.
"
Can Large Language Models Design Accurate Label Functions?,Naiqing Guan,http://arxiv.org/pdf/2311.00739v1.pdf,2023-11-01,"['cs.cl', 'cs.db', 'cs.lg', 'h.2.8; i.5.4']",2311.00739v1.pdf,"  Programmatic weak supervision methodologies facilitate the expedited labeling
of extensive datasets through the use of label functions (LFs) that encapsulate
heuristic data sources. Nonetheless, the creation of precise LFs necessitates
domain expertise and substantial endeavors. Recent advances in pre-trained
language models (PLMs) have exhibited substantial potential across diverse
tasks. However, the capacity of PLMs to autonomously formulate accurate LFs
remains an underexplored domain. In this research, we address this gap by
introducing DataSculpt, an interactive framework that harnesses PLMs for the
automated generation of LFs. Within DataSculpt, we incorporate an array of
prompting techniques, instance selection strategies, and LF filtration methods
to explore the expansive design landscape. Ultimately, we conduct a thorough
assessment of DataSculpt's performance on 12 real-world datasets, encompassing
a range of tasks. This evaluation unveils both the strengths and limitations of
contemporary PLMs in LF design.
"
Prompting as Probing: Using Language Models for Knowledge Base  Construction,Dimitrios Alivanistos,http://arxiv.org/pdf/2208.11057v3.pdf,2022-08-23,"['cs.cl', 'cs.ai']",2208.11057v3.pdf,"  Language Models (LMs) have proven to be useful in various downstream
applications, such as summarisation, translation, question answering and text
classification. LMs are becoming increasingly important tools in Artificial
Intelligence, because of the vast quantity of information they can store. In
this work, we present ProP (Prompting as Probing), which utilizes GPT-3, a
large Language Model originally proposed by OpenAI in 2020, to perform the task
of Knowledge Base Construction (KBC). ProP implements a multi-step approach
that combines a variety of prompting techniques to achieve this. Our results
show that manual prompt curation is essential, that the LM must be encouraged
to give answer sets of variable lengths, in particular including empty answer
sets, that true/false questions are a useful device to increase precision on
suggestions generated by the LM, that the size of the LM is a crucial factor,
and that a dictionary of entity aliases improves the LM score. Our evaluation
study indicates that these proposed techniques can substantially enhance the
quality of the final predictions: ProP won track 2 of the LM-KBC competition,
outperforming the baseline by 36.4 percentage points. Our implementation is
available on https://github.com/HEmile/iswc-challenge.
"
Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors,Mohammad Reza Taesiri,http://arxiv.org/pdf/2210.02506v1.pdf,2022-10-05,"['cs.cl', 'cs.se']",2210.02506v1.pdf,"  Video game testing requires game-specific knowledge as well as common sense
reasoning about the events in the game. While AI-driven agents can satisfy the
first requirement, it is not yet possible to meet the second requirement
automatically. Therefore, video game testing often still relies on manual
testing, and human testers are required to play the game thoroughly to detect
bugs. As a result, it is challenging to fully automate game testing. In this
study, we explore the possibility of leveraging the zero-shot capabilities of
large language models for video game bug detection. By formulating the bug
detection problem as a question-answering task, we show that large language
models can identify which event is buggy in a sequence of textual descriptions
of events from a game. To this end, we introduce the GameBugDescriptions
benchmark dataset, which consists of 167 buggy gameplay videos and a total of
334 question-answer pairs across 8 games. We extensively evaluate the
performance of six models across the OPT and InstructGPT large language model
families on our benchmark dataset. Our results show promising results for
employing language models to detect video game bugs. With the proper prompting
technique, we could achieve an accuracy of 70.66%, and on some video games, up
to 78.94%. Our code, evaluation data and the benchmark can be found on
https://asgaardlab.github.io/LLMxBugs
"
Boosting Low-Data Instance Segmentation by Unsupervised Pre-training  with Saliency Prompt,Hao Li,http://arxiv.org/pdf/2302.01171v1.pdf,2023-02-02,"['cs.cv', 'cs.ai']",2302.01171v1.pdf,"  Recently, inspired by DETR variants, query-based end-to-end instance
segmentation (QEIS) methods have outperformed CNN-based models on large-scale
datasets. Yet they would lose efficacy when only a small amount of training
data is available since it's hard for the crucial queries/kernels to learn
localization and shape priors. To this end, this work offers a novel
unsupervised pre-training solution for low-data regimes. Inspired by the recent
success of prompting techniques, we introduce a new pre-training method that
boosts QEIS models by giving Saliency Prompt for queries/kernels. Our method
contains three parts: 1) Saliency Masks Proposal is responsible for generating
pseudo masks from unlabeled images based on the saliency mechanism. 2)
Prompt-Kernel Matching transfers pseudo masks into prompts and injects the
corresponding localization and shape priors to the best-matched kernels. 3)
Kernel Supervision is applied to supply supervision at the kernel level for
robust learning. From a practical perspective, our pre-training method helps
QEIS models achieve a similar convergence speed and comparable performance with
CNN-based models in low-data regimes. Experimental results show that our method
significantly boosts several QEIS models on three datasets. Code will be made
available.
"
One-Shot Labeling for Automatic Relevance Estimation,Sean MacAvaney,http://arxiv.org/pdf/2302.11266v2.pdf,2023-02-22,['cs.ir'],2302.11266v2.pdf,"  Dealing with unjudged documents (""holes"") in relevance assessments is a
perennial problem when evaluating search systems with offline experiments.
Holes can reduce the apparent effectiveness of retrieval systems during
evaluation and introduce biases in models trained with incomplete data. In this
work, we explore whether large language models can help us fill such holes to
improve offline evaluations. We examine an extreme, albeit common, evaluation
setting wherein only a single known relevant document per query is available
for evaluation. We then explore various approaches for predicting the relevance
of unjudged documents with respect to a query and the known relevant document,
including nearest neighbor, supervised, and prompting techniques. We find that
although the predictions of these One-Shot Labelers (1SL) frequently disagree
with human assessments, the labels they produce yield a far more reliable
ranking of systems than the single labels do alone. Specifically, the strongest
approaches can consistently reach system ranking correlations of over 0.86 with
the full rankings over a variety of measures. Meanwhile, the approach
substantially increases the reliability of t-tests due to filling holes in
relevance assessments, giving researchers more confidence in results they find
to be significant. Alongside this work, we release an easy-to-use software
package to enable the use of 1SL for evaluation of other ad-hoc collections or
systems.
"
Are Large Language Models Ready for Healthcare? A Comparative Study on  Clinical Language Understanding,Yuqing Wang,http://arxiv.org/pdf/2304.05368v3.pdf,2023-04-09,"['cs.cl', 'cs.ai']",2304.05368v3.pdf,"  Large language models (LLMs) have made significant progress in various
domains, including healthcare. However, the specialized nature of clinical
language understanding tasks presents unique challenges and limitations that
warrant further investigation. In this study, we conduct a comprehensive
evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within
the realm of clinical language understanding tasks. These tasks span a diverse
range, including named entity recognition, relation extraction, natural
language inference, semantic textual similarity, document classification, and
question-answering. We also introduce a novel prompting strategy,
self-questioning prompting (SQP), tailored to enhance LLMs' performance by
eliciting informative questions and answers pertinent to the clinical scenarios
at hand. Our evaluation underscores the significance of task-specific learning
strategies and prompting techniques for improving LLMs' effectiveness in
healthcare-related tasks. Additionally, our in-depth error analysis on the
challenging relation extraction task offers valuable insights into error
distribution and potential avenues for improvement using SQP. Our study sheds
light on the practical implications of employing LLMs in the specialized domain
of healthcare, serving as a foundation for future research and the development
of potential applications in healthcare settings.
"
Multi-Prompt with Depth Partitioned Cross-Modal Learning,Yingjie Tian,http://arxiv.org/pdf/2305.06221v3.pdf,2023-05-10,"['cs.cv', 'cs.ai']",2305.06221v3.pdf,"  In recent years, soft prompt learning methods have been proposed to fine-tune
large-scale vision-language pre-trained models for various downstream tasks.
These methods typically combine learnable textual tokens with class tokens as
input for models with frozen parameters. However, they often employ a single
prompt to describe class contexts, failing to capture categories' diverse
attributes adequately. This study introduces the Partitioned Multi-modal Prompt
(PMPO), a multi-modal prompting technique that extends the soft prompt from a
single learnable prompt to multiple prompts. Our method divides the visual
encoder depths and connects learnable prompts to the separated visual depths,
enabling different prompts to capture the hierarchical contextual depths of
visual representations. Furthermore, to maximize the advantages of multi-prompt
learning, we incorporate prior information from manually designed templates and
learnable multi-prompts, thus improving the generalization capabilities of our
approach. We evaluate the effectiveness of our approach on three challenging
tasks: new class generalization, cross-dataset evaluation, and domain
generalization. For instance, our method achieves a $79.28$ harmonic mean,
averaged over 11 diverse image recognition datasets ($+7.62$ compared to CoOp),
demonstrating significant competitiveness compared to state-of-the-art
prompting methods.
"
ONCE: Boosting Content-based Recommendation with Both Open- and  Closed-source Large Language Models,Qijiong Liu,http://arxiv.org/pdf/2305.06566v4.pdf,2023-05-11,"['cs.ir', 'cs.cl']",2305.06566v4.pdf,"  Personalized content-based recommender systems have become indispensable
tools for users to navigate through the vast amount of content available on
platforms like daily news websites and book recommendation services. However,
existing recommenders face significant challenges in understanding the content
of items. Large language models (LLMs), which possess deep semantic
comprehension and extensive knowledge from pretraining, have proven to be
effective in various natural language processing tasks. In this study, we
explore the potential of leveraging both open- and closed-source LLMs to
enhance content-based recommendation. With open-source LLMs, we utilize their
deep layers as content encoders, enriching the representation of content at the
embedding level. For closed-source LLMs, we employ prompting techniques to
enrich the training data at the token level. Through comprehensive experiments,
we demonstrate the high effectiveness of both types of LLMs and show the
synergistic relationship between them. Notably, we observed a significant
relative improvement of up to 19.32% compared to existing state-of-the-art
recommendation models. These findings highlight the immense potential of both
open- and closed-source LLMs in enhancing content-based recommendation
systems. We will make our code and LLM-generated data available for other
researchers to reproduce our results.
"
OPT-R: Exploring the Role of Explanations in Finetuning and Prompting  for Reasoning Skills of Large Language Models,Badr AlKhamissi,http://arxiv.org/pdf/2305.12001v2.pdf,2023-05-19,['cs.cl'],2305.12001v2.pdf,"  In this paper, we conduct a thorough investigation into the reasoning
capabilities of Large Language Models (LLMs), focusing specifically on the Open
Pretrained Transformers (OPT) models as a representative of such models. Our
study entails finetuning three different sizes of OPT on a carefully curated
reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned
without explanations, and OPT-RE, finetuned with explanations. We then evaluate
all models on 57 out-of-domain tasks drawn from the SUPER-NATURALINSTRUCTIONS
benchmark, covering 26 distinct reasoning skills, utilizing three prompting
techniques. Through a comprehensive grid of 27 configurations and 6,156 test
evaluations, we investigate the dimensions of finetuning, prompting, and scale
to understand the role of explanations on different reasoning skills. Our
findings reveal that having explanations in the few-shot exemplars has no
significant impact on the model's performance when the model is finetuned,
while positively affecting the non-finetuned counterpart. Moreover, we observe
a slight yet consistent increase in classification accuracy as we incorporate
explanations during prompting and finetuning, respectively. Finally, we offer
insights on which skills benefit the most from incorporating explanations
during finetuning and prompting, such as Numerical (+20.4%) and Analogical
(+13.9%) reasoning, as well as skills that exhibit negligible or negative
effects.
"
The Utility of Large Language Models and Generative AI for Education  Research,Andrew Katz,http://arxiv.org/pdf/2305.18125v1.pdf,2023-05-29,['cs.hc'],2305.18125v1.pdf,"  The use of natural language processing (NLP) techniques in engineering
education can provide valuable insights into the underlying processes involved
in generating text. While accessing these insights can be labor-intensive if
done manually, recent advances in NLP and large language models have made it a
realistic option for individuals. This study explores and evaluates a
combination of clustering, summarization, and prompting techniques to analyze
over 1,000 student essays in which students discussed their career interests.
The specific assignment prompted students to define and explain their career
goals as engineers. Using text embedding representations of student responses,
we clustered the responses together to identify thematically similar statements
from students. The clustered responses were then summarized to quickly identify
career interest themes. We also used a set of a priori codes about career
satisfaction and sectors to demonstrate an alternative approach to using these
generative text models to analyze student writing. The results of this study
demonstrate the feasibility and usefulness of NLP techniques in engineering
education research. By automating the initial analysis of student essays,
researchers and educators can more efficiently and accurately identify key
themes and patterns in student writing. The methods presented in this paper
have broader applications for engineering education and research purposes
beyond analyzing student essays. By explaining these methods to the engineering
education community, readers can utilize them in their own contexts.
"
Fine-Grained Visual Prompting,Lingfeng Yang,http://arxiv.org/pdf/2306.04356v1.pdf,2023-06-07,['cs.cv'],2306.04356v1.pdf,"  Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive
zero-shot transfer capabilities in image-level visual perception. However,
these models have shown limited performance in instance-level tasks that demand
precise localization and recognition. Previous works have suggested that
incorporating visual prompts, such as colorful boxes or circles, can improve
the ability of models to recognize objects of interest. Nonetheless, compared
to language prompting, visual prompting designs are rarely explored. Existing
approaches, which employ coarse visual cues such as colorful boxes or circles,
often result in sub-optimal performance due to the inclusion of irrelevant and
noisy pixels. In this paper, we carefully study the visual prompting designs by
exploring more fine-grained markings, such as segmentation masks and their
variations. In addition, we introduce a new zero-shot framework that leverages
pixel-level annotations acquired from a generalist segmentation model for
fine-grained visual prompting. Consequently, our investigation reveals that a
straightforward application of blur outside the target mask, referred to as the
Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting
strategy leverages the precise mask annotations to reduce focus on weakly
related regions while retaining spatial coherence between the target and the
surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates
superior performance in zero-shot comprehension of referring expressions on the
RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an
average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the
RefCOCO+ testA subset. The part detection experiments conducted on the PACO
dataset further validate the preponderance of FGVP over existing visual
prompting techniques. Code and models will be made available.
"
The FormAI Dataset: Generative AI in Software Security Through the Lens  of Formal Verification,Norbert Tihanyi,http://arxiv.org/pdf/2307.02192v2.pdf,2023-07-05,"['cs.db', 'cs.ai']",2307.02192v2.pdf,"  This paper presents the FormAI dataset, a large collection of 112,000
AI-generated compilable and independent C programs with vulnerability
classification. We introduce a dynamic zero-shot prompting technique
constructed to spawn diverse programs utilizing Large Language Models (LLMs).
The dataset is generated by GPT-3.5-turbo and comprises programs with varying
levels of complexity. Some programs handle complicated tasks like network
management, table games, or encryption, while others deal with simpler tasks
like string manipulation. Every program is labeled with the vulnerabilities
found within the source code, indicating the type, line number, and vulnerable
function name. This is accomplished by employing a formal verification method
using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model
checking, abstract interpretation, constraint programming, and satisfiability
modulo theories to reason over safety/security properties in programs. This
approach definitively detects vulnerabilities and offers a formal model known
as a counterexample, thus eliminating the possibility of generating false
positive reports. We have associated the identified vulnerabilities with Common
Weakness Enumeration (CWE) numbers. We make the source code available for the
112,000 programs, accompanied by a separate file containing the
vulnerabilities detected in each program, making the dataset ideal for training
LLMs and machine learning algorithms. Our study unveiled that according to
ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities,
thereby presenting considerable risks to software safety and security.
"
SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering  Dataset for Scientific Graphs,Shengzhi Li,http://arxiv.org/pdf/2308.03349v1.pdf,2023-08-07,"['cs.cl', 'cs.ai', 'cs.cv']",2308.03349v1.pdf,"  In this work, we present SciGraphQA, a synthetic multi-turn question-answer
dataset related to academic graphs. SciGraphQA is 13 times larger than
ChartVQA, the previously largest chart-visual question-answering dataset. It is
also the largest open-sourced chart VQA dataset with non-synthetic charts. To
build our dataset, we selected 290,000 Computer Science or Machine Learning
ArXiv papers published between 2010 and 2020, and then used Palm-2 to generate
295K samples of open-vocabulary multi-turn question-answering dialogues about
the graphs. As context, we provided the text-only Palm-2 with paper title,
abstract, paragraph mentioning the graph, and rich text contextual data from
the graph itself, obtaining dialogues with an average 2.23 question-answer
turns for each graph. We asked GPT-4 to assess the matching quality of our
question-answer turns given the paper's context, obtaining an average rating of
8.7/10 on our 3K test set. We evaluated the 0-shot capability of the most
popular MLLMs such as LLaVA, mPLUG-Owl, BLIP-2, and OpenFlamingo on our
dataset, finding LLaVA-13B being the most performant with a CIDEr score of
0.08. We further enriched the question prompts for LLaVA by including the
serialized data tables extracted from the graphs using the DePlot model,
boosting LLaVA's 0-shot CIDEr to 0.15. To verify the validity of our dataset,
we also fine-tuned LLaVa using our dataset, reaching a substantially higher
CIDEr score of 0.26. We anticipate further accuracy improvement by including
segmentation mask tokens and leveraging larger LLM backbones coupled with
emergent prompting techniques. Our code and data are open-sourced.
"
GOPro: Generate and Optimize Prompts in CLIP using Self-Supervised  Learning,Mainak Singha,http://arxiv.org/pdf/2308.11605v1.pdf,2023-08-22,['cs.cv'],2308.11605v1.pdf,"  Large-scale foundation models, such as CLIP, have demonstrated remarkable
success in visual recognition tasks by embedding images in a semantically rich
space. Self-supervised learning (SSL) has also shown promise in improving
visual recognition by learning invariant features. However, the combination of
CLIP with SSL is found to face challenges due to the multi-task framework that
blends CLIP's contrastive loss and SSL's loss, including difficulties with loss
weighting and inconsistency among different views of images in CLIP's output
space. To overcome these challenges, we propose a prompt learning-based model
called GOPro, which is a unified framework that ensures similarity between
various augmented views of input images in a shared image-text embedding space,
using a pair of learnable image and text projectors atop CLIP, to promote
invariance and generalizability. To automatically learn such prompts, we
leverage the visual content and style primitives extracted from pre-trained
CLIP and adapt them to the target task. In addition to CLIP's cross-domain
contrastive loss, we introduce a visual contrastive loss and a novel prompt
consistency loss, considering the different views of the images. GOPro is
trained end-to-end on all three loss objectives, combining the strengths of
CLIP and SSL in a principled manner. Empirical evaluations demonstrate that
GOPro outperforms the state-of-the-art prompting techniques on three
challenging domain generalization tasks across multiple benchmarks by a
significant margin. Our code is available at
https://github.com/mainaksingha01/GOPro.
"
Spoken Language Intelligence of Large Language Models for Language  Learning,Linkai Peng,http://arxiv.org/pdf/2308.14536v1.pdf,2023-08-28,"['cs.cl', 'cs.ai', 'cs.lg', 'cs.sd', 'eess.as']",2308.14536v1.pdf,"  People have long hoped for a conversational system that can assist in
real-life situations, and recent progress on large language models (LLMs) is
bringing this idea closer to reality. While LLMs are often impressive in
performance, their efficacy in real-world scenarios that demand expert
knowledge remains unclear. LLMs are believed to hold the most potential and
value in education, especially in the development of Artificial intelligence
(AI) based virtual teachers capable of facilitating language learning. Our
focus is centered on evaluating the efficacy of LLMs in the realm of education,
specifically in the areas of spoken language learning which encompass
phonetics, phonology, and second language acquisition. We introduce a new
multiple-choice question dataset to evaluate the effectiveness of LLMs in the
aforementioned scenarios, including understanding and application of spoken
language knowledge. In addition, we investigate the influence of various
prompting techniques such as zero- and few-shot methods (prepending the question
with question-answer exemplars), chain-of-thought (CoT, think step-by-step),
in-domain exemplars, and external tools (Google, Wikipedia). We conducted
large-scale evaluation on popular LLMs (20 distinct models) using these
methods. We achieved significant performance improvements compared to the
zero-shot baseline in the practical questions reasoning (GPT-3.5, 49.1% ->
63.1%; LLaMA2-70B-Chat, 42.2% -> 48.6%). We found that models of different
sizes have good understanding of concepts in phonetics, phonology, and second
language acquisition, but show limitations in reasoning for real-world
problems. Additionally, we also explore preliminary findings on conversational
communication.
"
Are Emergent Abilities in Large Language Models just In-Context  Learning?,Sheng Lu,http://arxiv.org/pdf/2309.01809v1.pdf,2023-09-04,['cs.cl'],2309.01809v1.pdf,"  Large language models have exhibited emergent abilities, demonstrating
exceptional performance across diverse tasks for which they were not explicitly
trained, including those that require complex reasoning abilities. The
emergence of such abilities carries profound implications for the future
direction of research in NLP, especially as the deployment of such models
becomes more prevalent. However, one key challenge is that the evaluation of
these abilities is often confounded by competencies that arise in models
through alternative prompting techniques, such as in-context learning and
instruction following, which also emerge as the models are scaled up. In this
study, we provide the first comprehensive examination of these emergent
abilities while accounting for various potentially biasing factors that can
influence the evaluation of models. We conduct rigorous tests on a set of 18
models, encompassing a parameter range from 60 million to 175 billion
parameters, across a comprehensive set of 22 tasks. Through an extensive series
of over 1,000 experiments, we provide compelling evidence that emergent
abilities can primarily be ascribed to in-context learning. We find no evidence
for the emergence of reasoning abilities, thus providing valuable insights into
the underlying mechanisms driving the observed abilities and thus alleviating
safety concerns regarding their use.
"
Unsupervised Contrast-Consistent Ranking with Language Models,Niklas Stoehr,http://arxiv.org/pdf/2309.06991v1.pdf,2023-09-13,"['cs.lg', 'cs.cl', 'stat.ml']",2309.06991v1.pdf,"  Language models contain ranking-based knowledge and are powerful solvers of
in-context ranking tasks. For instance, they may have parametric knowledge
about the ordering of countries by size or may be able to rank reviews by
sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting
techniques to elicit a language model's ranking knowledge. However, we find
that even with careful calibration and constrained decoding, prompting-based
techniques may not always be self-consistent in the rankings they produce. This
motivates us to explore an alternative approach that is inspired by an
unsupervised probing method called Contrast-Consistent Search (CCS). The idea
is to train a probing model guided by a logical constraint: a model's
representation of a statement and its negation must be mapped to contrastive
true-false poles consistently across multiple statements. We hypothesize that
similar constraints apply to ranking tasks where all items are related via
consistent pairwise or listwise comparisons. To this end, we extend the binary
CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking
methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression
objective. Our results confirm that, for the same language model, CCR probing
outperforms prompting and even performs on a par with prompting much larger
language models.
"
S3-DST: Structured Open-Domain Dialogue Segmentation and State Tracking  in the Era of LLMs,Sarkar Snigdha Sarathi Das,http://arxiv.org/pdf/2309.08827v1.pdf,2023-09-16,"['cs.cl', 'cs.ai']",2309.08827v1.pdf,"  The traditional Dialogue State Tracking (DST) problem aims to track user
preferences and intents in user-agent conversations. While sufficient for
task-oriented dialogue systems supporting narrow domain applications, the
advent of Large Language Model (LLM)-based chat systems has introduced many
real-world intricacies in open-domain dialogues. These intricacies manifest in
the form of increased complexity in contextual interactions, extended dialogue
sessions encompassing a diverse array of topics, and more frequent contextual
shifts. To handle these intricacies arising from evolving LLM-based chat
systems, we propose joint dialogue segmentation and state tracking per segment
in open-domain dialogue systems. Assuming a zero-shot setting appropriate to a
true open-domain dialogue system, we propose S3-DST, a structured prompting
technique that harnesses Pre-Analytical Recollection, a novel grounding
mechanism we designed for improving long context tracking. To demonstrate the
efficacy of our proposed approach in joint segmentation and state tracking, we
evaluate S3-DST on a proprietary anonymized open-domain dialogue dataset, as
well as publicly available DST and segmentation datasets. Across all datasets
and settings, S3-DST consistently outperforms the state-of-the-art,
demonstrating its potency and robustness for the next generation of LLM-based chat
systems.
"
Scalable Multi-Robot Collaboration with Large Language Models:  Centralized or Decentralized Systems?,Yongchao Chen,http://arxiv.org/pdf/2309.15943v1.pdf,2023-09-27,['cs.ro'],2309.15943v1.pdf,"  A flurry of recent work has demonstrated that pre-trained large language
models (LLMs) can be effective task planners for a variety of single-robot
tasks. The planning performance of LLMs is significantly improved via prompting
techniques, such as in-context learning or re-prompting with state feedback,
placing new importance on the token budget for the context window. An
under-explored but natural next direction is to investigate LLMs as multi-robot
task planners. However, long-horizon, heterogeneous multi-robot planning
introduces new challenges of coordination while also pushing up against the
limits of context window length. It is therefore critical to find
token-efficient LLM planning frameworks that are also able to reason about the
complexities of multi-robot coordination. In this work, we compare the task
success rate and token efficiency of four multi-agent communication frameworks
(centralized, decentralized, and two hybrid) as applied to four
coordination-dependent multi-agent 2D task scenarios for increasing numbers of
agents. We find that a hybrid framework achieves better task success rates
across all four tasks and scales better to more agents. We further demonstrate
the hybrid frameworks in 3D simulations where the vision-to-text problem and
dynamical errors are considered. See our project website
https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and
code.
"
Adaptive-Solver Framework for Dynamic Strategy Selection in Large  Language Model Reasoning,Jianpeng Zhou,http://arxiv.org/pdf/2310.01446v1.pdf,2023-10-01,"['cs.cl', 'cs.ai']",2310.01446v1.pdf,"  Large Language Models (LLMs) are showcasing impressive ability in handling
complex reasoning tasks. In real-world situations, problems often span a
spectrum of complexities. Humans inherently adjust their problem-solving
approaches based on task complexity. However, most methodologies that leverage
LLMs tend to adopt a uniform approach: utilizing consistent models, prompting
methods, and degrees of problem decomposition, regardless of the problem
complexity. This inflexibility can bring unnecessary computational overhead
or sub-optimal performance. To address this problem, we introduce an
Adaptive-Solver framework. It strategically modulates solving strategies based
on the difficulties of the problems. Given an initial solution, the framework
functions with two primary modules. The initial evaluation module assesses the
adequacy of the current solution. If improvements are needed, the subsequent
adaptation module comes into play. Within this module, three key adaptation
strategies are employed: (1) Model Adaptation: Switching to a stronger LLM when
a weaker variant is inadequate. (2) Prompting Method Adaptation: Alternating
between different prompting techniques to suit the problem's nuances. (3)
Decomposition Granularity Adaptation: Breaking down a complex problem into more
fine-grained sub-questions to enhance solvability. Through such dynamic
adaptations, our framework not only enhances computational efficiency but also
elevates the overall performance. This dual-benefit ensures both the efficiency
of the system for simpler tasks and the precision required for more complex
questions. Experimental results from complex reasoning tasks reveal that the
prompting method adaptation and decomposition granularity adaptation enhance
performance across all tasks. Furthermore, the model adaptation approach
significantly reduces API costs (up to 50%) while maintaining superior
performance.
"
Revisiting Large Language Models as Zero-shot Relation Extractors,Guozheng Li,http://arxiv.org/pdf/2310.05028v3.pdf,2023-10-08,"['cs.ai', 'cs.cl']",2310.05028v3.pdf,"  Relation extraction (RE) consistently involves a certain degree of labeled or
unlabeled data even if under zero-shot setting. Recent studies have shown that
large language models (LLMs) transfer well to new tasks out-of-the-box simply
given a natural language prompt, which provides the possibility of extracting
relations from text without any data and parameter tuning. This work focuses on
the study of exploring LLMs, such as ChatGPT, as zero-shot relation extractors.
On the one hand, we analyze the drawbacks of existing RE prompts and attempt to
incorporate recent prompt techniques such as chain-of-thought (CoT) to improve
zero-shot RE. We propose the summarize-and-ask (SumAsk) prompting, a
simple prompt recursively using LLMs to transform RE inputs to the effective
question answering (QA) format. On the other hand, we conduct comprehensive
experiments on various benchmarks and settings to investigate the capabilities
of LLMs on zero-shot RE. Specifically, we have the following findings: (i)
SumAsk consistently and significantly improves LLMs' performance on
different model sizes, benchmarks and settings; (ii) Zero-shot prompting with
ChatGPT achieves competitive or superior results compared with zero-shot and
fully supervised methods; (iii) LLMs deliver promising performance in
extracting overlapping relations; (iv) The performance varies greatly regarding
different relations. Different from small language models, LLMs are effective
in handling the challenging none-of-the-above (NoTA) relation.
"
Towards Training-free Open-world Segmentation via Image Prompting  Foundation Models,Lv Tang,http://arxiv.org/pdf/2310.10912v1.pdf,2023-10-17,['cs.cv'],2310.10912v1.pdf,"  The realm of computer vision has witnessed a paradigm shift with the advent
of foundational models, mirroring the transformative influence of large
language models in the domain of natural language processing. This paper delves
into the exploration of open-world segmentation, presenting a novel approach
called Image Prompt Segmentation (IPSeg) that harnesses the power of vision
foundational models. At the heart of IPSeg lies the principle of a
training-free paradigm, which capitalizes on image prompting techniques. IPSeg
utilizes a single image containing a subjective visual concept as a flexible
prompt to query vision foundation models like DINOv2 and Stable Diffusion. Our
approach extracts robust features for the prompt image and input image, then
matches the input representations to the prompt representations via a novel
feature interaction module to generate point prompts highlighting target
objects in the input image. The generated point prompts are further utilized to
guide the Segment Anything Model to segment the target object in the input
image. The proposed method stands out by eliminating the need for exhaustive
training sessions, thereby offering a more efficient and scalable solution.
Experiments on COCO, PASCAL VOC, and other datasets demonstrate IPSeg's
efficacy for flexible open-world segmentation using intuitive image prompts.
This work pioneers tapping foundation models for open-world understanding
through visual concepts conveyed in images.
"
Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning  across Languages,Libo Qin,http://arxiv.org/pdf/2310.14799v1.pdf,2023-10-23,"['cs.cl', 'cs.ai']",2310.14799v1.pdf,"  Chain-of-thought (CoT) is capable of eliciting models to explicitly generate
reasoning paths, thus promoting reasoning accuracy and attracting increasing
attention. Specifically, zero-shot CoT achieves remarkable improvements in a
wide range of reasoning tasks by simply instructing the LLM with the prompt
""Let's think step by step!"". Despite the success of zero-shot CoT, the existing
zero-shot prompting techniques remain limited to a single language, making it
challenging to generalize to other languages and hindering global development.
In this work, we introduce cross-lingual prompting (CLP), aiming to improve
zero-shot CoT reasoning across languages. Specifically, CLP consists of two
main components: (1) cross-lingual alignment prompting and (2) task-specific
solver prompting. The cross-lingual alignment prompting is responsible for
aligning representations across different languages, whereas the task-specific
solver prompting is used to generate the final chain of thoughts and results
for the reasoning task. In addition, we further introduce cross-lingual
self-consistent prompting (CLSP) to ensemble different reasoning paths across
languages. Our experimental evaluations on several benchmarks demonstrate that
CLP and CLSP significantly outperform the existing prompting methods and
achieve state-of-the-art performance. We hope this work will inspire further
breakthroughs in cross-lingual CoT.
"
HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained  Heterogeneous Graph Neural Networks,Yihong Ma,http://arxiv.org/pdf/2310.15318v1.pdf,2023-10-23,"['cs.lg', 'cs.ai']",2310.15318v1.pdf,"  Graphs have emerged as a natural choice to represent and analyze the
intricate patterns and rich information of the Web, enabling applications such
as online page classification and social recommendation. The prevailing
""pre-train, fine-tune"" paradigm has been widely adopted in graph machine
learning tasks, particularly in scenarios with limited labeled nodes. However,
this approach often exhibits a misalignment between the training objectives of
pretext tasks and those of downstream tasks. This gap can result in the
""negative transfer"" problem, wherein the knowledge gained from pre-training
adversely affects performance in the downstream tasks. The surge in
prompt-based learning within Natural Language Processing (NLP) suggests the
potential of adapting a ""pre-train, prompt"" paradigm to graphs as an
alternative. However, existing graph prompting techniques are tailored to
homogeneous graphs, neglecting the inherent heterogeneity of Web graphs. To
bridge this gap, we propose HetGPT, a general post-training prompting framework
to improve the predictive performance of pre-trained heterogeneous graph neural
networks (HGNNs). The key is the design of a novel prompting function that
integrates a virtual class prompt and a heterogeneous feature prompt, with the
aim to reformulate downstream tasks to mirror pretext tasks. Moreover, HetGPT
introduces a multi-view neighborhood aggregation mechanism, capturing the
complex neighborhood structure in heterogeneous graphs. Extensive experiments
on three benchmark datasets demonstrate HetGPT's capability to enhance the
performance of state-of-the-art HGNNs on semi-supervised node classification.
"
Videoprompter: an ensemble of foundational models for zero-shot video  understanding,Adeel Yousaf,http://arxiv.org/pdf/2310.15324v1.pdf,2023-10-23,['cs.cv'],2310.15324v1.pdf,"  Vision-language models (VLMs) classify the query video by calculating a
similarity score between the visual features and text-based class label
representations. Recently, large language models (LLMs) have been used to
enrich the text-based class labels by enhancing the descriptiveness of the
class names. However, these improvements are restricted to the text-based
classifier only, and the query visual features are not considered. In this
paper, we propose a framework which combines pre-trained discriminative VLMs
with pre-trained generative video-to-text and text-to-text models. We introduce
two key modifications to the standard zero-shot setting. First, we propose
language-guided visual feature enhancement and employ a video-to-text model to
convert the query video to its descriptive form. The resulting descriptions
contain vital visual cues of the query video, such as what objects are present
and their spatio-temporal interactions. These descriptive cues provide
additional semantic knowledge to VLMs to enhance their zero-shot performance.
Second, we propose video-specific prompts to LLMs to generate more meaningful
descriptions to enrich class label representations. Specifically, we introduce
prompt techniques to create a Tree Hierarchy of Categories for class names,
offering a higher-level action context for additional visual cues. We
demonstrate the effectiveness of our approach in video understanding across
three different zero-shot settings: 1) video action recognition, 2)
video-to-text and text-to-video retrieval, and 3) time-sensitive video tasks.
Consistent improvements across multiple benchmarks and with various VLMs
demonstrate the effectiveness of our proposed framework. Our code will be made
publicly available.
"
Improving Diversity of Demographic Representation in Large Language  Models via Collective-Critiques and Self-Voting,Preethi Lahoti,http://arxiv.org/pdf/2310.16523v1.pdf,2023-10-25,"['cs.cl', 'cs.ai']",2310.16523v1.pdf,"  A crucial challenge for generative large language models (LLMs) is diversity:
when a user's prompt is under-specified, models may follow implicit assumptions
while generating a response, which may result in homogenization of the
responses, as well as certain demographic groups being under-represented or
even erased from the generated responses. In this paper, we formalize diversity
of representation in generative LLMs. We present evaluation datasets and
propose metrics to measure diversity in generated responses along people and
culture axes. We find that LLMs understand the notion of diversity, and that
they can reason and critique their own responses for that goal. This finding
motivated a new prompting technique called collective-critique and self-voting
(CCSV) to self-improve the people diversity of LLMs by tapping into their diversity
reasoning capabilities, without relying on handcrafted examples or prompt
tuning. Extensive empirical experiments with both human and automated
evaluations show that our proposed approach is effective at improving people
and culture diversity, and outperforms all baseline methods by a large margin.
"
LLM4DyG: Can Large Language Models Solve Problems on Dynamic Graphs?,Zeyang Zhang,http://arxiv.org/pdf/2310.17110v1.pdf,2023-10-26,['cs.lg'],2310.17110v1.pdf,"  In an era marked by the increasing adoption of Large Language Models (LLMs)
for various tasks, there is a growing focus on exploring LLMs' capabilities in
handling web data, particularly graph data. Dynamic graphs, which capture
temporal network evolution patterns, are ubiquitous in real-world web data.
Evaluating LLMs' competence in understanding spatial-temporal information on
dynamic graphs is essential for their adoption in web applications, which
remains unexplored in the literature. In this paper, we bridge the gap via
proposing to evaluate LLMs' spatial-temporal understanding abilities on dynamic
graphs, to the best of our knowledge, for the first time. Specifically, we
propose the LLM4DyG benchmark, which includes nine specially designed tasks
considering the capability evaluation of LLMs from both temporal and spatial
dimensions. Then, we conduct extensive experiments to analyze the impacts of
different data generators, data statistics, prompting techniques, and LLMs on
the model performance. Finally, we propose Disentangled Spatial-Temporal
Thoughts (DST2) for LLMs on dynamic graphs to enhance LLMs' spatial-temporal
understanding abilities. Our main observations are: 1) LLMs have preliminary
spatial-temporal understanding abilities on dynamic graphs, 2) Dynamic graph
tasks show increasing difficulties for LLMs as the graph size and density
increase, while not sensitive to the time span and data generation mechanism,
3) the proposed DST2 prompting method can help to improve LLMs'
spatial-temporal understanding abilities on dynamic graphs for most tasks. The
data and codes will be open-sourced at publication time.
"
Which is better? Exploring Prompting Strategy For LLM-based Metrics,Joonghoon Kim,http://arxiv.org/pdf/2311.03754v1.pdf,2023-11-07,['cs.cl'],2311.03754v1.pdf,"  This paper describes the DSBA submissions to the Prompting Large Language
Models as Explainable Metrics shared task, where systems were submitted to two
tracks: small and large summarization tracks. With advanced Large Language
Models (LLMs) such as GPT-4, evaluating the quality of Natural Language
Generation (NLG) has become increasingly paramount. Traditional
similarity-based metrics such as BLEU and ROUGE have shown to misalign with
human evaluation and are ill-suited for open-ended generation tasks. To address
this issue, we explore the potential capability of LLM-based metrics,
especially leveraging open-source LLMs. In this study, a wide range of prompts
and prompting techniques are systematically analyzed with three approaches:
prompting strategy, score aggregation, and explainability. Our research focuses
on formulating effective prompt templates, determining the granularity of NLG
quality scores and assessing the impact of in-context examples on LLM-based
evaluation. Furthermore, three aggregation strategies are compared to identify
the most reliable method for aggregating NLG quality scores. To examine
explainability, we devise a strategy that generates rationales for the scores
and analyzes the characteristics of the explanation produced by the open-source
LLMs. Extensive experiments provide insights regarding evaluation capabilities
of open-source LLMs and suggest effective prompting strategies.
"
Understanding and Improving Visual Prompting: A Label-Mapping  Perspective,Aochuan Chen,http://arxiv.org/pdf/2211.11635v5.pdf,2022-11-21,['cs.cv'],2211.11635v5.pdf,"  We revisit and advance visual prompting (VP), an input prompting technique
for vision tasks. VP can reprogram a fixed, pre-trained source model to
accomplish downstream tasks in the target domain by simply incorporating
universal prompts (in terms of input perturbation patterns) into downstream
data points. Yet, it remains elusive why VP stays effective even given a
ruleless label mapping (LM) between the source classes and the target classes.
Inspired by the above, we ask: How is LM interrelated with VP? And how to
exploit such a relationship to improve its accuracy on target tasks? We peer
into the influence of LM on VP and provide an affirmative answer that a better
'quality' of LM (assessed by mapping precision and explanation) can
consistently improve the effectiveness of VP. This is in contrast to the prior
art where the factor of LM was missing. To optimize LM, we propose a new VP
framework, termed ILM-VP (iterative label mapping-based visual prompting),
which automatically re-maps the source labels to the target labels and
progressively improves the target task accuracy of VP. Further, when using a
contrastive language-image pretrained (CLIP) model, we propose to integrate an
LM process to assist the text prompt selection of CLIP and to improve the
target task accuracy. Extensive experiments demonstrate that our proposal
significantly outperforms state-of-the-art VP methods. As highlighted below, we
show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target
tasks, our method outperforms baselines by a substantial margin, e.g., 7.9% and
6.7% accuracy improvements in transfer learning to the target Flowers102 and
CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and
7.1% accuracy improvements on Flowers102 and DTD respectively. Our code is
available at https://github.com/OPTML-Group/ILM-VP.
"
The Power of Large Language Models for Wireless Communication System  Development: A Case Study on FPGA Platforms,Yuyang Du,http://arxiv.org/pdf/2307.07319v4.pdf,2023-07-14,['eess.sp'],2307.07319v4.pdf,"  Large language models (LLMs) have garnered significant attention across
various research disciplines, including the wireless communication community.
There have been several heated discussions on the intersection of LLMs and
wireless technologies. While recent studies have demonstrated the ability of
LLMs to generate hardware description language (HDL) code for simple
computation tasks, developing wireless prototypes and products via HDL poses
far greater challenges because of the more complex computation tasks involved.
In this paper, we aim to address this challenge by investigating the role of
LLMs in FPGA-based hardware development for advanced wireless signal
processing. We begin by exploring LLM-assisted code refactoring, reuse, and
validation, using an open-source software-defined radio (SDR) project as a case
study. Through the case study, we find that an LLM assistant can potentially
yield substantial productivity gains for researchers and developers. We then
examine the feasibility of using LLMs to generate HDL code for advanced
wireless signal processing, using the Fast Fourier Transform (FFT) algorithm as
an example. This task presents two unique challenges: the scheduling of
subtasks within the overall task and the multi-step thinking required to solve
certain arithmetic problems within the task. To address these challenges, we
employ in-context learning (ICL) and Chain-of-Thought (CoT) prompting
techniques, culminating in the successful generation of a 64-point Verilog FFT
module. Our results demonstrate the potential of LLMs for generalization and
imitation, affirming their usefulness in writing HDL code for wireless
communication systems. Overall, this work contributes to understanding the role
of LLMs in wireless communication and motivates further exploration of their
capabilities.
"
Foundation Metrics: Quantifying Effectiveness of Healthcare  Conversations powered by Generative AI,Mahyar Abbasian,http://arxiv.org/pdf/2309.12444v2.pdf,2023-09-21,['cs.cl'],2309.12444v2.pdf,"  Generative Artificial Intelligence is set to revolutionize healthcare
delivery by transforming traditional patient care into a more personalized,
efficient, and proactive process. Chatbots, serving as interactive
conversational models, will probably drive this patient-centered transformation
in healthcare. Through the provision of various services, including diagnosis,
personalized lifestyle recommendations, and mental health support, the
objective is to substantially augment patient health outcomes, all the while
mitigating the workload burden on healthcare providers. The life-critical
nature of healthcare applications necessitates establishing a unified and
comprehensive set of evaluation metrics for conversational models. Existing
evaluation metrics proposed for various generic large language models (LLMs)
demonstrate a lack of comprehension regarding medical and health concepts and
their significance in promoting patients' well-being. Moreover, these metrics
neglect pivotal user-centered aspects, including trust-building, ethics,
personalization, empathy, user comprehension, and emotional support. The
purpose of this paper is to explore state-of-the-art LLM-based evaluation
metrics that are specifically applicable to the assessment of interactive
conversational models in healthcare. Subsequently, we present a comprehensive
set of evaluation metrics designed to thoroughly assess the performance of
healthcare chatbots from an end-user perspective. These metrics encompass an
evaluation of language processing abilities, impact on real-world clinical
tasks, and effectiveness in user-interactive conversations. Finally, we engage
in a discussion concerning the challenges associated with defining and
implementing these metrics, with particular emphasis on confounding factors
such as the target audience, evaluation methods, and prompt techniques involved
in the evaluation process.
"
Fill in the Blank: Exploring and Enhancing LLM Capabilities for Backward  Reasoning in Math Word Problems,Aniruddha Deb,http://arxiv.org/pdf/2310.01991v1.pdf,2023-10-03,"['cs.cl', 'cs.ai', 'cs.lg', 'i.2.3']",2310.01991v1.pdf,"  While forward reasoning (i.e. find the answer given the question) has been
explored extensively in the recent literature, backward reasoning is relatively
unexplored. We examine the backward reasoning capabilities of LLMs on Math Word
Problems (MWPs): given a mathematical question and its answer, with some
details omitted from the question, can LLMs effectively retrieve the missing
information?
  In this paper, we formally define the backward reasoning task on math word
problems and modify three datasets to evaluate this task: GSM8k, SVAMP and
MultiArith. Our findings show a significant drop in the accuracy of models on
backward reasoning compared to forward reasoning across four SOTA LLMs (GPT4,
GPT3.5, PaLM-2, and LLaMa-2). Utilizing the specific format of this task, we
propose three novel techniques that improve performance: Rephrase reformulates
the given problem into a forward reasoning problem, PAL-Tools combines the idea
of Program-Aided LLMs to produce a set of equations that can be solved by an
external solver, and Check your Work exploits the availability of natural
verifier of high accuracy in the forward direction, interleaving solving and
verification steps. Finally, realizing that each of our base methods correctly
solves a different set of problems, we propose a novel Bayesian formulation for
creating an ensemble over these base methods aided by a verifier to further
boost the accuracy by a significant margin. Extensive experimentation
demonstrates that our techniques successively improve the performance of LLMs
on the backward reasoning task, with the final ensemble-based method resulting
in a substantial performance gain compared to the raw LLMs with standard
prompting techniques such as chain-of-thought.
"
Autonomous Tree-search Ability of Large Language Models,Zheyu Zhang,http://arxiv.org/pdf/2310.10686v1.pdf,2023-10-14,"['cs.cl', 'cs.ai']",2310.10686v1.pdf,"  Large Language Models have excelled in remarkable reasoning capabilities with
advanced prompting techniques, but they fall short on tasks that require
exploration, strategic foresight, and sequential decision-making. Recent works
propose to utilize external programs to define search logic, such that LLMs can
perform passive tree search to solve more challenging reasoning tasks. Though
impressive results have been achieved, there are several fundamental
limitations of these approaches. First, passive tree searches are not efficient
as they usually require multiple rounds of LLM API calls to solve one single
problem. Moreover, passive search methods are not flexible since they need
task-specific program designs. Then a natural question arises: can we maintain
the tree-search capability of LLMs without the aid of external programs, and
can still generate responses that clearly demonstrate the process of a
tree-structure search? To this end, we propose a new concept called autonomous
tree-search ability of LLM, which can automatically generate a response
containing search trajectories for the correct answer. Concretely, we generate
search trajectories with a capable LLM API via a fixed system prompt, allowing
them to perform autonomous tree-search (ATS) right out of the box. Experiments
on 4 puzzle games demonstrate our method can achieve huge improvements. The
ATS-BFS method outperforms the Chain of Thought approach by achieving an
average accuracy improvement of 33%. Compared to Tree of Thoughts, it requires
65.6% or 47.7% less GPT API cost to attain a comparable level of accuracy.
Moreover, we have collected data using the ATS prompt method and fine-tuned
LLaMA. This approach yields a greater improvement compared to the ones
fine-tuned on CoT data. Specifically, it outperforms CoT-tuned LLaMAs by an
average of 40.6% and 38.5% for LLaMA2-7B and LLaMA2-13B, respectively.
"
In-Context Impersonation Reveals Large Language Models' Strengths and  Biases,Leonard Salewski,http://arxiv.org/pdf/2305.14930v1.pdf,2023-05-24,"['cs.ai', 'cs.cl', 'cs.lg']",2305.14930v1.pdf,"  In everyday conversations, humans can take on different roles and adapt their
vocabulary to their chosen roles. We explore whether LLMs can take on, that is
impersonate, different roles when they generate text in-context. We ask LLMs to
assume different personas before solving vision and language tasks. We do this
by prefixing the prompt with a persona that is associated either with a social
identity or domain expertise. In a multi-armed bandit task, we find that LLMs
pretending to be children of different ages recover human-like developmental
stages of exploration. In a language-based reasoning task, we find that LLMs
impersonating domain experts perform better than LLMs impersonating non-domain
experts. Finally, we test whether LLMs' impersonations are complementary to
visual information when describing different categories. We find that
impersonation can improve performance: an LLM prompted to be a bird expert
describes birds better than one prompted to be a car expert. However,
impersonation can also uncover LLMs' biases: an LLM prompted to be a man
describes cars better than one prompted to be a woman. These findings
demonstrate that LLMs are capable of taking on diverse roles and that this
in-context impersonation can be used to uncover their hidden strengths and
biases.
"
ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts,Bilel Benjdira,http://arxiv.org/pdf/2308.11236v2.pdf,2023-08-22,"['cs.ro', 'cs.ai']",2308.11236v2.pdf,"  In this paper, we argue that the next generation of robots can be commanded
using only Language Models' prompts. Every prompt interrogates separately a
specific Robotic Modality via its Modality Language Model (MLM). A central Task
Modality mediates the whole communication to execute the robotic mission via a
Large Language Model (LLM). This paper gives this new robotic design pattern
the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies
this PRM design pattern in building a new robotic framework named
ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only
two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural
language, the visual semantic features related to the task under consideration
(Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic
reaction to the visual description (Task Modality). The framework automates all
the mechanisms behind these two prompts. The framework enables the robot to
address complex real-world scenarios by processing visual data, making informed
decisions, and carrying out actions automatically. The framework comprises one
generic vision module and two independent ROS nodes. As a test application, we
used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction
on the roads and makes real-time vocal notifications to the driver. We showed
how ROSGPT_Vision significantly reduced the development cost compared to
traditional methods. We demonstrated how to improve the quality of the
application by optimizing the prompting strategies, without delving into
technical details. ROSGPT_Vision is shared with the community (link:
https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this
direction and to build more robotic frameworks that implement the PRM design
pattern and enable controlling robots using only prompts.
"
ProgPrompt: Generating Situated Robot Task Plans using Large Language  Models,Ishika Singh,http://arxiv.org/pdf/2209.11302v1.pdf,2022-09-22,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.lg']",2209.11302v1.pdf,"  Task planning can require defining myriad domain knowledge about the world in
which a robot needs to act. To ameliorate that effort, large language models
(LLMs) can be used to score potential next actions during task planning, and
even generate action sequences directly, given an instruction in natural
language with no additional domain information. However, such methods either
require enumerating all possible next steps for scoring, or generate free-form
text that may contain actions not possible on a given robot in its current
context. We present a programmatic LLM prompt structure that enables plan
generation functional across situated environments, robot capabilities, and
tasks. Our key insight is to prompt the LLM with program-like specifications of
the available actions and objects in an environment, as well as with example
programs that can be executed. We make concrete recommendations about prompt
structure and generation constraints through ablation experiments, demonstrate
state of the art success rates in VirtualHome household tasks, and deploy our
method on a physical robot arm for tabletop tasks. Website at
progprompt.github.io
"
Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented  Large Language Models,Renat Aksitov,http://arxiv.org/pdf/2302.05578v2.pdf,2023-02-11,"['cs.cl', 'cs.ai']",2302.05578v2.pdf,"  Despite recent progress, it has been difficult to prevent semantic
hallucinations in generative Large Language Models. One common solution to this
is augmenting LLMs with a retrieval system and making sure that the generated
output is attributable to the retrieved information. Given this new added
constraint, it is plausible to expect that the overall quality of the output
will be affected, for example, in terms of fluency. Can scaling language models
help?
  Here we examine the relationship between fluency and attribution in LLMs
prompted with retrieved evidence in knowledge-heavy dialog settings. Our
experiments were implemented with a set of auto-metrics that are aligned with
human preferences. They were used to evaluate a large set of generations,
produced under varying parameters of LLMs and supplied context.
  We show that larger models tend to do much better in both fluency and
attribution, and that (naively) using top-k retrieval versus top-1 retrieval
improves attribution but hurts fluency. We next propose a recipe that could
allow smaller models to both close the gap with larger models and preserve the
benefits of top-k retrieval while avoiding its drawbacks.
"
Dictionary-based Phrase-level Prompting of Large Language Models for  Machine Translation,Marjan Ghazvininejad,http://arxiv.org/pdf/2302.07856v1.pdf,2023-02-15,"['cs.cl', 'cs.lg']",2302.07856v1.pdf,"  Large language models (LLMs) demonstrate remarkable machine translation (MT)
abilities via prompting, even though they were not explicitly trained for this
task. However, even given the incredible quantities of data they are trained
on, LLMs can struggle to translate inputs with rare words, which are common in
low resource or domain transfer scenarios. We show that LLM prompting can
provide an effective solution for rare words as well, by using prior knowledge
from bilingual dictionaries to provide control hints in the prompts. We propose
a novel method, DiPMT, that provides a set of possible translations for a
subset of the input words, thereby enabling fine-grained phrase-level prompted
control of the LLM. Extensive experiments show that DiPMT outperforms the
baseline both in low-resource MT, as well as for out-of-domain MT. We further
provide a qualitative analysis of the benefits and limitations of this
approach, including the overall level of controllability that is achieved.
"
UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and  Distillation of Rerankers,Jon Saad-Falcon,http://arxiv.org/pdf/2303.00807v3.pdf,2023-03-01,"['cs.ir', 'cs.cl']",2303.00807v3.pdf,"  Many information retrieval tasks require large labeled datasets for
fine-tuning. However, such datasets are often unavailable, and their utility
for real-world applications can diminish quickly due to domain shifts. To
address this challenge, we develop and motivate a method for using large
language models (LLMs) to generate large numbers of synthetic queries cheaply.
The method begins by generating a small number of synthetic queries using an
expensive LLM. After that, a much less expensive one is used to create large
numbers of synthetic queries, which are used to fine-tune a family of reranker
models. These rerankers are then distilled into a single efficient retriever
for use in the target domain. We show that this technique boosts zero-shot
accuracy in long-tail domains and achieves substantially lower latency than
standard reranking methods.
"
LMCanvas: Object-Oriented Interaction to Personalize Large Language  Model-Powered Writing Environments,Tae Soo Kim,http://arxiv.org/pdf/2303.15125v1.pdf,2023-03-27,"['cs.hc', 'cs.cl']",2303.15125v1.pdf,"  Large language models (LLMs) can enhance writing by automating or supporting
specific tasks in writers' workflows (e.g., paraphrasing, creating analogies).
Leveraging this capability, a collection of interfaces have been developed that
provide LLM-powered tools for specific writing tasks. However, these interfaces
provide limited support for writers to create personal tools for their own
unique tasks, and may not comprehensively fulfill a writer's needs -- requiring
them to continuously switch between interfaces during writing. In this work, we
envision LMCanvas, an interface that enables writers to create their own
LLM-powered writing tools and arrange their personal writing environment by
interacting with ""blocks"" in a canvas. In this interface, users can create text
blocks to encapsulate writing and LLM prompts, model blocks for model parameter
configurations, and connect these to create pipeline blocks that output
generations. In this workshop paper, we discuss the design for LMCanvas and our
plans to develop this concept.
"
SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting,Xiaoying Zhang,http://arxiv.org/pdf/2305.09067v1.pdf,2023-05-15,['cs.cl'],2305.09067v1.pdf,"  Building end-to-end task bots and maintaining their integration with new
functionalities using minimal human efforts is a long-standing challenge in
dialog research. Recently large language models (LLMs) have demonstrated
exceptional proficiency in conversational engagement and adherence to
instructions across various downstream tasks. In this work, we introduce
SGP-TOD, Schema-Guided Prompting for building Task-Oriented Dialog systems
effortlessly based on LLMs. Utilizing the symbolic knowledge -- task schema, we
instruct fixed LLMs to generate appropriate responses on novel tasks,
circumventing the need for training data. Specifically, SGP-TOD comprises three
components: an LLM for engaging with users, a DST Prompter to aid the LLM with
dialog state tracking, which is then used to retrieve database items, and a
Policy Prompter to elicit proper responses adhering to the provided dialog
policy. Experimental results on Multiwoz, RADDLE and STAR datasets show that
our training-free strategy SGP-TOD, without any task-specific data, yields
state-of-the-art (SOTA) zero-shot performance, greatly surpasses the few-shot
approaches. In a domain-extension setting, SGP-TOD aptly adapts to new
functionalities by merely adding supplementary schema rules. We make our code
and data publicly available.
"
TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks,Shubhra Kanti Karmaker Santu,http://arxiv.org/pdf/2305.11430v2.pdf,2023-05-19,"['cs.ai', 'cs.cl', 'cs.ir', 'cs.lg', 'i.2.7']",2305.11430v2.pdf,"  While LLMs have shown great success in understanding and generating text in
traditional conversational settings, their potential for performing ill-defined
complex tasks is largely under-studied. Indeed, we are yet to conduct
comprehensive benchmarking studies with multiple LLMs that are exclusively
focused on a complex task. However, conducting such benchmarking studies is
challenging because of the large variations in LLMs' performance when different
prompt types/styles are used and different degrees of detail are provided in
the prompts. To address this issue, the paper proposes a general taxonomy that
can be used to design prompts with specific properties in order to perform a
wide range of complex tasks. This taxonomy will allow future benchmarking
studies to report the specific categories of prompts used as part of the study,
enabling meaningful comparisons across different studies. Also, by establishing
a common standard through this taxonomy, researchers will be able to draw more
accurate conclusions about LLMs' performance on a specific complex task.
"
S$^3$HQA: A Three-Stage Approach for Multi-hop Text-Table Hybrid  Question Answering,Fangyu Lei,http://arxiv.org/pdf/2305.11725v1.pdf,2023-05-19,['cs.cl'],2305.11725v1.pdf,"  Answering multi-hop questions over hybrid factual knowledge from the given
text and table (TextTableQA) is a challenging task. Existing models mainly
adopt a retriever-reader framework, which have several deficiencies, such as
noisy labeling in training retriever, insufficient utilization of heterogeneous
information over text and table, and deficient ability for different reasoning
operations. In this paper, we propose a three-stage TextTableQA framework
S3HQA, which comprises a retriever, a selector, and a reasoner. We use a retriever
with refinement training to solve the noisy labeling problem. Then, a hybrid
selector considers the linked relationships between heterogeneous data to
select the most relevant factual knowledge. For the final stage, instead of
adapting a reading comprehension module like in previous methods, we employ a
generation-based reasoner to obtain answers. This includes two approaches: a
row-wise generator and an LLM prompting generator (first time used in this
task). The experimental results demonstrate that our method achieves
competitive results in the few-shot setting. When trained on the full dataset,
our approach outperforms all baseline methods, ranking first on the HybridQA
leaderboard.
"
LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain  Conversations with Large Language Models,Yen-Ting Lin,http://arxiv.org/pdf/2305.13711v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13711v1.pdf,"  We propose LLM-Eval, a unified multi-dimensional automatic evaluation method
for open-domain conversations with large language models (LLMs). Existing
evaluation methods often rely on human annotations, ground-truth responses, or
multiple LLM prompts, which can be expensive and time-consuming. To address
these issues, we design a single prompt-based evaluation method that leverages
a unified evaluation schema to cover multiple dimensions of conversation
quality in a single model call. We extensively evaluate the performance of
LLM-Eval on various benchmark datasets, demonstrating its effectiveness,
efficiency, and adaptability compared to state-of-the-art evaluation methods.
Our analysis also highlights the importance of choosing suitable LLMs and
decoding strategies for accurate evaluation results. LLM-Eval offers a
versatile and robust solution for evaluating open-domain conversation systems,
streamlining the evaluation process and providing consistent performance across
diverse scenarios.
"
AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With  Large Language Models,Siqi Ouyang,http://arxiv.org/pdf/2305.15064v3.pdf,2023-05-24,['cs.cl'],2305.15064v3.pdf,"  Recent large language models (LLMs) are promising for making decisions in
grounded environments. However, LLMs frequently fail in complex decision-making
tasks due to the misalignment between the pre-trained knowledge in LLMs and the
actual rules in the environment. Existing methods require either costly
gradient computation or lengthy in-context demonstrations. In this paper, we
propose AutoPlan, an approach to guide LLM-based agents to accomplish
interactive decision-making tasks. AutoPlan augments the LLM prompt with a
task-solving plan and optimizes it through iterative experience collection and
reflection. Our experiments show that AutoPlan, though using no in-context
demonstrations, achieves success rates on par with the baselines using
human-written demonstrations on ALFWorld and even outperforms them by 8% on
HotpotQA. The code is available at https://github.com/owaski/AutoPlan.
"
ChatGPT for PLC/DCS Control Logic Generation,Heiko Koziolek,http://arxiv.org/pdf/2305.15809v1.pdf,2023-05-25,"['cs.se', 'cs.ai', 'd.2.2']",2305.15809v1.pdf,"  Large language models (LLMs) providing generative AI have become popular to
support software engineers in creating, summarizing, optimizing, and
documenting source code. It is still unknown how LLMs can support control
engineers using typical control programming languages in programming tasks.
Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code
generation but did not yet tackle control logic programming. The contribution
of this paper is an exploratory study, for which we created 100 LLM prompts in
10 representative categories to analyze control logic generation for PLCs
and DCS from natural language. We tested the prompts by generating answers with
ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3
Structured Text code in many cases and demonstrated useful reasoning skills
that could boost control engineer productivity. Our prompt collection is the
basis for a more formal LLM benchmark to test and compare such models for
control logic generation.
"
AdaPlanner: Adaptive Planning from Feedback with Language Models,Haotian Sun,http://arxiv.org/pdf/2305.16653v1.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.lg']",2305.16653v1.pdf,"  Large language models (LLMs) have recently demonstrated the potential in
acting as autonomous agents for sequential decision-making tasks. However, most
existing methods either take actions greedily without planning or rely on
static plans that are not adaptable to environmental feedback. Consequently,
the sequential decision-making performance of LLM agents degenerates as
problem complexity and plan horizon increase. We propose a closed-loop
approach, AdaPlanner, which allows the LLM agent to refine its self-generated
plan adaptively in response to environmental feedback. In AdaPlanner, the LLM
agent adaptively refines its plan from feedback with both in-plan and
out-of-plan refinement strategies. To mitigate hallucination, we develop a
code-style LLM prompt structure that facilitates plan generation across a
variety of tasks, environments, and agent capabilities. Furthermore, we propose
a skill discovery mechanism that leverages successful plans as few-shot
exemplars, enabling the agent to plan and refine with fewer task
demonstrations. Our experiments in the ALFWorld and MiniWoB++ environments
demonstrate that AdaPlanner outperforms state-of-the-art baselines by 3.73% and
4.11% while utilizing 2x and 600x fewer samples, respectively.
"
Robot Task Planning Based on Large Language Model Representing Knowledge  with Directed Graph Structures,Yue Zhen,http://arxiv.org/pdf/2306.05171v1.pdf,2023-06-08,"['cs.ro', 'cs.ai']",2306.05171v1.pdf,"  Traditional robot task planning methods face challenges when dealing with
highly unstructured environments and complex tasks. We propose a task planning
method that combines human expertise with an LLM, and we design an LLM
prompt template, Think_Net_Prompt, with stronger expressive power to represent
structured professional knowledge. We further propose a method to progressively
decompose tasks and generate a task tree to reduce the planning volume for each
task, and we have designed a strategy to decouple robot task planning. By
dividing different planning entities and separating the task from the actual
machine binding process, the task planning process becomes more flexible.
Research results show that our method performs well in handling specified code
formats, understanding the relationship between tasks and subtasks, and
extracting parameters from text descriptions. However, limitations remain, such
as the limited complexity of task logic that can be handled and ambiguity about
the quantity of parts and the precise assembly locations. Improving the precision
of task descriptions and of the cognitive structure can bring further improvements.
https://github.com/NOMIzy/Think_Net_Prompt
"
SayTap: Language to Quadrupedal Locomotion,Yujin Tang,http://arxiv.org/pdf/2306.07580v3.pdf,2023-06-13,['cs.ro'],2306.07580v3.pdf,"  Large language models (LLMs) have demonstrated the potential to perform
high-level planning. Yet, it remains a challenge for LLMs to comprehend
low-level commands, such as joint angle targets or motor torques. This paper
proposes an approach to use foot contact patterns as an interface that bridges
human commands in natural language and a locomotion controller that outputs
these low-level commands. This results in an interactive system for quadrupedal
robots that allows the users to craft diverse locomotion behaviors flexibly. We
contribute an LLM prompt design, a reward function, and a method to expose the
controller to the feasible distribution of contact patterns. The result is a
controller capable of achieving diverse locomotion patterns that can be
transferred to real robot hardware. Compared with other design choices, the
proposed approach achieves a success rate of more than 50% in predicting the correct
contact patterns and can solve 10 more tasks out of a total of 30 tasks. Our
project site is: https://saytap.github.io.
"
Large Language Models Enable Few-Shot Clustering,Vijay Viswanathan,http://arxiv.org/pdf/2307.00524v1.pdf,2023-07-02,['cs.cl'],2307.00524v1.pdf,"  Unlike traditional unsupervised clustering, semi-supervised clustering allows
users to provide meaningful structure to the data, which helps the clustering
algorithm to match the user's intent. Existing approaches to semi-supervised
clustering require a significant amount of feedback from an expert to improve
the clusters. In this paper, we ask whether a large language model can amplify
an expert's guidance to enable query-efficient, few-shot semi-supervised text
clustering. We show that LLMs are surprisingly effective at improving
clustering. We explore three stages where LLMs can be incorporated into
clustering: before clustering (improving input features), during clustering (by
providing constraints to the clusterer), and after clustering (using LLMs for
post-correction). We find incorporating LLMs in the first two stages can
routinely provide significant improvements in cluster quality, and that LLMs
enable a user to make trade-offs between cost and accuracy to produce desired
clusters. We release our code and LLM prompts for the public to use.
"
GEAR: Augmenting Language Models with Generalizable and Efficient Tool  Resolution,Yining Lu,http://arxiv.org/pdf/2307.08775v1.pdf,2023-07-17,['cs.ai'],2307.08775v1.pdf,"  Augmenting large language models (LLM) to use external tools enhances their
performance across a variety of tasks. However, prior works over-rely on
task-specific demonstrations of tool use, which limits their generalizability and
incurs high computational cost due to the many calls made to large-scale LLMs. We introduce
GEAR, a computationally efficient query-tool grounding algorithm that is
generalizable to various tasks that require tool use while not relying on
task-specific demonstrations. GEAR achieves better efficiency by delegating
tool grounding and execution to small language models (SLM) and LLM,
respectively, while leveraging semantic and pattern-based evaluation at both
question and answer levels for generalizable tool grounding. We evaluate GEAR
on 14 datasets across 6 downstream tasks, demonstrating its strong
generalizability to novel tasks, tools and different SLMs. Despite offering
more efficiency, GEAR achieves higher precision in tool grounding compared to
prior strategies using LLM prompting, thus improving downstream accuracy at a
reduced computational cost. For example, we demonstrate that GEAR-augmented
GPT-J and GPT-3 outperform counterpart tool-augmented baselines because of
better tool use.
"
Simple LLM Prompting is State-of-the-Art for Robust and Multilingual  Dialogue Evaluation,John Mendonça,http://arxiv.org/pdf/2308.16797v2.pdf,2023-08-31,['cs.cl'],2308.16797v2.pdf,"  Despite significant research effort in the development of automatic dialogue
evaluation metrics, little thought is given to evaluating dialogues other than
in English. At the same time, ensuring metrics are invariant to semantically
similar responses is also an overlooked topic. In order to achieve the desired
properties of robustness and multilinguality for dialogue evaluation metrics,
we propose a novel framework that takes advantage of the strengths of current
evaluation models with the newly-established paradigm of prompting Large
Language Models (LLMs). Empirical results show our framework achieves
state-of-the-art results in terms of mean Spearman correlation scores across several
benchmarks and ranks first on both the Robust and Multilingual tasks of
the DSTC11 Track 4 ""Automatic Evaluation Metrics for Open-Domain Dialogue
Systems"", proving the evaluation capabilities of prompted LLMs.
"
"MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering  over Text, Tables and Images",Weihao Liu,http://arxiv.org/pdf/2309.04790v1.pdf,2023-09-09,['cs.cl'],2309.04790v1.pdf,"  In the real world, knowledge often exists in a multimodal and heterogeneous
form. Addressing the task of question answering with hybrid data types,
including text, tables, and images (MMHQA), is challenging. Recently,
with the rise of large language models (LLM), in-context learning (ICL) has
become the most popular way to solve QA problems. We propose the MMHQA-ICL
framework to address this problem; it includes a stronger heterogeneous
data retriever and an image captioning module. Most importantly, we propose a
Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage
their powerful performance on this task. We are the first to use an end-to-end
LLM prompting method for this task. Experimental results demonstrate that our
framework outperforms all baselines and methods trained on the full dataset,
achieving state-of-the-art results under the few-shot setting on the
MultimodalQA dataset.
"
Empowering Private Tutoring by Chaining Large Language Models,Yulin Chen,http://arxiv.org/pdf/2309.08112v1.pdf,2023-09-15,['cs.hc'],2309.08112v1.pdf,"  Artificial intelligence has been applied in various aspects of online
education to facilitate teaching and learning. However, few attempts have been
made toward a complete AI-powered tutoring system. In this work, we explore the
development of a full-fledged intelligent tutoring system powered by
state-of-the-art large language models (LLMs), covering automatic course
planning and adjusting, tailored instruction, and flexible quiz evaluation. To
make the system robust to prolonged interaction and cater to individualized
education, the system is decomposed into three inter-connected core
processes-interaction, reflection, and reaction. Each process is implemented by
chaining LLM-powered tools along with dynamically updated memory modules. Tools
are LLMs prompted to execute one specific task at a time, while memories are
data stores that are updated during the education process. Statistical results
from learning logs demonstrate the effectiveness and mechanism of each tool
usage. Subjective feedback from human users reveals the usability of each
function, and comparison with ablation systems further testifies to the benefits of
the designed processes in long-term interaction.
"
Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for  Knowledge Graph Question Answering,Yike Wu,http://arxiv.org/pdf/2309.11206v2.pdf,2023-09-20,"['cs.cl', 'cs.ai']",2309.11206v2.pdf,"  Despite their competitive performance on knowledge-intensive tasks, large
language models (LLMs) still have limitations in memorizing all world knowledge,
especially long-tail knowledge. In this paper, we study the KG-augmented
language model approach for solving the knowledge graph question answering
(KGQA) task that requires rich world knowledge. Existing work has shown that
retrieving KG knowledge to enhance LLM prompting can significantly improve
LLM performance in KGQA. However, their approaches lack a well-formed
verbalization of KG knowledge, i.e., they ignore the gap between KG
representations and textual representations. To this end, we propose an
answer-sensitive KG-to-Text approach that can transform KG knowledge into
well-textualized statements most informative for KGQA. Based on this approach,
we propose a KG-to-Text enhanced LLMs framework for solving the KGQA task.
Experiments on several KGQA benchmarks show that the proposed KG-to-Text
augmented LLMs approach outperforms previous KG-augmented LLMs approaches
regarding answer accuracy and usefulness of knowledge statements.
"
LPML: LLM-Prompting Markup Language for Mathematical Reasoning,Ryutaro Yamauchi,http://arxiv.org/pdf/2309.13078v2.pdf,2023-09-21,"['cs.ai', 'cs.lg', 'cs.pl']",2309.13078v2.pdf,"  In utilizing large language models (LLMs) for mathematical reasoning,
addressing the reasoning and calculation errors present in the text
generated by LLMs is a crucial challenge. In this paper, we propose a novel
framework that integrates the Chain-of-Thought (CoT) method with an external
tool (Python REPL). We discovered that by prompting LLMs to generate structured
text in XML-like markup language, we could seamlessly integrate CoT and the
external tool and control the undesired behaviors of LLMs. With our approach,
LLMs can utilize Python computation to rectify errors within CoT. We applied
our method to ChatGPT (GPT-3.5) to solve challenging mathematical problems and
demonstrated that combining CoT and Python REPL through the markup language
enhances the reasoning capability of LLMs. Our approach enables LLMs to write
the markup language and perform advanced mathematical reasoning using only
zero-shot prompting.
"
HeaP: Hierarchical Policies for Web Actions using LLMs,Paloma Sodhi,http://arxiv.org/pdf/2310.03720v1.pdf,2023-10-05,['cs.lg'],2310.03720v1.pdf,"  Large language models (LLMs) have demonstrated remarkable capabilities in
performing a range of instruction following tasks in few and zero-shot
settings. However, teaching LLMs to perform tasks on the web presents
fundamental challenges -- combinatorially large open-world tasks and variations
across web interfaces. We tackle these challenges by leveraging LLMs to
decompose web tasks into a collection of sub-tasks, each of which can be solved
by a low-level, closed-loop policy. These policies constitute a shared grammar
across tasks, i.e., new web tasks can be expressed as a composition of these
policies. We propose a novel framework, Hierarchical Policies for Web Actions
using LLMs (HeaP), that learns a set of hierarchical LLM prompts from
demonstrations for planning high-level tasks and executing them via a sequence
of low-level policies. We evaluate HeaP against a range of baselines on a suite
of web tasks, including MiniWoB++, WebArena, a mock airline CRM, as well as
live website interactions, and show that it is able to outperform prior works
using orders of magnitude less data.
"
OptiMUS: Optimization Modeling Using MIP Solvers and large language  models,Ali AhmadiTeshnizi,http://arxiv.org/pdf/2310.06116v2.pdf,2023-10-09,['cs.ai'],2310.06116v2.pdf,"  Optimization problems are pervasive across various sectors, from
manufacturing and distribution to healthcare. However, most such problems are
still solved heuristically by hand rather than optimally by state-of-the-art
solvers, as the expertise required to formulate and solve these problems limits
the widespread adoption of optimization tools and techniques. We introduce
OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and
solve MILP problems from their natural language descriptions. OptiMUS is
capable of developing mathematical models, writing and debugging solver code,
developing tests, and checking the validity of generated solutions. To
benchmark our agent, we present NLP4LP, a novel dataset of linear programming
(LP) and mixed integer linear programming (MILP) problems. Our experiments
demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM
prompting strategy. OptiMUS code and NLP4LP dataset are available at
\href{https://github.com/teshnizi/OptiMUS}{https://github.com/teshnizi/OptiMUS}
"
A ML-LLM pairing for better code comment classification,Hanna Abi Akl,http://arxiv.org/pdf/2310.10275v1.pdf,2023-10-13,"['cs.se', 'cs.ai']",2310.10275v1.pdf,"  The ""Information Retrieval in Software Engineering (IRSE)"" shared task at FIRE 2023
introduces code comment classification, a challenging task that
pairs a code snippet with a comment that should be evaluated as either useful
or not useful to the understanding of the relevant code. We answer the code
comment classification shared task challenge by providing a two-fold
evaluation: from an algorithmic perspective, we compare the performance of
classical machine learning systems and complement our evaluations from a
data-driven perspective by generating additional data with the help of large
language model (LLM) prompting to measure the potential increase in
performance. Our best model, which took second place in the shared task, is a
Neural Network with a Macro-F1 score of 88.401% on the provided seed data and a
1.5% overall increase in performance on the data generated by the LLM.
"
Multi-stage Large Language Model Correction for Speech Recognition,Jie Pu,http://arxiv.org/pdf/2310.11532v1.pdf,2023-10-17,"['cs.cl', 'eess.as']",2310.11532v1.pdf,"  In this paper, we investigate the usage of large language models (LLMs) to
improve the performance of competitive speech recognition systems. Different
from traditional language models that focus on one single data domain, the rise
of LLMs brings us the opportunity to push the limit of state-of-the-art ASR
performance, and at the same time to achieve higher robustness and generalize
effectively across multiple domains. Motivated by this, we propose a novel
multi-stage approach to combine traditional language model re-scoring and LLM
prompting. Specifically, the proposed method has two stages: the first stage
uses a language model to re-score an N-best list of ASR hypotheses and run a
confidence check; the second stage prompts an LLM to perform ASR error
correction on less confident results from the first stage. Our experimental
results demonstrate the effectiveness of the proposed method by showing a 10% ~
20% relative improvement in WER over a competitive ASR system -- across
multiple test domains.
"
PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers'  Workflows,Savvas Petridis,http://arxiv.org/pdf/2310.15435v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15435v1.pdf,"  Prototyping AI applications is notoriously difficult. While large language
model (LLM) prompting has dramatically lowered the barriers to AI prototyping,
designers are still prototyping AI functionality and UI separately. We
investigate how coupling prompt and UI design affects designers' workflows.
Grounding this research, we developed PromptInfuser, a Figma plugin that
enables users to create semi-functional mockups, by connecting UI elements to
the inputs and outputs of prompts. In a study with 14 designers, we compare
PromptInfuser to designers' current AI-prototyping workflow. PromptInfuser was
perceived to be significantly more useful for communicating product ideas, more
capable of producing prototypes that realistically represent the envisioned
artifact, more efficient for prototyping, and more helpful for anticipating UI
issues and technical constraints. PromptInfuser encouraged iteration over
prompt and UI together, which helped designers identify UI and prompt
incompatibilities and reflect upon their total solution. Together, these
findings inform future systems for prototyping AI applications.
"
OmniFill: Domain-Agnostic Form Filling Suggestions Using Multi-Faceted  Context,Timothy J. Aveni,http://arxiv.org/pdf/2310.17826v1.pdf,2023-10-27,['cs.hc'],2310.17826v1.pdf,"  Predictive suggestion systems offer contextually-relevant text entry
completions. Existing approaches, like autofill, often excel in
narrowly-defined domains but fail to generalize to arbitrary workflows. We
introduce a conceptual framework to analyze the compound demands of a
particular suggestion context, yielding unique opportunities for large language
models (LLMs) to infer suggestions for a wide range of domain-agnostic
form-filling tasks that were out of reach with prior approaches. We explore
these opportunities in OmniFill, a prototype that collects multi-faceted
context including browsing and text entry activity to construct an LLM prompt
that offers suggestions in situ for arbitrary structured text entry interfaces.
Through a user study with 18 participants, we found that OmniFill offered
valuable suggestions and we identified four themes that characterize users'
behavior and attitudes: an ""opportunistic scrapbooking"" approach; a trust
placed in the system; value in partial success; and a need for visibility into
prompt context.
"
Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data  Generation with Large Language Models,Ran Xu,http://arxiv.org/pdf/2311.00287v1.pdf,2023-11-01,"['cs.cl', 'cs.ai', 'cs.lg', 'q-bio.qm']",2311.00287v1.pdf,"  Clinical natural language processing requires methods that can address
domain-specific challenges, such as complex medical terminology and clinical
contexts. Recently, large language models (LLMs) have shown promise in this
domain. Yet, their direct deployment can lead to privacy issues and is
constrained by resources. To address this challenge, we delve into synthetic
clinical text generation using LLMs for clinical NLP tasks. We propose an
innovative, resource-efficient approach, ClinGen, which infuses knowledge into
the process. Our model involves clinical knowledge extraction and
context-informed LLM prompting. Both clinical topics and writing styles are
drawn from external domain-specific knowledge graphs and LLMs to guide data
generation. Our extensive empirical study across 7 clinical NLP tasks and 16
datasets reveals that ClinGen consistently enhances performance across various
tasks, effectively aligning the distribution of real datasets and significantly
enriching the diversity of generated training instances. We will publish our
code and all the generated data in \url{https://github.com/ritaranx/ClinGen}.
"
Promptagator: Few-shot Dense Retrieval From 8 Examples,Zhuyun Dai,http://arxiv.org/pdf/2209.11755v1.pdf,2022-09-23,"['cs.cl', 'cs.ir']",2209.11755v1.pdf,"  Much recent research on information retrieval has focused on how to transfer
from one task (typically with abundant supervised data) to various other tasks
where supervision is limited, with the implicit assumption that it is possible
to generalize from one task to all the rest. However, this overlooks the fact
that there are many diverse and unique retrieval tasks, each targeting
different search intents, queries, and search domains. In this paper, we
propose to work on Few-shot Dense Retrieval, a setting where each task comes
with a short description and a few examples. To amplify the power of a few
examples, we propose Prompt-based Query Generation for Retriever (Promptagator),
which leverages large language models (LLM) as a few-shot query generator, and
creates task-specific retrievers based on the generated data. Powered by LLM's
generalization ability, Promptagator makes it possible to create task-specific
end-to-end retrievers solely based on a few examples without using Natural
Questions or MS MARCO to train question generators or dual encoders.
Surprisingly, LLM prompting with no more than 8 examples allows dual encoders
to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by
more than 1.2 nDCG on average on 11 retrieval sets. Further training
standard-size re-rankers using the same generated data yields another 5.0 point
nDCG improvement. Our studies determine that query generation can be far more
effective than previously observed, especially when a small amount of
task-specific knowledge is given.
"
Check Your Facts and Try Again: Improving Large Language Models with  External Knowledge and Automated Feedback,Baolin Peng,http://arxiv.org/pdf/2302.12813v3.pdf,2023-02-24,"['cs.cl', 'cs.ai']",2302.12813v3.pdf,"  Large language models (LLMs), such as ChatGPT, are able to generate
human-like, fluent responses for many downstream tasks, e.g., task-oriented
dialog and question answering. However, applying LLMs to real-world,
mission-critical applications remains challenging mainly due to their tendency
to generate hallucinations and their inability to use external knowledge. This
paper proposes an LLM-Augmenter system, which augments a black-box LLM with a
set of plug-and-play modules. Our system makes the LLM generate responses
grounded in external knowledge, e.g., stored in task-specific databases. It
also iteratively revises LLM prompts to improve model responses using feedback
generated by utility functions, e.g., the factuality score of an LLM-generated
response. The effectiveness of LLM-Augmenter is empirically validated on two
types of scenarios, task-oriented dialog and open-domain question answering.
LLM-Augmenter significantly reduces ChatGPT's hallucinations without
sacrificing the fluency and informativeness of its responses. We make the
source code and models publicly available.
"
AlpacaFarm: A Simulation Framework for Methods that Learn from Human  Feedback,Yann Dubois,http://arxiv.org/pdf/2305.14387v2.pdf,2023-05-22,"['cs.lg', 'cs.ai', 'cs.cl']",2305.14387v2.pdf,"  Large language models (LLMs) such as ChatGPT have seen widespread adoption
due to their ability to follow user instructions well. Developing these LLMs
involves a complex yet poorly understood workflow requiring training with human
feedback. Replicating and understanding this instruction-following process
faces three major challenges: the high cost of data collection, the lack of
trustworthy evaluation, and the absence of reference method implementations. We
address these challenges with AlpacaFarm, a simulator that enables research and
development for learning from feedback at a low cost. First, we design LLM
prompts to simulate human feedback that are 45x cheaper than crowdworkers and
display high agreement with humans. Second, we propose an automatic evaluation
and validate it against human instructions obtained from real-world interactions.
Third, we contribute reference implementations for several methods (PPO,
best-of-n, expert iteration, and more) that learn from pairwise feedback.
Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate
eleven models on 10k pairs of real human feedback and show that rankings of
models trained in AlpacaFarm match rankings of models trained on human data. As
a demonstration of the research possible in AlpacaFarm, we find that methods
that use a reward model can substantially improve over supervised fine-tuning
and that our reference PPO implementation leads to a +10% improvement in
win-rate against Davinci003. We release all components of AlpacaFarm at
https://github.com/tatsu-lab/alpaca_farm.
"
MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties  Grounded in Math Reasoning Problems,Jakub Macina,http://arxiv.org/pdf/2305.14536v2.pdf,2023-05-23,['cs.cl'],2305.14536v2.pdf,"  While automatic dialogue tutors hold great potential in making education
personalized and more accessible, research on such systems has been hampered by
a lack of sufficiently large and high-quality datasets. Collecting such
datasets remains challenging, as recording tutoring sessions raises privacy
concerns and crowdsourcing leads to insufficient data quality. To address this,
we propose a framework to generate such dialogues by pairing human teachers
with a Large Language Model (LLM) prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k
one-to-one teacher-student tutoring dialogues grounded in multi-step math
reasoning problems. While models like GPT-3 are good problem solvers, they fail
at tutoring because they generate factually incorrect feedback or are prone to
revealing solutions to students too early. To overcome this, we let teachers
provide learning opportunities to students by guiding them using various
scaffolding questions according to a taxonomy of teacher moves. We demonstrate
MathDial and its extensive annotations can be used to finetune models to be
more effective tutors (and not just solvers). We confirm this by automatic and
human evaluation, notably in an interactive setting that measures the trade-off
between student solving success and telling solutions. The dataset is released
publicly.
"
SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and  Reasoning,Yue Wu,http://arxiv.org/pdf/2305.15486v2.pdf,2023-05-24,"['cs.ai', 'cs.lg']",2305.15486v2.pdf,"  Open-world survival games pose significant challenges for AI algorithms due
to their multi-tasking, deep exploration, and goal prioritization requirements.
Despite reinforcement learning (RL) being popular for solving games, its high
sample complexity limits its effectiveness in complex open-world games like
Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's
original academic paper and use the knowledge learned to reason and play the
game through a large language model (LLM). Prompted with the LaTeX source as
game context and a description of the agent's current observation, our SPRING
framework employs a directed acyclic graph (DAG) with game-related questions as
nodes and dependencies as edges. We identify the optimal action to take in the
environment by traversing the DAG and calculating LLM responses for each node
in topological order, with the LLM's answer to the final node directly translating
to environment actions. In our experiments, we study the quality of in-context
""reasoning"" induced by different forms of prompts under the setting of the
Crafter open-world environment. Our experiments suggest that LLMs, when
prompted with consistent chain-of-thought, have great potential in completing
sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4
outperforms all state-of-the-art RL baselines, trained for 1M steps, without
any training. Finally, we show the potential of games as a test bed for LLMs.
"
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for  Large Language Models,Haonan Duan,http://arxiv.org/pdf/2305.15594v1.pdf,2023-05-24,"['cs.lg', 'cs.cl', 'cs.cr']",2305.15594v1.pdf,"  Large language models (LLMs) are excellent in-context learners. However, the
sensitivity of data contained in prompts raises privacy concerns. Our work
first shows that these concerns are valid: we instantiate a simple but highly
effective membership inference attack against the data used to prompt LLMs. To
address this vulnerability, one could forego prompting and resort to
fine-tuning LLMs with known algorithms for private gradient descent. However,
this comes at the expense of the practicality and efficiency offered by
prompting. Therefore, we propose to privately learn to prompt. We first show
that soft prompts can be obtained privately through gradient descent on
downstream data. However, this is not the case for discrete prompts. Thus, we
orchestrate a noisy vote among an ensemble of LLMs presented with different
prompts, i.e., a flock of stochastic parrots. The vote privately transfers the
flock's knowledge into a single public prompt. We show that LLMs prompted with
our private algorithms closely match the non-private baselines. For example,
using GPT3 as the base model, we achieve a downstream accuracy of 92.7% on the
sst2 dataset with ($\epsilon=0.147, \delta=10^{-6}$)-differential privacy vs.
95.2% for the non-private baseline. Through our experiments, we also show that
our prompt-based approach is easily deployed with existing commercial APIs.
"
Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction,Salvatore Carta,http://arxiv.org/pdf/2307.01128v1.pdf,2023-07-03,"['cs.cl', 'cs.ai']",2307.01128v1.pdf,"  In the current digitalization era, capturing and effectively representing
knowledge is crucial in most real-world scenarios. In this context, knowledge
graphs represent a potent tool for retrieving and organizing a vast amount of
information in a properly interconnected and interpretable structure. However,
their generation is still challenging and often requires considerable human
effort and domain expertise, hampering the scalability and flexibility across
different application fields. This paper proposes an innovative knowledge graph
generation approach that leverages the potential of the latest generative large
language models, such as GPT-3.5, that can address all the main critical issues
in knowledge graph building. The approach is conveyed in a pipeline that
comprises novel iterative zero-shot and external knowledge-agnostic strategies
in the main stages of the generation process. Our multi-faceted approach may
bring significant benefits to the scientific community. In particular, the
main contribution can be summarized by: (i) an innovative strategy for
iteratively prompting large language models to extract relevant components of
the final graph; (ii) a zero-shot strategy for each prompt, meaning that there
is no need for providing examples for ""guiding"" the prompt result; (iii) a
scalable solution, as the adoption of LLMs avoids the need for any external
resources or human expertise. To assess the effectiveness of our proposed
model, we performed experiments on a dataset that covered a specific domain. We
claim that our proposal is a suitable solution for scalable and versatile
knowledge graph construction and may be applied to different and novel
contexts.
"
PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine,Chenrui Zhang,http://arxiv.org/pdf/2308.12033v1.pdf,2023-08-23,"['cs.cl', 'cs.ai']",2308.12033v1.pdf,"  As an effective tool for eliciting the power of Large Language Models (LLMs),
prompting has recently demonstrated unprecedented abilities across a variety of
complex tasks. To further improve the performance, prompt ensemble has
attracted substantial interest for tackling the hallucination and instability
of LLMs. However, existing methods usually adopt a two-stage paradigm, which
requires a pre-prepared set of prompts with substantial manual effort, and is
unable to perform directed optimization for different weak learners. In this
paper, we propose a simple, universal, and automatic method named PREFER (Prompt
Ensemble learning via Feedback-Reflect-Refine) to address the stated
limitations. Specifically, given the fact that weak learners are supposed to
focus on hard examples during boosting, PREFER builds a feedback mechanism for
reflecting on the inadequacies of existing weak learners. Based on this, the
LLM is required to automatically synthesize new prompts for iterative
refinement. Moreover, to enhance the stability of prompt effect evaluation, we
propose a novel prompt bagging method involving forward and backward thinking,
which is superior to majority voting and is beneficial for both feedback and
weight calculation in boosting. Extensive experiments demonstrate that our
PREFER achieves state-of-the-art performance in multiple types of tasks by a
significant margin. We have made our code publicly available.
"
ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI  Co-Writing Tasks using Large Language Models,Mohi Reza,http://arxiv.org/pdf/2310.00117v2.pdf,2023-09-29,"['cs.hc', 'cs.ai', 'cs.lg']",2310.00117v2.pdf,"  Exploring alternative ideas by rewriting text is integral to the writing
process. State-of-the-art large language models (LLMs) can simplify writing
variation generation. However, current interfaces pose challenges for
simultaneous consideration of multiple variations: creating new versions
without overwriting text can be difficult, and pasting them sequentially can
clutter documents, increasing workload and disrupting writers' flow. To tackle
this, we present ABScribe, an interface that supports rapid, yet visually
structured, exploration of writing variations in human-AI co-writing tasks.
With ABScribe, users can swiftly produce multiple variations using LLM prompts,
which are auto-converted into reusable buttons. Variations are stored
adjacently within text segments for rapid in-place comparisons using mouse-over
interactions on a context toolbar. Our user study with 12 writers shows that
ABScribe significantly reduces task workload (d = 1.20, p < 0.001), enhances
user perceptions of the revision process (d = 2.41, p < 0.001) compared to a
popular baseline workflow, and provides insights into how writers explore
variations using LLMs.
"
Knowledge Crosswords: Geometric Reasoning over Structured Knowledge with  Large Language Models,Wenxuan Ding,http://arxiv.org/pdf/2310.01290v1.pdf,2023-10-02,"['cs.cl', 'cs.ai']",2310.01290v1.pdf,"  Large language models (LLMs) are widely adopted in knowledge-intensive tasks
and have achieved impressive performance thanks to their knowledge abilities.
While LLMs have demonstrated outstanding performance on atomic or linear
(multi-hop) QA tasks, whether they can reason in knowledge-rich scenarios with
interweaving constraints remains an underexplored problem. In this work, we
propose geometric reasoning over structured knowledge, where pieces of
knowledge are connected in a graph structure and models need to fill in the
missing information. Such geometric knowledge reasoning would require the
ability to handle structured knowledge, reason with uncertainty, verify facts,
and backtrack when an error occurs. We propose Knowledge Crosswords, a
multi-blank QA dataset where each problem consists of a natural language
question representing the geometric constraints of an incomplete entity
network, where LLMs are tasked with working out the missing entities while
meeting all factual constraints. Knowledge Crosswords contains 2,101 individual
problems, covering various knowledge domains and further divided into three
difficulty levels. We conduct extensive experiments to evaluate existing LLM
prompting approaches on the Knowledge Crosswords benchmark. We additionally
propose two new approaches, Staged Prompting and Verify-All, to augment LLMs'
ability to backtrack and verify structured constraints. Our results demonstrate
that while baseline approaches perform well on easier problems, they struggle
with hard ones; our proposed Verify-All outperforms other methods by a large
margin and is more robust on hard problems. Further analysis reveals that
LLMs' ability of geometric reasoning over structured knowledge is still far
from robust or perfect, susceptible to confounders such as the order of
options, certain structural patterns, the assumption that a correct answer
exists, and more.
"
Retrieval-augmented Generation to Improve Math Question-Answering:  Trade-offs Between Groundedness and Human Preference,Zachary Levonian,http://arxiv.org/pdf/2310.03184v1.pdf,2023-10-04,"['cs.cl', 'cs.hc']",2310.03184v1.pdf,"  For middle-school math students, interactive question-answering (QA) with
tutors is an effective way to learn. The flexibility and emergent capabilities
of generative large language models (LLMs) have led to a surge of interest in
automating portions of the tutoring process - including interactive QA to
support conceptual discussion of mathematical concepts. However, LLM responses
to math questions can be incorrect or mismatched to the educational context -
such as being misaligned with a school's curriculum. One potential solution is
retrieval-augmented generation (RAG), which involves incorporating a vetted
external knowledge source in the LLM prompt to increase response quality. In
this paper, we designed prompts that retrieve and use content from a
high-quality open-source math textbook to generate responses to real student
questions. We evaluate the efficacy of this RAG system for middle-school
algebra and geometry QA by administering a multi-condition survey, finding that
humans prefer responses generated using RAG, but not when responses are too
grounded in the textbook content. We argue that while RAG is able to improve
response quality, designers of math QA systems must consider trade-offs between
generating responses preferred by students and responses closely matched to
specific educational resources.
"
Small Language Models Fine-tuned to Coordinate Larger Language Models  improve Complex Reasoning,Gurusha Juneja,http://arxiv.org/pdf/2310.18338v1.pdf,2023-10-21,"['cs.cl', 'cs.ai']",2310.18338v1.pdf,"  Large Language Models (LLMs) prompted to generate chain-of-thought (CoT)
exhibit impressive reasoning capabilities. Recent attempts at prompt
decomposition toward solving complex, multi-step reasoning problems depend on
the ability of the LLM to simultaneously decompose and solve the problem. A
significant disadvantage is that foundational LLMs are typically not available
for fine-tuning, making adaptation computationally prohibitive. We believe (and
demonstrate) that problem decomposition and solution generation are distinct
capabilities, better addressed by separate modules than by one monolithic LLM.
We introduce DaSLaM, which uses a decomposition generator to decompose complex
problems into subproblems that require fewer reasoning steps. These subproblems
are answered by a solver. We use a relatively small (13B parameters) LM as the
decomposition generator, which we train using policy gradient optimization to
interact with a solver LM (regarded as black-box) and guide it through
subproblems, thereby rendering our method solver-agnostic. Evaluation on
multiple different reasoning datasets reveals that with our method, a 175
billion parameter LM (text-davinci-003) can produce competitive or even better
performance, compared to its orders-of-magnitude larger successor, GPT-4.
Additionally, we show that DaSLaM is not limited by the solver's capabilities
as a function of scale; e.g., solver LMs with diverse sizes give significant
performance improvement with our solver-agnostic decomposition technique.
Exhaustive ablation studies evince the superiority of our modular finetuning
technique over exorbitantly large decomposer LLMs, based on prompting alone.
"
Universal Fuzzing via Large Language Models,Chunqiu Steven Xia,http://arxiv.org/pdf/2308.04748v1.pdf,2023-08-09,"['cs.se', 'cs.lg']",2308.04748v1.pdf,"  Fuzzing has achieved tremendous success in discovering bugs and
vulnerabilities in various software systems. Systems under test (SUTs) that
take in programming or formal language as inputs, e.g., compilers, runtime
engines, constraint solvers, and software libraries with accessible APIs, are
especially important as they are fundamental building blocks of software
development. However, existing fuzzers for such systems often target a specific
language, and thus cannot be easily applied to other languages or even other
versions of the same language. Moreover, the inputs generated by existing
fuzzers are often limited to specific features of the input language, and thus
can hardly reveal bugs related to other or new features. This paper presents
Fuzz4All, the first fuzzer that is universal in the sense that it can target
many different input languages and many different features of these languages.
The key idea behind Fuzz4All is to leverage large language models (LLMs) as an
input generation and mutation engine, which enables the approach to produce
diverse and realistic inputs for any practically relevant language. To realize
this potential, we present a novel autoprompting technique, which creates LLM
prompts that are well-suited for fuzzing, and a novel LLM-powered fuzzing loop,
which iteratively updates the prompt to create new fuzzing inputs. We evaluate
Fuzz4All on nine systems under test that take in six different languages (C,
C++, Go, SMT2, Java and Python) as inputs. The evaluation shows, across all six
languages, that universal fuzzing achieves higher coverage than existing,
language-specific fuzzers. Furthermore, Fuzz4All has identified 76 bugs in
widely used systems, such as GCC, Clang, Z3, CVC5, OpenJDK, and the Qiskit
quantum computing platform, with 47 bugs already confirmed by developers as
previously unknown.
"
AI Chains: Transparent and Controllable Human-AI Interaction by Chaining  Large Language Model Prompts,Tongshuang Wu,http://arxiv.org/pdf/2110.01691v3.pdf,2021-10-04,"['cs.hc', 'cs.cl']",2110.01691v3.pdf,"  Although large language models (LLMs) have demonstrated impressive potential
on simple tasks, their breadth of scope, lack of transparency, and insufficient
controllability can make them less effective when assisting humans on more
complex tasks. In response, we introduce the concept of Chaining LLM steps
together, where the output of one step becomes the input for the next, thus
aggregating the gains per step. We first define a set of LLM primitive
operations useful for Chain construction, then present an interactive system
where users can modify these Chains, along with their intermediate results, in
a modular way. In a 20-person user study, we found that Chaining not only
improved the quality of task outcomes, but also significantly enhanced system
transparency, controllability, and sense of collaboration. Additionally, we saw
that users developed new ways of interacting with LLMs through Chains: they
leveraged sub-tasks to calibrate model expectations, compared and contrasted
alternative strategies by observing parallel downstream effects, and debugged
unexpected model outputs by ""unit-testing"" sub-components of a Chain. In two
case studies, we further explore how LLM Chains may be used in future
applications.
"
PromptChainer: Chaining Large Language Model Prompts through Visual  Programming,Tongshuang Wu,http://arxiv.org/pdf/2203.06566v1.pdf,2022-03-13,['cs.hc'],2203.06566v1.pdf,"  While LLMs can effectively help prototype single ML functionalities, many
real-world applications involve complex tasks that cannot be easily handled via
a single run of an LLM. Recent work has found that chaining multiple LLM runs
together (with the output of one step being the input to the next) can help
users accomplish these more complex tasks, and in a way that is perceived to be
more transparent and controllable. However, it remains unknown what users need
when authoring their own LLM chains -- a key step for lowering the barriers for
non-AI-experts to prototype AI-infused applications. In this work, we explore
the LLM chain authoring process. From pilot studies, we find that
chaining requires careful scaffolding for transforming intermediate node
outputs, as well as debugging the chain at multiple granularities; to help with
these needs, we designed PromptChainer, an interactive interface for visually
programming chains. Through case studies with four people, we show that
PromptChainer supports building prototypes for a range of applications, and
conclude with open questions on scaling chains to complex tasks, and supporting
low-fi chain prototyping.
"
Few-shot Reranking for Multi-hop QA via Language Model Prompting,Muhammad Khalifa,http://arxiv.org/pdf/2205.12650v3.pdf,2022-05-25,"['cs.cl', 'cs.ir']",2205.12650v3.pdf,"  We study few-shot reranking for multi-hop QA with open-domain questions. To
alleviate the need for a large number of labeled question-document pairs for
retriever training, we propose PromptRank, which relies on large language
model prompting for multi-hop path reranking. PromptRank first constructs an
instruction-based prompt that includes a candidate document path and then
computes the relevance score between a given question and the path based on the
conditional likelihood of the question given the path prompt according to a
language model. PromptRank yields strong retrieval performance on HotpotQA with
only 128 training examples compared to state-of-the-art methods trained on
thousands of examples -- 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever
and 77.5 by multi-hop dense retrieval. Code available at
https://github.com/mukhal/PromptRank
"
Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency,Lingfeng Shen,http://arxiv.org/pdf/2305.10713v2.pdf,2023-05-18,"['cs.cl', 'cs.lg']",2305.10713v2.pdf,"  With growing capabilities of large language models, prompting them has become
the dominant way to access them. This has motivated the development of
strategies for automatically selecting effective language prompts. In this
paper, we introduce prompt flatness, a new metric to quantify the expected
utility of a language prompt. This metric is inspired by flatness
regularization in statistical learning that quantifies the robustness of the
model towards its parameter perturbations. We provide theoretical foundations
for this metric and its relationship with other prompt selection metrics,
providing a comprehensive understanding of existing methods. Empirically, we
show that combining prompt flatness with existing metrics improves both
performance and sample efficiency. Our metric outperforms the previous prompt
selection metrics with an average increase of 5% in accuracy and 10% in Pearson
correlation across 6 classification benchmarks.
"
A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event  Extraction,Erica Cai,http://arxiv.org/pdf/2305.15051v1.pdf,2023-05-24,['cs.cl'],2305.15051v1.pdf,"  We consider dyadic zero-shot event extraction (EE) to identify actions
between pairs of actors. The \emph{zero-shot} setting allows social scientists
or other non-computational researchers to extract any customized,
user-specified set of events without training, resulting in a \emph{dyadic}
event database, allowing insight into sociopolitical relational dynamics among
actors and the higher level organizations or countries they represent.
Unfortunately, we find that current zero-shot EE methods perform poorly for the
task, with issues including word sense ambiguity, modality mismatch, and
efficiency. Straightforward application of large language model prompting
typically performs even worse. We address these challenges with a new
fine-grained, multi-stage generative question-answer method, using a Monte
Carlo approach to exploit and overcome the randomness of generative outputs. It
performs 90\% fewer queries than a previous approach, with strong performance
on the widely-used Automatic Content Extraction dataset. Finally, we extend our
method to extract affiliations of actor arguments and demonstrate our method
and findings on a dyadic international relations case study.
"
EvalLM: Interactive Evaluation of Large Language Model Prompts on  User-Defined Criteria,Tae Soo Kim,http://arxiv.org/pdf/2309.13633v1.pdf,2023-09-24,"['cs.hc', 'cs.ai', 'cs.cl']",2309.13633v1.pdf,"  By simply composing prompts, developers can prototype novel generative
applications with Large Language Models (LLMs). To refine prototypes into
products, however, developers must iteratively revise prompts by evaluating
outputs to diagnose weaknesses. Formative interviews (N=8) revealed that
developers invest significant effort in manually evaluating outputs as they
assess context-specific and subjective criteria. We present EvalLM, an
interactive system for iteratively refining prompts by evaluating multiple
outputs on user-defined criteria. By describing criteria in natural language,
users can employ the system's LLM-based evaluator to get an overview of where
prompts excel or fail, and improve these based on the evaluator's feedback. A
comparative study (N=12) showed that EvalLM, when compared to manual
evaluation, helped participants compose more diverse criteria, examine twice as
many outputs, and reach satisfactory prompts with 59% fewer revisions. Beyond
prompts, our work can be extended to augment model evaluation and alignment in
specific application contexts.
"
Terminology-Aware Translation with Constrained Decoding and Large  Language Model Prompting,Nikolay Bogoychev,http://arxiv.org/pdf/2310.05824v1.pdf,2023-10-09,['cs.cl'],2310.05824v1.pdf,"  Terminology correctness is important in the downstream application of machine
translation, and a prevalent way to ensure this is to inject terminology
constraints into a translation system. In our submission to the WMT 2023
terminology translation task, we adopt a translate-then-refine approach which
can be domain-independent and requires minimal manual effort. We annotate
random source words with pseudo-terminology translations obtained from word
alignment to first train a terminology-aware model. Further, we explore two
post-processing methods. First, we use an alignment process to discover whether
a terminology constraint has been violated, and if so, we re-decode with the
violating word negatively constrained. Alternatively, we leverage a large
language model to refine a hypothesis by providing it with terminology
constraints. Results show that our terminology-aware model learns to
incorporate terminologies effectively, and the large language model refinement
process can further improve terminology recall.
"
Prompter: Utilizing Large Language Model Prompting for a Data Efficient  Embodied Instruction Following,Yuki Inoue,http://arxiv.org/pdf/2211.03267v1.pdf,2022-11-07,"['cs.ro', 'cs.cv']",2211.03267v1.pdf,"  Embodied Instruction Following (EIF) studies how mobile manipulator robots
should be controlled to accomplish long-horizon tasks specified by natural
language instructions. While most research on EIF is conducted in simulators,
the ultimate goal of the field is to deploy the agents in real life. As such,
it is important to minimize the data cost required for training an agent, to
help the transition from sim to real. However, many studies only focus on the
performance and overlook the data cost -- modules that require separate
training on extra data are often introduced without a consideration on
deployability. In this work, we propose FILM++ which extends the existing work
FILM with modifications that do not require extra data. While all data-driven
modules are kept constant, FILM++ more than doubles FILM's performance.
Furthermore, we propose Prompter, which replaces FILM++'s semantic search
module with language model prompting. Unlike FILM++'s implementation that
requires training on extra sets of data, no training is needed for our
prompting based implementation while achieving better or at least comparable
performance. Prompter achieves 42.64% and 45.72% on the ALFRED benchmark with
high-level instructions only and with step-by-step instructions, respectively,
outperforming the previous state of the art by 6.57% and 10.31%.
"
FIRE: Food Image to REcipe generation,Prateek Chhikara,http://arxiv.org/pdf/2308.14391v1.pdf,2023-08-28,"['cs.cv', 'cs.cl']",2308.14391v1.pdf,"  Food computing has emerged as a prominent multidisciplinary field of research
in recent years. An ambitious goal of food computing is to develop end-to-end
intelligent systems capable of autonomously producing recipe information for a
food image. Current image-to-recipe methods are retrieval-based and their
success depends heavily on the dataset size and diversity, as well as the
quality of learned embeddings. Meanwhile, the emergence of powerful
attention-based vision and language models presents a promising avenue for
accurate and generalizable recipe generation, which has yet to be extensively
explored. This paper proposes FIRE, a novel multimodal methodology tailored to
recipe generation in the food computing domain, which generates the food title,
ingredients, and cooking instructions based on input food images. FIRE
leverages the BLIP model to generate titles, utilizes a Vision Transformer with
a decoder for ingredient extraction, and employs the T5 model to generate
recipes incorporating titles and ingredients as inputs. We showcase two
practical applications that can benefit from integrating FIRE with large
language model prompting: recipe customization to fit recipes to user
preferences and recipe-to-code transformation to enable automated cooking
processes. Our experimental findings validate the efficacy of our proposed
approach, underscoring its potential for future advancements and widespread
adoption in food computing.
"
Large language models can accurately predict searcher preferences,Paul Thomas,http://arxiv.org/pdf/2309.10621v1.pdf,2023-09-19,"['cs.ir', 'cs.ai', 'cs.cl', 'cs.lg']",2309.10621v1.pdf,"  Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
  We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers requires high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, at a fraction of the
cost, and these labels let us train notably better rankers.
"
Meta-in-context learning in large language models,Julian Coda-Forno,http://arxiv.org/pdf/2305.12907v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.12907v1.pdf,"  Large language models have shown tremendous performance in a variety of
tasks. In-context learning -- the ability to improve at a task after being
provided with a number of demonstrations -- is seen as one of the main
contributors to their success. In the present paper, we demonstrate that the
in-context learning abilities of large language models can be recursively
improved via in-context learning itself. We coin this phenomenon
meta-in-context learning. Looking at two idealized domains, a one-dimensional
regression task and a two-armed bandit task, we show that meta-in-context
learning adaptively reshapes a large language model's priors over expected
tasks. Furthermore, we find that meta-in-context learning modifies the
in-context learning strategies of such models. Finally, we extend our approach
to a benchmark of real-world regression problems where we observe competitive
performance to traditional learning algorithms. Taken together, our work
improves our understanding of in-context learning and paves the way toward
adapting large language models to the environment they are applied purely
through meta-in-context learning rather than traditional finetuning.
"
MetaVL: Transferring In-Context Learning Ability From Language Models to  Vision-Language Models,Masoud Monajatipoor,http://arxiv.org/pdf/2306.01311v1.pdf,2023-06-02,['cs.cl'],2306.01311v1.pdf,"  Large-scale language models have shown the ability to adapt to a new task via
conditioning on a few demonstrations (i.e., in-context learning). However, in
the vision-language domain, most large-scale pre-trained vision-language (VL)
models do not possess the ability to conduct in-context learning. How can we
enable in-context learning for VL models? In this paper, we study an
interesting hypothesis: can we transfer the in-context learning ability from
the language domain to the VL domain? Specifically, we first meta-train a language
model to perform in-context learning on NLP tasks (as in MetaICL); we then
transfer this model to VL tasks by attaching a visual encoder. Our
experiments suggest that indeed in-context learning ability can be transferred
cross modalities: our model considerably improves the in-context learning
capability on VL tasks and can even compensate for the size of the model
significantly. On VQA, OK-VQA, and GQA, our method could outperform the
baseline model while having 20 times fewer parameters.
"
A Theory of Emergent In-Context Learning as Implicit Structure Induction,Michael Hahn,http://arxiv.org/pdf/2303.07971v1.pdf,2023-03-14,"['cs.cl', 'cs.lg']",2303.07971v1.pdf,"  Scaling large language models (LLMs) leads to an emergent capacity to learn
in-context from example demonstrations. Despite progress, theoretical
understanding of this phenomenon remains limited. We argue that in-context
learning relies on recombination of compositional operations found in natural
language data. We derive an information-theoretic bound showing how in-context
learning abilities arise from generic next-token prediction when the
pretraining distribution has sufficient amounts of compositional structure,
under linguistically motivated assumptions. A second bound provides a
theoretical justification for the empirical success of prompting LLMs to output
intermediate steps towards an answer. To validate theoretical predictions, we
introduce a controlled setup for inducing in-context learning; unlike previous
approaches, it accounts for the compositional nature of language. Trained
transformers can perform in-context learning for a range of tasks, in a manner
consistent with the theoretical results. Mirroring real-world LLMs in a
miniature setup, in-context learning emerges when scaling parameters and data,
and models perform better when prompted to output intermediate steps. Probing
shows that in-context learning is supported by a representation of the input's
compositional structure. Taken together, these results provide a step towards
theoretical understanding of emergent behavior in large language models.
"
Fine-tune Language Models to Approximate Unbiased In-context Learning,Timothy Chu,http://arxiv.org/pdf/2310.03331v1.pdf,2023-10-05,['cs.lg'],2310.03331v1.pdf,"  In-context learning (ICL) is an astonishing emergent ability of large
language models (LLMs). By presenting a prompt that includes multiple
input-output pairs as examples and introducing a new query input, models can
generate the corresponding output. However, the performance of models heavily
relies on the quality of the input prompt when implementing in-context
learning. Biased or imbalanced input prompts can significantly degrade the
performance of language models. To address this issue, we introduce a
reweighted algorithm called RICL (Reweighted In-context Learning). This
algorithm fine-tunes language models using an unbiased validation set to
determine the optimal weight for each input-output example to approximate
unbiased in-context learning. Furthermore, we also introduce a low-cost
reweighted algorithm, a linear optimal weight approximation algorithm called
LARICL (Linear Approximation of Reweighted In-context Learning). This algorithm
requires minimal training cost while providing effective results. We prove the
convergence of our algorithm and validate its performance through experiments
conducted on a numerical dataset. The experimental findings reveal a
substantial improvement in comparison to benchmarks including the performance
of casual prompt-based in-context learning and the performance of a classic
fine-tuning method.
"
PRODIGY: Enabling In-context Learning Over Graphs,Qian Huang,http://arxiv.org/pdf/2305.12600v1.pdf,2023-05-21,"['cs.lg', 'cs.ai']",2305.12600v1.pdf,"  In-context learning is the ability of a pretrained model to adapt to novel
and diverse downstream tasks by conditioning on prompt examples, without
optimizing any parameters. While large language models have demonstrated this
ability, how in-context learning could be performed over graphs is unexplored.
In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse
\textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first
pretraining framework that enables in-context learning over graphs. The key
idea of our framework is to formulate in-context learning over graphs with a
novel \emph{prompt graph} representation, which connects prompt examples and
queries. We then propose a graph neural network architecture over the prompt
graph and a corresponding family of in-context pretraining objectives. With
PRODIGY, the pretrained model can directly perform novel downstream
classification tasks on unseen graphs via in-context learning. We provide
empirical evidence of the effectiveness of our framework by showcasing its
strong in-context learning performance on tasks involving citation networks and
knowledge graphs. Our approach outperforms the in-context learning accuracy of
contrastive pretraining baselines with hard-coded adaptation by 18\% on average
across all setups. Moreover, it also outperforms standard finetuning with
limited data by 33\% on average with in-context learning.
"
An Explanation of In-context Learning as Implicit Bayesian Inference,Sang Michael Xie,http://arxiv.org/pdf/2111.02080v6.pdf,2021-11-03,"['cs.cl', 'cs.lg']",2111.02080v6.pdf,"  Large language models (LMs) such as GPT-3 have the surprising ability to do
in-context learning, where the model learns to do a downstream task simply by
conditioning on a prompt consisting of input-output examples. The LM learns
from these examples without being explicitly pretrained to learn. Thus, it is
unclear what enables in-context learning. In this paper, we study how
in-context learning can emerge when pretraining documents have long-range
coherence. Here, the LM must infer a latent document-level concept to generate
coherent next tokens during pretraining. At test time, in-context learning
occurs when the LM also infers a shared latent concept between examples in a
prompt. We prove when this occurs despite a distribution mismatch between
prompts and pretraining data in a setting where the pretraining distribution is
a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs
capable of in-context learning, we generate a small-scale synthetic dataset
(GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond
the theory, experiments on GINC exhibit large-scale real-world phenomena
including improved in-context performance with model scaling (despite the same
pretraining loss), sensitivity to example order, and instances where zero-shot
is better than few-shot in-context learning.
"
Rethinking the Role of Scale for In-Context Learning: An  Interpretability-based Case Study at 66 Billion Scale,Hritik Bansal,http://arxiv.org/pdf/2212.09095v2.pdf,2022-12-18,"['cs.cl', 'cs.ai']",2212.09095v2.pdf,"  Language models have been shown to perform better with an increase in scale
on a wide variety of tasks via the in-context learning paradigm. In this paper,
we investigate the hypothesis that the ability of a large language model to
in-context learn (i.e., perform) a task is not uniformly spread across all of its
underlying components. Using a 66 billion parameter language model (OPT-66B)
across a diverse set of 14 downstream tasks, we find this is indeed the case:
$\sim$70% of attention heads and $\sim$20% of feed forward networks can be
removed with minimal decline in task performance. We find substantial overlap
in the set of attention heads (un)important for in-context learning across
tasks and number of in-context examples. We also address our hypothesis through
a task-agnostic lens, finding that a small set of attention heads in OPT-66B
score highly on their ability to perform primitive induction operations
associated with in-context learning, namely, prefix matching and copying. These
induction heads overlap with task-specific important heads, reinforcing
arguments by Olsson et al. (arXiv:2209.11895) regarding induction head
generality to more sophisticated behaviors associated with in-context learning.
Overall, our study provides several insights that indicate large language
models may be under-trained for in-context learning and opens up questions on
how to pre-train language models to more effectively perform in-context
learning.
"
A Closer Look at In-Context Learning under Distribution Shifts,Kartik Ahuja,http://arxiv.org/pdf/2305.16704v1.pdf,2023-05-26,"['cs.lg', 'stat.ml']",2305.16704v1.pdf,"  In-context learning, a capability that enables a model to learn from input
examples on the fly without necessitating weight updates, is a defining
characteristic of large language models. In this work, we follow the setting
proposed in (Garg et al., 2022) to better understand the generality and
limitations of in-context learning from the lens of the simple yet fundamental
task of linear regression. The key question we aim to address is: Are
transformers more adept than some natural and simpler architectures at
performing in-context learning under varying distribution shifts? To compare
transformers, we propose to use a simple architecture based on set-based
Multi-Layer Perceptrons (MLPs). We find that both transformers and set-based
MLPs exhibit in-context learning under in-distribution evaluations, but
transformers more closely emulate the performance of ordinary least squares
(OLS). Transformers also display better resilience to mild distribution shifts,
where set-based MLPs falter. However, under severe distribution shifts, both
models' in-context learning abilities diminish.
"
Exploring the Relationship Between Model Architecture and In-Context  Learning Ability,Ivan Lee,http://arxiv.org/pdf/2310.08049v1.pdf,2023-10-12,['cs.lg'],2310.08049v1.pdf,"  What is the relationship between model architecture and the ability to
perform in-context learning? In this empirical study, we take the first steps
towards answering this question. In particular, we evaluate fifteen model
architectures across a suite of synthetic in-context learning tasks. The
selected architectures represent a broad range of paradigms, including
recurrent and convolution-based neural networks, transformers, and emerging
attention alternatives. We discover that all considered architectures can
perform in-context learning under certain conditions. However, contemporary
architectures are found to be the best performing, especially as task
complexity grows. Additionally, our follow-up experiments delve into various
factors that influence in-context learning. We observe varied sensitivities
among architectures with respect to hyperparameter settings. Our study of
training dynamics reveals that certain architectures exhibit a smooth,
progressive learning trajectory, while others demonstrate periods of stagnation
followed by abrupt mastery of the task. Finally, and somewhat surprisingly, we
find that several emerging attention alternatives are more robust in-context
learners than transformers; since such approaches have constant-sized memory
footprints at inference time, this result opens the future possibility of
scaling up in-context learning to vastly larger numbers of in-context examples.
"
What Can Transformers Learn In-Context? A Case Study of Simple Function  Classes,Shivam Garg,http://arxiv.org/pdf/2208.01066v3.pdf,2022-08-01,"['cs.cl', 'cs.lg']",2208.01066v3.pdf,"  In-context learning refers to the ability of a model to condition on a prompt
sequence consisting of in-context examples (input-output pairs corresponding to
some task) along with a new query input, and generate the corresponding output.
Crucially, in-context learning happens only at inference time without any
parameter updates to the model. While large language models such as GPT-3
exhibit some ability to perform in-context learning, it is unclear what the
relationship is between tasks on which this succeeds and what is present in the
training data. To make progress towards understanding in-context learning, we
consider the well-defined problem of training a model to in-context learn a
function class (e.g., linear functions): that is, given data derived from some
functions in the class, can we train a model to in-context learn ""most""
functions from this class? We show empirically that standard Transformers can
be trained from scratch to perform in-context learning of linear functions --
that is, the trained model is able to learn unseen linear functions from
in-context examples with performance comparable to the optimal least squares
estimator. In fact, in-context learning is possible even under two forms of
distribution shift: (i) between the training data of the model and
inference-time prompts, and (ii) between the in-context examples and the query
input during inference. We also show that we can train Transformers to
in-context learn more complex function classes -- namely sparse linear
functions, two-layer neural networks, and decision trees -- with performance
that matches or exceeds task-specific learning algorithms. Our code and models
are available at https://github.com/dtsip/in-context-learning .
"
"Structured Prompting: Scaling In-Context Learning to 1,000 Examples",Yaru Hao,http://arxiv.org/pdf/2212.06713v1.pdf,2022-12-13,['cs.cl'],2212.06713v1.pdf,"  Large language models have exhibited intriguing in-context learning
capability, achieving promising zero- and few-shot performance without updating
the parameters. However, conventional in-context learning is usually restricted
by length constraints, rendering it ineffective to absorb supervision from a
large number of examples. In order to go beyond few shots, we introduce
structured prompting that breaks the length limit and scales in-context
learning to thousands of examples. Specifically, demonstration examples are
separately encoded with well-designed position embeddings, and then they are
jointly attended by the test example using a rescaled attention mechanism. So
we can scale the number of exemplars with linear complexity instead of
quadratic complexity with respect to length. Experimental results on a diverse
set of tasks show that our approach improves end-task performance and reduces
evaluation variance over conventional in-context learning as the number of
demonstration examples increases. Code has been released at
https://aka.ms/structured-prompting.
"
Pre-Training to Learn in Context,Yuxian Gu,http://arxiv.org/pdf/2305.09137v1.pdf,2023-05-16,['cs.cl'],2305.09137v1.pdf,"  In-context learning, where pre-trained language models learn to perform tasks
from task examples and instructions in their contexts, has attracted much
attention in the NLP community. However, the ability of in-context learning is
not fully exploited because language models are not explicitly trained to learn
in context. To this end, we propose PICL (Pre-training for In-Context
Learning), a framework to enhance the language models' in-context learning
ability by pre-training the model on a large collection of ""intrinsic tasks"" in
the general plain-text corpus using the simple language modeling objective.
PICL encourages the model to infer and perform tasks by conditioning on the
contexts while maintaining task generalization of pre-trained models. We
evaluate the in-context learning performance of the model trained with PICL on
seven widely-used text classification datasets and the Super-NaturalInstructions
benchmark, which contains 100+ NLP tasks formulated as text generation. Our
experiments show that PICL is more effective and task-generalizable than a
range of baselines, outperforming larger language models with nearly 4x
parameters. The code is publicly available at https://github.com/thu-coai/PICL.
"
EXnet: Efficient In-context Learning for Data-less Text classification,Debaditya Shome,http://arxiv.org/pdf/2305.14622v1.pdf,2023-05-24,"['cs.cl', 'cs.lg']",2305.14622v1.pdf,"  Large pre-trained language models (PLMs) have made significant progress in
encoding world knowledge and spawned a new set of learning paradigms including
zero-shot, few-shot, and in-context learning. Many language tasks can be
modeled as a set of prompts (for example, is this text about geography?) and
language models can provide binary answers, i.e., Yes or No. There is evidence
to suggest that the next-word prediction used by many PLMs does not align well
with zero-shot paradigms. Therefore, PLMs are fine-tuned as a
question-answering system. In-context learning extends zero-shot learning by
incorporating prompts and examples, resulting in increased task accuracy. Our
paper presents EXnet, a model specifically designed to perform in-context
learning without any limitations on the number of examples. We argue that
in-context learning is an effective method to increase task accuracy, and
providing examples facilitates cross-task generalization, especially when it
comes to text classification tasks. With extensive experiments, we show that
even our smallest model (15M parameters) generalizes to several unseen
classification tasks and domains.
"
RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder  Language Models,Jie Huang,http://arxiv.org/pdf/2308.07922v1.pdf,2023-08-15,"['cs.cl', 'cs.ai', 'cs.lg']",2308.07922v1.pdf,"  In this paper, we investigate the in-context learning ability of
retrieval-augmented encoder-decoder language models. We first conduct a
comprehensive analysis of the state-of-the-art ATLAS model and identify its
limitations in in-context learning, primarily due to a mismatch between
pretraining and testing, as well as a restricted context length. To address
these issues, we propose RAVEN, a model that combines retrieval-augmented
masked language modeling and prefix language modeling. We further introduce
Fusion-in-Context Learning to enhance the few-shot performance by enabling the
model to leverage more in-context examples without requiring additional
training or model modifications. Through extensive experiments, we demonstrate
that RAVEN significantly outperforms ATLAS and achieves results comparable to
the most advanced language models in certain scenarios, despite having
substantially fewer parameters. Our work underscores the potential of
retrieval-augmented encoder-decoder language models for in-context learning and
encourages further research in this direction.
"
Understanding In-Context Learning from Repetitions,Jianhao Yan,http://arxiv.org/pdf/2310.00297v2.pdf,2023-09-30,['cs.cl'],2310.00297v2.pdf,"  This paper explores the elusive mechanism underpinning in-context learning in
Large Language Models (LLMs). Our work provides a novel perspective by
examining in-context learning via the lens of surface repetitions. We
quantitatively investigate the role of surface features in text generation, and
empirically establish the existence of \emph{token co-occurrence
reinforcement}, a principle that strengthens the relationship between two
tokens based on their contextual co-occurrences. By investigating the dual
impacts of these features, our research illuminates the internal workings of
in-context learning and expounds on the reasons for its failures. This paper
provides an essential contribution to the understanding of in-context learning
and its potential limitations, providing a fresh perspective on this exciting
capability.
"
In-Context Learning Dynamics with Random Binary Sequences,Eric J. Bigelow,http://arxiv.org/pdf/2310.17639v1.pdf,2023-10-26,"['cs.ai', 'cs.cl', 'cs.lg']",2310.17639v1.pdf,"  Large language models (LLMs) trained on huge corpora of text datasets
demonstrate complex, emergent capabilities, achieving state-of-the-art
performance on tasks they were not explicitly trained for. The precise nature
of LLM capabilities is often mysterious, and different prompts can elicit
different capabilities through in-context learning. We propose a Cognitive
Interpretability framework that enables us to analyze in-context learning
dynamics to understand latent concepts in LLMs underlying behavioral patterns.
This provides a more nuanced understanding than success-or-failure evaluation
benchmarks, but does not require observing internal activations as a
mechanistic interpretation of circuits would. Inspired by the cognitive science
of human randomness perception, we use random binary sequences as context and
study dynamics of in-context learning by manipulating properties of context
data, such as sequence length. In the latest GPT-3.5+ models, we find emergent
abilities to generate pseudo-random numbers and learn basic formal languages,
with striking in-context learning dynamics where model outputs transition
sharply from pseudo-random behaviors to deterministic repetition.
"
In-Context Learning with Many Demonstration Examples,Mukai Li,http://arxiv.org/pdf/2302.04931v1.pdf,2023-02-09,"['cs.cl', 'cs.ai']",2302.04931v1.pdf,"  Large pre-training language models (PLMs) have shown promising in-context
learning abilities. However, due to the backbone transformer architecture,
existing PLMs are bottlenecked by the memory and computational cost when
scaling up to a large context size, leaving instruction tuning and in-context
learning of many demonstration examples, as well as long-range language
modeling under-explored. In this study, we propose a long-range language model
EVALM based on an efficient transformer mechanism. EVALM is trained with 8k
tokens per batch and can be evaluated on contexts of up to 256k tokens with
extrapolation, 128 times the limit of existing PLMs (e.g. GPT-3). Based on
EVALM, we scale up the size of examples efficiently in both instruction tuning
and in-context learning to explore the boundary of the benefits from more
annotated data. Experimental results on a diverse set of tasks show that EVALM
achieves 4.1% higher accuracy on average, and the average context length at which
tasks reach their best accuracy is around 12k tokens. We find that in-context
learning can achieve higher performance with more demonstrations under
many-shot instruction tuning (8k), and further extending the length of
instructions (16k) can further improve the upper bound of scaling in-context
learning.
"
The Learnability of In-Context Learning,Noam Wies,http://arxiv.org/pdf/2303.07895v1.pdf,2023-03-14,['cs.cl'],2303.07895v1.pdf,"  In-context learning is a surprising and important phenomenon that emerged
when modern language models were scaled to billions of learned parameters.
Without modifying a large language model's weights, it can be tuned to perform
various downstream natural language tasks simply by including concatenated
training examples of these tasks in its input. Though disruptive for many
practical applications of large language models, this emergent learning
paradigm is not well understood from a theoretical perspective. In this paper,
we propose a first-of-its-kind PAC based framework for in-context learnability,
and use it to provide the first finite sample complexity results for the
in-context learning setup. Our framework includes an initial pretraining phase,
which fits a function to the pretraining distribution, and then a second
in-context learning phase, which keeps this function constant and concatenates
training examples of the downstream task in its input. We use our framework in
order to prove that, under mild assumptions, when the pretraining distribution
is a mixture of latent tasks (a model often considered for natural language
pretraining), these tasks can be efficiently learned via in-context learning,
even though the model's weights are unchanged and the input significantly
diverges from the pretraining distribution. Our theoretical analysis reveals
that in this setting, in-context learning is more about identifying the task
than about learning it, a result which is in line with a series of recent
empirical findings. We hope that the in-context learnability framework
presented in this paper will facilitate future progress towards a deeper
understanding of this important new learning paradigm.
"
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks,Yi-Syuan Chen,http://arxiv.org/pdf/2307.07742v2.pdf,2023-07-15,"['cs.cv', 'cs.ai']",2307.07742v2.pdf,"  Large Pre-trained Transformers exhibit an intriguing capacity for in-context
learning. Without gradient updates, these models can rapidly construct new
predictors from demonstrations presented in the inputs. Recent works promote
this ability in the vision-language domain by incorporating visual information
into large language models that can already make in-context predictions.
However, these methods could inherit issues in the language domain, such as
template sensitivity and hallucination. Also, the scale of these language
models raises a significant demand for computations, making learning and
operating these models resource-intensive. To this end, we raise a question:
``How can we enable in-context learning without relying on the intrinsic
in-context ability of large language models?"". To answer it, we propose a
succinct and general framework, Self-supervised IN-Context learning (SINC),
that introduces a meta-model to learn on self-supervised prompts consisting of
tailored demonstrations. The learned models can be transferred to downstream
tasks for making in-context predictions on-the-fly. Extensive experiments show
that SINC outperforms gradient-based methods in various vision-language tasks
under few-shot settings. Furthermore, the designs of SINC help us investigate
the benefits of in-context learning across different tasks, and the analysis
further reveals the essential components for the emergence of in-context
learning in the vision-language domain.
"
Self-Generated In-Context Learning: Leveraging Auto-regressive Language  Models as a Demonstration Generator,Hyuhng Joon Kim,http://arxiv.org/pdf/2206.08082v1.pdf,2022-06-16,['cs.cl'],2206.08082v1.pdf,"  Large-scale pre-trained language models (PLMs) are well-known for being
capable of solving a task simply by conditioning a few input-label pairs dubbed
demonstrations on a prompt without being explicitly tuned for the desired
downstream task. Such a process (i.e., in-context learning), however, naturally
leads to high reliance on the demonstrations which are usually selected from
external datasets. In this paper, we propose self-generated in-context learning
(SG-ICL), which generates demonstrations for in-context learning from PLM
itself to minimize the reliance on the external demonstration. We conduct
experiments on four different text classification tasks and show SG-ICL
significantly outperforms zero-shot learning and is generally worth
approximately 0.6 gold training samples. Moreover, our generated demonstrations
show more consistent performance with low variance compared to randomly
selected demonstrations from the training dataset.
"
Active Example Selection for In-Context Learning,Yiming Zhang,http://arxiv.org/pdf/2211.04486v1.pdf,2022-11-08,"['cs.cl', 'cs.ai']",2211.04486v1.pdf,"  With a handful of demonstration examples, large-scale language models show
strong capability to perform various tasks by in-context learning from these
examples, without any fine-tuning. We demonstrate that in-context learning
performance can be highly unstable across samples of examples, indicating the
idiosyncrasies of how language models acquire information. We formulate example
selection for in-context learning as a sequential decision problem, and propose
a reinforcement learning algorithm for identifying generalizable policies to
select demonstration examples. For GPT-2, our learned policies demonstrate
strong abilities of generalizing to unseen tasks in training, with a $5.8\%$
improvement on average. Examples selected from our learned policies can even
achieve a small improvement on GPT-3 Ada. However, the improvement diminishes
on larger GPT-3 models, suggesting emerging capabilities of large language
models.
"
On the Compositional Generalization Gap of In-Context Learning,Arian Hosseini,http://arxiv.org/pdf/2211.08473v1.pdf,2022-11-15,"['cs.cl', 'cs.lg']",2211.08473v1.pdf,"  Pretrained large generative language models have shown great performance on
many tasks, but exhibit low compositional generalization abilities. Scaling
such models has been shown to improve their performance on various NLP tasks
even just by conditioning them on a few examples to solve the task without any
fine-tuning (also known as in-context learning). In this work, we look at the
gap between the in-distribution (ID) and out-of-distribution (OOD) performance
of such models in semantic parsing tasks with in-context learning. In the ID
settings, the demonstrations are from the same split (test or train) that the
model is being evaluated on, and in the OOD settings, they are from the other
split. We look at how the relative generalization gap of in-context learning
evolves as models are scaled up. We evaluate four model families, OPT, BLOOM,
CodeGen and Codex on three semantic parsing datasets, CFQ, SCAN and GeoQuery
with different number of exemplars, and observe a trend of decreasing relative
generalization gap as models are scaled up.
"
Bayesian Optimization of Catalysts With In-context Learning,Mayk Caldas Ramos,http://arxiv.org/pdf/2304.05341v1.pdf,2023-04-11,"['physics.chem-ph', 'cs.lg']",2304.05341v1.pdf,"  Large language models (LLMs) are able to do accurate classification with zero
or only a few examples (in-context learning). We show a prompting system that
enables regression with uncertainty for in-context learning with frozen LLM
(GPT-3, GPT-3.5, and GPT-4) models, allowing predictions without features or
architecture tuning. By incorporating uncertainty, our approach enables
Bayesian optimization for catalyst or molecule optimization using natural
language, eliminating the need for training or simulation. Here, we performed
the optimization using the synthesis procedure of catalysts to predict
properties. Working with natural language mitigates synthesizability concerns,
since the literal synthesis procedure is the model's input. We showed that
in-context learning could improve past a model context window (maximum number
of tokens the model can process at once) as data is gathered via example
selection, allowing the model to scale better. Although our method does not
outperform all baselines, it requires no training or feature selection and only
minimal computing while maintaining satisfactory performance. We also find
Gaussian Process Regression on text embeddings is strong at Bayesian
optimization. The code is available in our GitHub repository:
https://github.com/ur-whitelab/BO-LIFT
"
In-Context Learning Unlocked for Diffusion Models,Zhendong Wang,http://arxiv.org/pdf/2305.01115v2.pdf,2023-05-01,['cs.cv'],2305.01115v2.pdf,"  We present Prompt Diffusion, a framework for enabling in-context learning in
diffusion-based generative models. Given a pair of task-specific example
images, such as depth from/to image and scribble from/to image, and a text
guidance, our model automatically understands the underlying task and performs
the same task on a new query image following the text guidance. To achieve
this, we propose a vision-language prompt that can model a wide range of
vision-language tasks and a diffusion model that takes it as input. The
diffusion model is trained jointly over six different tasks using these
prompts. The resulting Prompt Diffusion model is the first diffusion-based
vision-language foundation model capable of in-context learning. It
demonstrates high-quality in-context generation on the trained tasks and
generalizes effectively to new, unseen vision tasks with their respective
prompts. Our model also shows compelling text-guided image editing results. Our
framework aims to facilitate research into in-context learning for computer
vision. We share our code and pre-trained models at
https://github.com/Zhendong-Wang/Prompt-Diffusion.
"
Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and  Evaluation,Marius Mosbach,http://arxiv.org/pdf/2305.16938v2.pdf,2023-05-26,['cs.cl'],2305.16938v2.pdf,"  Few-shot fine-tuning and in-context learning are two alternative strategies
for task adaptation of pre-trained language models. Recently, in-context
learning has gained popularity over fine-tuning due to its simplicity and
improved out-of-domain generalization, and because extensive evidence shows
that fine-tuned models pick up on spurious correlations. Unfortunately,
previous comparisons of the two approaches were done using models of different
sizes. This raises the question of whether the observed weaker out-of-domain
generalization of fine-tuned models is an inherent property of fine-tuning or a
limitation of the experimental setup. In this paper, we compare the
generalization of few-shot fine-tuning and in-context learning to challenge
datasets, while controlling for the models used, the number of examples, and
the number of parameters, ranging from 125M to 30B. Our results show that
fine-tuned language models can in fact generalize well out-of-domain. We find
that both approaches generalize similarly; they exhibit large variation and
depend on properties such as model size and the number of examples,
highlighting that robust task adaptation remains a challenge.
"
Large Language Models Can be Lazy Learners: Analyze Shortcuts in  In-Context Learning,Ruixiang Tang,http://arxiv.org/pdf/2305.17256v2.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.lg']",2305.17256v2.pdf,"  Large language models (LLMs) have recently shown great potential for
in-context learning, where LLMs learn a new task simply by conditioning on a
few input-label pairs (prompts). Despite their potential, our understanding of
the factors influencing end-task performance and the robustness of in-context
learning remains limited. This paper aims to bridge this knowledge gap by
investigating the reliance of LLMs on shortcuts or spurious correlations within
prompts. Through comprehensive experiments on classification and extraction
tasks, we reveal that LLMs are ""lazy learners"" that tend to exploit shortcuts
in prompts for downstream tasks. Additionally, we uncover a surprising finding
that larger models are more likely to utilize shortcuts in prompts during
inference. Our findings provide a new perspective on evaluating robustness in
in-context learning and pose new challenges for detecting and mitigating the
use of shortcuts in prompts.
"
Multi-Dimensional Evaluation of Text Summarization with In-Context  Learning,Sameer Jain,http://arxiv.org/pdf/2306.01200v1.pdf,2023-06-01,['cs.cl'],2306.01200v1.pdf,"  Evaluation of natural language generation (NLG) is complex and
multi-dimensional. Generated text can be evaluated for fluency, coherence,
factuality, or any other dimensions of interest. Most frameworks that perform
such multi-dimensional evaluation require training on large manually or
synthetically generated datasets. In this paper, we study the efficacy of large
language models as multi-dimensional evaluators using in-context learning,
obviating the need for large training datasets. Our experiments show that
in-context learning-based evaluators are competitive with learned evaluation
frameworks for the task of text summarization, establishing state-of-the-art on
dimensions such as relevance and factual consistency. We then analyze the
effects of factors such as the selection and number of in-context examples on
performance. Finally, we study the efficacy of in-context learning based
evaluators in evaluating zero-shot summaries written by large language models
such as GPT-3.
"
Exploring the Integration of Large Language Models into Automatic Speech  Recognition Systems: An Empirical Study,Zeping Min,http://arxiv.org/pdf/2307.06530v1.pdf,2023-07-13,"['cs.cl', 'cs.sd', 'eess.as']",2307.06530v1.pdf,"  This paper explores the integration of Large Language Models (LLMs) into
Automatic Speech Recognition (ASR) systems to improve transcription accuracy.
The increasing sophistication of LLMs, with their in-context learning
capabilities and instruction-following behavior, has drawn significant
attention in the field of Natural Language Processing (NLP). Our primary focus
is to investigate the potential of using an LLM's in-context learning
capabilities to enhance the performance of ASR systems, which currently face
challenges such as ambient noise, speaker accents, and complex linguistic
contexts. We designed a study using the Aishell-1 and LibriSpeech datasets,
with ChatGPT and GPT-4 serving as benchmarks for LLM capabilities.
Unfortunately, our initial experiments did not yield promising results,
indicating the complexity of leveraging LLM's in-context learning for ASR
applications. Despite further exploration with varied settings and models, the
corrected sentences from the LLMs frequently resulted in higher Word Error
Rates (WER), demonstrating the limitations of LLMs in speech applications. This
paper provides a detailed overview of these experiments, their results, and
implications, establishing that using LLMs' in-context learning capabilities to
correct potential errors in speech recognition transcriptions is still a
challenging task at the current stage.
"
ACT-SQL: In-Context Learning for Text-to-SQL with  Automatically-Generated Chain-of-Thought,Hanchong Zhang,http://arxiv.org/pdf/2310.17342v1.pdf,2023-10-26,['cs.cl'],2310.17342v1.pdf,"  Recently Large Language Models (LLMs) have been proven to have strong
abilities in various domains and tasks. We study the problem of prompt
designing in the text-to-SQL task and attempt to improve the LLMs' reasoning
ability when generating SQL queries. Besides the trivial few-shot in-context
learning setting, we design our chain-of-thought (CoT) prompt with a similar
method to schema linking. We provide a method named ACT-SQL to automatically
generate auto-CoT exemplars and thus the whole process doesn't need manual
labeling. Our approach is cost-saving since we call the LLM's API only once
when generating each SQL query. Furthermore, we extend our in-context learning
method to the multi-turn text-to-SQL task. The experiment results show that the
LLMs' performance can benefit from our ACT-SQL approach. Our approach achieves
SOTA performance on the Spider dev set among existing in-context learning
approaches.
"
COSMIC: Data Efficient Instruction-tuning For Speech In-Context Learning,Jing Pan,http://arxiv.org/pdf/2311.02248v1.pdf,2023-11-03,"['cs.cl', 'cs.ai', 'eess.as']",2311.02248v1.pdf,"  We present a data and cost efficient way of incorporating the speech modality
into a large language model (LLM). The resulting multi-modal LLM is a
COntextual Speech Model with Instruction-following/in-context-learning
Capabilities - COSMIC. Speech comprehension test question-answer (SQA) pairs
are generated using GPT-3.5 based on the speech transcriptions as a part of the
supervision for the instruction tuning. With fewer than 20M trainable
parameters and as little as 450 hours of English speech data for SQA
generation, COSMIC exhibits emergent instruction-following and in-context
learning capabilities in speech-to-text tasks. The model is able to follow the
given text instructions to generate text response even on the unseen EN$\to$X
speech-to-text translation (S2TT) task with zero-shot setting. We evaluate the
model's in-context learning via various tasks such as EN$\to$X S2TT and
few-shot domain adaptation. Instruction-following capabilities are
evaluated through a contextual biasing benchmark. Our results demonstrate the
efficacy of the proposed low-cost recipe for building a speech LLM with the new
instruction-tuning data.
"
Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again,Bernal Jiménez Gutiérrez,http://arxiv.org/pdf/2203.08410v3.pdf,2022-03-16,"['cs.cl', 'cs.ir']",2203.08410v3.pdf,"  The strong few-shot in-context learning capability of large pre-trained
language models (PLMs) such as GPT-3 is highly appealing for application
domains such as biomedicine, which feature high and diverse demands of language
technologies but also high data annotation costs. In this paper, we present the
first systematic and comprehensive study to compare the few-shot performance of
GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on
two highly representative biomedical information extraction tasks, named entity
recognition and relation extraction. We follow the true few-shot setting to
avoid overestimating models' few-shot performance by model selection over a
large validation set. We also optimize GPT-3's performance with known
techniques such as contextual calibration and dynamic in-context example
retrieval. However, our results show that GPT-3 still significantly
underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3
in-context learning also yields smaller gains in accuracy when more training
data becomes available. Our in-depth analyses further reveal issues of the
in-context learning setting that may be detrimental to information extraction
tasks in general. Given the high cost of experimenting with GPT-3, we hope our
study provides guidance for biomedical researchers and practitioners towards
more promising directions such as fine-tuning small PLMs.
"
Exploring Effective Factors for Improving Visual In-Context Learning,Yanpeng Sun,http://arxiv.org/pdf/2304.04748v1.pdf,2023-04-10,['cs.cv'],2304.04748v1.pdf,"  The In-Context Learning (ICL) is to understand a new task via a few
demonstrations (aka. prompt) and predict new inputs without tuning the models.
While it has been widely studied in NLP, it is still a relatively new area of
research in computer vision. To reveal the factors influencing the performance
of visual in-context learning, this paper shows that prompt selection and
prompt fusion are two major factors that have a direct impact on the inference
performance of visual in-context learning. Prompt selection is the process of
identifying the most appropriate prompt or example to help the model understand
new tasks. This is important because providing the model with relevant prompts
can help it learn more effectively and efficiently. Prompt fusion involves
combining knowledge from different positions within the large-scale visual
model. By doing this, the model can leverage the diverse knowledge stored in
different parts of the model to improve its performance on new tasks. Based on
these findings, we propose a simple framework, prompt-SelF, for visual in-context
learning. Specifically, we first use the pixel-level retrieval method to select
a suitable prompt, and then use different prompt fusion methods to activate all
the knowledge stored in the large-scale model, and finally ensemble the
prediction results obtained from different prompt fusion methods to obtain the
final prediction results. And we conduct extensive experiments on single-object
segmentation and detection tasks to demonstrate the effectiveness of
prompt-SelF. Remarkably, prompt-SelF outperforms OSLSM-based
meta-learning in 1-shot segmentation for the first time, indicating the
great potential of visual in-context learning. The source code and models will
be available at \url{https://github.com/syp2ysy/prompt-SelF}.
"
Dissecting Chain-of-Thought: Compositionality through In-Context  Filtering and Learning,Yingcong Li,http://arxiv.org/pdf/2305.18869v2.pdf,2023-05-30,"['cs.lg', 'cs.ai', 'cs.cl']",2305.18869v2.pdf,"  Chain-of-thought (CoT) is a method that enables language models to handle
complex reasoning tasks by decomposing them into simpler steps. Despite its
success, the underlying mechanics of CoT are not yet fully understood. In an
attempt to shed light on this, our study investigates the impact of CoT on the
ability of transformers to in-context learn a simple-to-study yet general
family of compositional functions: multi-layer perceptrons (MLPs). In this
setting, we find that the success of CoT can be attributed to breaking down
in-context learning of a compositional function into two distinct phases:
focusing on and filtering data related to each step of the composition and
in-context learning the single-step composition function. Through both
experimental and theoretical evidence, we demonstrate how CoT significantly
reduces the sample complexity of in-context learning (ICL) and facilitates the
learning of complex functions that non-CoT methods struggle with. Furthermore,
we illustrate how transformers can transition from vanilla in-context learning
to mastering a compositional function with CoT by simply incorporating
additional layers that perform the necessary data-filtering for CoT via the
attention mechanism. In addition to these test-time benefits, we show CoT helps
accelerate pretraining by learning shortcuts to represent complex functions and
filtering plays an important role in this process. These findings collectively
provide insights into the mechanics of CoT, inviting further investigation of
its role in complex reasoning tasks.
"
In-Context Learning through the Bayesian Prism,Kabir Ahuja,http://arxiv.org/pdf/2306.04891v1.pdf,2023-06-08,"['cs.lg', 'cs.cl']",2306.04891v1.pdf,"  In-context learning is one of the surprising and useful features of large
language models. How it works is an active area of research. Recently, stylized
meta-learning-like setups have been devised that train these models on a
sequence of input-output pairs $(x, f(x))$ from a function class using the
language modeling loss and observe generalization to unseen functions from the
same class. One of the main discoveries in this line of research has been that
for several problems such as linear regression, trained transformers learn
algorithms for learning functions in context. However, the inductive biases of
these models resulting in this behavior are not clearly understood. A model
with unlimited training data and compute is a Bayesian predictor: it learns the
pretraining distribution. It has been shown that high-capacity transformers
mimic the Bayesian predictor for linear regression. In this paper, we show
empirical evidence of transformers exhibiting the behavior of this ideal
learner across different linear and non-linear function classes. We also extend
the previous setups to work in the multitask setting and verify that
transformers can do in-context learning in this setup as well and the Bayesian
perspective sheds light on this setting also. Finally, via the example of
learning Fourier series, we study the inductive bias for in-context learning.
We find that in-context learning may or may not have simplicity bias depending
on the pretraining data distribution.
"
Explore In-Context Learning for 3D Point Cloud Understanding,Zhongbin Fang,http://arxiv.org/pdf/2306.08659v1.pdf,2023-06-14,['cs.cv'],2306.08659v1.pdf,"  With the rise of large-scale models trained on broad data, in-context
learning has become a new learning paradigm that has demonstrated significant
potential in natural language processing and computer vision tasks. Meanwhile,
in-context learning is still largely unexplored in the 3D point cloud domain.
Although masked modeling has been successfully applied for in-context learning
in 2D vision, directly extending it to 3D point clouds remains a formidable
challenge. In the case of point clouds, the tokens themselves are the point
cloud positions (coordinates) that are masked during inference. Moreover,
position embedding in previous works may inadvertently introduce information
leakage. To address these challenges, we introduce a novel framework, named
Point-In-Context, designed especially for in-context learning in 3D point
clouds, where both inputs and outputs are modeled as coordinates for each task.
Additionally, we propose the Joint Sampling module, carefully designed to work
in tandem with the general point sampling operator, effectively resolving the
aforementioned technical issues. We conduct extensive experiments to validate
the versatility and adaptability of our proposed methods in handling a wide
range of tasks. Furthermore, with a more effective prompt selection strategy,
our framework surpasses the results of individually trained models.
"
Scaling In-Context Demonstrations with Structured Attention,Tianle Cai,http://arxiv.org/pdf/2307.02690v1.pdf,2023-07-05,"['cs.cl', 'cs.ai', 'cs.lg']",2307.02690v1.pdf,"  The recent surge of large language models (LLMs) highlights their ability to
perform in-context learning, i.e., ""learning"" to perform a task from a few
demonstrations in the context without any parameter updates. However, their
capabilities of in-context learning are limited by the model architecture: 1)
the use of demonstrations is constrained by a maximum sentence length due to
positional embeddings; 2) the quadratic complexity of attention hinders users
from using more demonstrations efficiently; 3) LLMs are shown to be sensitive
to the order of the demonstrations. In this work, we tackle these challenges by
proposing a better architectural design for in-context learning. We propose
SAICL (Structured Attention for In-Context Learning), which replaces the
full-attention by a structured attention mechanism designed for in-context
learning, and removes unnecessary dependencies between individual
demonstrations, while making the model invariant to the permutation of
demonstrations. We evaluate SAICL in a meta-training framework and show that
SAICL achieves comparable or better performance than full attention while
obtaining up to 3.4x inference speed-up. SAICL also consistently outperforms a
strong Fusion-in-Decoder (FiD) baseline which processes each demonstration
independently. Finally, thanks to its linear nature, we demonstrate that SAICL
can easily scale to hundreds of demonstrations with continuous performance
gains with scaling.
"
DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for  In-Context Learning,Jing Xiong,http://arxiv.org/pdf/2310.02954v4.pdf,2023-10-04,['cs.cl'],2310.02954v4.pdf,"  Recent advances in natural language processing, primarily propelled by Large
Language Models (LLMs), have showcased their remarkable capabilities grounded
in in-context learning. A promising avenue for guiding LLMs in intricate
reasoning tasks involves the utilization of intermediate reasoning steps within
the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies
in the effective selection of exemplars for facilitating in-context learning.
In this study, we introduce a framework that leverages Dual Queries and
Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars
for in-context learning. Dual Queries first query the LLM to obtain LLM-generated
knowledge such as CoT, then query the retriever to obtain the final exemplars
using both the question and the knowledge. Moreover, for the second query, LoRe
employs dimensionality reduction techniques to refine exemplar selection,
ensuring close alignment with the input question's knowledge. Through extensive
experiments, we demonstrate that DQ-LoRe significantly outperforms prior
state-of-the-art methods in the automatic selection of exemplars for GPT-4,
enhancing performance from 92.5% to 94.2%. Our comprehensive analysis further
reveals that DQ-LoRe consistently outperforms retrieval-based approaches in
terms of both performance and adaptability, especially in scenarios
characterized by distribution shifts. DQ-LoRe pushes the boundaries of
in-context learning and opens up new avenues for addressing complex reasoning
challenges. We will release the code soon.
"
OverPrompt: Enhancing ChatGPT Capabilities through an Efficient  In-Context Learning Approach,Jiazheng Li,http://arxiv.org/pdf/2305.14973v1.pdf,2023-05-24,['cs.cl'],2305.14973v1.pdf,"  The exceptional performance of pre-trained large language models has
revolutionised various applications, but their adoption in production
environments is hindered by prohibitive costs and inefficiencies, particularly
when utilising long prompts. This paper proposes OverPrompt, an in-context
learning method aimed at improving LLM efficiency and performance by processing
multiple inputs in parallel. Evaluated across diverse datasets, OverPrompt
enhances task efficiency and integrates a diverse range of examples for
improved performance. In particular, it strengthens fact-checking and sentiment
analysis tasks when supplemented with contextual information. Synthetic data
grouping further enhances performance, suggesting a viable approach for data
augmentation.
"
Crosslingual Retrieval Augmented In-context Learning for Bangla,Xiaoqian Li,http://arxiv.org/pdf/2311.00587v1.pdf,2023-11-01,['cs.cl'],2311.00587v1.pdf,"  The promise of Large Language Models (LLMs) in Natural Language Processing
has often been overshadowed by their limited performance in low-resource
languages such as Bangla. To address this, our paper presents a pioneering
approach that utilizes cross-lingual retrieval augmented in-context learning.
By strategically sourcing semantically similar prompts from a high-resource
language, we enable multilingual pretrained language models (MPLMs), especially
the generative model BLOOMZ, to successfully boost performance on Bangla tasks.
Our extensive evaluation highlights that the cross-lingual retrieval augmented
prompts bring steady improvements to MPLMs over the zero-shot performance.
"
Ground-Truth Labels Matter: A Deeper Look into Input-Label  Demonstrations,Kang Min Yoo,http://arxiv.org/pdf/2205.12685v2.pdf,2022-05-25,"['cs.cl', 'cs.ai', 'cs.lg']",2205.12685v2.pdf,"  Despite the recent explosion of interest in in-context learning, the underlying
mechanism and the precise impact of the quality of demonstrations remain
elusive. Intuitively, ground-truth labels should have as much impact in
in-context learning (ICL) as supervised learning, but recent work reported that
the input-label correspondence is significantly less important than previously
thought. Intrigued by this counter-intuitive observation, we re-examine the
importance of ground-truth labels in in-context learning. With the introduction
of two novel metrics, namely Label-Correctness Sensitivity and Ground-truth
Label Effect Ratio (GLER), we were able to conduct quantifiable analysis on the
impact of ground-truth label demonstrations. Through extensive analyses, we
find that the correct input-label mappings can have varying impacts on the
downstream in-context learning performances, depending on the experimental
configuration. Through additional studies, we identify key components, such as
the verbosity of prompt templates and the language model size, as the
controlling factor to achieve more noise-resilient ICL.
"
In-context Learning and Induction Heads,Catherine Olsson,http://arxiv.org/pdf/2209.11895v1.pdf,2022-09-24,['cs.lg'],2209.11895v1.pdf,"  ""Induction heads"" are attention heads that implement a simple algorithm to
complete token sequences like [A][B] ... [A] -> [B]. In this work, we present
preliminary and indirect evidence for a hypothesis that induction heads might
constitute the mechanism for the majority of all ""in-context learning"" in large
transformer models (i.e. decreasing loss at increasing token indices). We find
that induction heads develop at precisely the same point as a sudden sharp
increase in in-context learning ability, visible as a bump in the training
loss. We present six complementary lines of evidence, arguing that induction
heads may be the mechanistic source of general in-context learning in
transformer models of any size. For small attention-only models, we present
strong, causal evidence; for larger models with MLPs, we present correlational
evidence.
"
Transformers learn in-context by gradient descent,Johannes von Oswald,http://arxiv.org/pdf/2212.07677v2.pdf,2022-12-15,"['cs.lg', 'cs.ai', 'cs.cl']",2212.07677v2.pdf,"  At present, the mechanisms of in-context learning in Transformers are not
well understood and remain mostly an intuition. In this paper, we suggest that
training Transformers on auto-regressive objectives is closely related to
gradient-based meta-learning formulations. We start by providing a simple
weight construction that shows the equivalence of data transformations induced
by 1) a single linear self-attention layer and by 2) gradient-descent (GD) on a
regression loss. Motivated by that construction, we show empirically that when
training self-attention-only Transformers on simple regression tasks, either the
models learned by GD and by Transformers show great similarity or, remarkably, the
weights found by optimization match the construction. Thus we show how trained
Transformers become mesa-optimizers, i.e., they learn models by gradient descent in
their forward pass. This allows us, at least in the domain of regression
problems, to mechanistically understand the inner workings of in-context
learning in optimized Transformers. Building on this insight, we furthermore
identify how Transformers surpass the performance of plain gradient descent by
learning an iterative curvature correction and learn linear models on deep data
representations to solve non-linear regression tasks. Finally, we discuss
intriguing parallels to a mechanism identified to be crucial for in-context
learning termed induction-head (Olsson et al., 2022) and show how it could be
understood as a specific case of in-context learning by gradient descent
learning within Transformers. Code to reproduce the experiments can be found at
https://github.com/google-research/self-organising-systems/tree/master/transformers_learn_icl_by_gd .
"
What Makes Good Examples for Visual In-Context Learning?,Yuanhan Zhang,http://arxiv.org/pdf/2301.13670v2.pdf,2023-01-31,['cs.cv'],2301.13670v2.pdf,"  Large-scale models trained on broad data have recently become the mainstream
architecture in computer vision due to their strong generalization performance.
In this paper, the main focus is on an emergent ability in large vision models,
known as in-context learning, which allows inference on unseen tasks by
conditioning on in-context examples (a.k.a. a prompt) without updating the model
parameters. This concept has been well-known in natural language processing but
has only been studied very recently for large vision models. We for the first
time provide a comprehensive investigation on the impact of in-context examples
in computer vision, and find that the performance is highly sensitive to the
choice of in-context examples. To overcome the problem, we propose a prompt
retrieval framework to automate the selection of in-context examples.
Specifically, we present (1) an unsupervised prompt retrieval method based on
nearest example search using an off-the-shelf model, and (2) a supervised
prompt retrieval method, which trains a neural network to choose examples that
directly maximize in-context learning performance. The results demonstrate that
our methods can bring non-trivial improvements to visual in-context learning in
comparison to the commonly-used random selection.
"
Compositional Exemplars for In-context Learning,Jiacheng Ye,http://arxiv.org/pdf/2302.05698v3.pdf,2023-02-11,"['cs.cl', 'cs.ai', 'cs.lg']",2302.05698v3.pdf,"  Large pretrained language models (LMs) have shown impressive In-Context
Learning (ICL) ability, where the model learns to do an unseen task via a
prompt consisting of input-output examples as the demonstration, without any
parameter updates. The performance of ICL is largely determined by the quality of
the selected in-context examples. However, previous selection methods are
mostly based on simple heuristics, leading to sub-optimal performance. In this
work, we formulate in-context example selection as a subset selection problem.
We propose CEIL (Compositional Exemplars for In-context Learning), which is
instantiated by Determinantal Point Processes (DPPs) to model the interaction
between the given input and in-context examples, and optimized through a
carefully-designed contrastive learning objective to obtain preference from
LMs. We validate CEIL on 12 classification and generation datasets from 7
distinct NLP tasks, including sentiment analysis, paraphrase detection, natural
language inference, commonsense reasoning, open-domain question answering, code
generation, and semantic parsing. Extensive experiments demonstrate not only
the state-of-the-art performance but also the transferability and
compositionality of CEIL, shedding new light on effective and efficient
in-context learning. Our code is released at
https://github.com/HKUNLP/icl-ceil.
"
ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for  Document Information Extraction,Jiabang He,http://arxiv.org/pdf/2303.05063v4.pdf,2023-03-09,['cs.cl'],2303.05063v4.pdf,"  Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated
remarkable results in various natural language processing (NLP) tasks with
in-context learning, which involves inference based on a few demonstration
examples. Despite their successes in NLP tasks, no investigation has been
conducted to assess the ability of LLMs to perform document information
extraction (DIE) using in-context learning. Applying LLMs to DIE poses two
challenges: the modality and task gap. To this end, we propose a simple but
effective in-context learning framework called ICL-D3IE, which enables LLMs to
perform DIE with different types of demonstration examples. Specifically, we
extract the most difficult and distinct segments from hard training documents
as hard demonstrations to benefit all test instances. We design
demonstrations describing relationships that enable LLMs to understand
positional relationships. We introduce formatting demonstrations for easy
answer extraction. Additionally, the framework improves diverse demonstrations
by updating them iteratively. Our experiments on three widely used benchmark
datasets demonstrate that the ICL-D3IE framework enables Davinci-003/ChatGPT to
achieve superior performance when compared to previous pre-trained methods
fine-tuned with full training in both the in-distribution (ID) setting and in
the out-of-distribution (OOD) setting. Code is available at
https://github.com/MAEHCM/ICL-D3IE.
"
The Closeness of In-Context Learning and Weight Shifting for Softmax  Regression,Shuai Li,http://arxiv.org/pdf/2304.13276v1.pdf,2023-04-26,"['cs.cl', 'cs.lg']",2304.13276v1.pdf,"  Large language models (LLMs) are known for their exceptional performance in
natural language processing, making them highly effective in many human
life-related or even job-related tasks. The attention mechanism in the
Transformer architecture is a critical component of LLMs, as it allows the
model to selectively focus on specific input parts. The softmax unit, which is
a key part of the attention mechanism, normalizes the attention scores. Hence,
the performance of LLMs in various NLP tasks depends significantly on the
crucial role played by the attention mechanism with the softmax unit.
  In-context learning, as one of the celebrated abilities of recent LLMs, is an
important concept in querying LLMs such as ChatGPT. Without further parameter
updates, Transformers can learn to predict based on few in-context examples.
However, the reason why Transformers become in-context learners is not well
understood. Recently, several works [ASA+22,GTLV22,ONR+22] have studied the
in-context learning from a mathematical perspective based on a linear
regression formulation $\min_x\| Ax - b \|_2$, which show Transformers'
capability of learning linear functions in context.
  In this work, we study the in-context learning based on a softmax regression
formulation $\min_{x} \| \langle \exp(Ax), {\bf 1}_n \rangle^{-1} \exp(Ax) - b
\|_2$ of Transformer's attention mechanism. We show the upper bounds of the
data transformations induced by a single self-attention layer and by
gradient-descent on an $\ell_2$ regression loss for the softmax prediction function,
which imply that when training self-attention-only Transformers for fundamental
regression tasks, the models learned by gradient-descent and Transformers show
great similarity.
"
MMICL: Empowering Vision-language Model with Multi-Modal In-Context  Learning,Haozhe Zhao,http://arxiv.org/pdf/2309.07915v2.pdf,2023-09-14,"['cs.cl', 'cs.ai', 'cs.cv']",2309.07915v2.pdf,"  Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context.
"
Visual In-Context Learning for Few-Shot Eczema Segmentation,Neelesh Kumar,http://arxiv.org/pdf/2309.16656v1.pdf,2023-09-28,"['cs.cv', 'cs.lg']",2309.16656v1.pdf,"  Automated diagnosis of eczema from digital camera images is crucial for
developing applications that allow patients to self-monitor their recovery. An
important component of this is the segmentation of eczema region from such
images. Current methods for eczema segmentation rely on deep neural networks
such as convolutional (CNN)-based U-Net or transformer-based Swin U-Net. While
effective, these methods require a high volume of annotated data, which can be
difficult to obtain. Here, we investigate the capabilities of visual in-context
learning that can perform few-shot eczema segmentation with just a handful of
examples and without any need for retraining models. Specifically, we propose a
strategy for applying in-context learning for eczema segmentation with a
generalist vision model called SegGPT. When benchmarked on a dataset of
annotated eczema images, we show that SegGPT with just 2 representative example
images from the training dataset performs better (mIoU: 36.69) than a CNN U-Net
trained on 428 images (mIoU: 32.60). We also discover that using a larger number of
examples for SegGPT may in fact be harmful to its performance. Our result
highlights the importance of visual in-context learning in developing faster
and better solutions to skin imaging tasks. Our result also paves the way for
developing inclusive solutions that can cater to minorities in the demographics
who are typically heavily under-represented in the training data.
"
Learning To Retrieve Prompts for In-Context Learning,Ohad Rubin,http://arxiv.org/pdf/2112.08633v2.pdf,2021-12-16,"['cs.cl', 'cs.lg']",2112.08633v2.pdf,"  In-context learning is a recent paradigm in natural language understanding,
where a large pre-trained language model (LM) observes a test instance and a
few training examples as its input, and directly decodes the output without any
update to its parameters. However, performance has been shown to strongly
depend on the selected training examples (termed prompt). In this work, we
propose an efficient method for retrieving prompts for in-context learning
using annotated data and an LM. Given an input-output pair, we estimate the
probability of the output given the input and a candidate training example as
the prompt, and label training examples as positive or negative based on this
probability. We then train an efficient dense retriever from this data, which
is used to retrieve training examples as prompts at test time. We evaluate our
approach on three sequence-to-sequence tasks where language utterances are
mapped to meaning representations, and find that it substantially outperforms
prior work and multiple baselines across the board.
"
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models,Yanchen Liu,http://arxiv.org/pdf/2202.06133v1.pdf,2022-02-12,['cs.cl'],2202.06133v1.pdf,"  Due to the high costs associated with finetuning large language models,
various recent works propose to adapt them to specific tasks without any
parameter updates through in-context learning. Unfortunately, for in-context
learning there is currently no way to leverage unlabeled data, which is often
much easier to obtain in large quantities than labeled examples. In this work,
we therefore investigate ways to make use of unlabeled examples to improve the
zero-shot performance of pretrained language models without any finetuning: We
introduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifies
examples by retrieving semantically similar unlabeled examples, assigning
labels to them in a zero-shot fashion, and then using them for in-context
learning. We also propose bag-of-contexts priming, a new priming strategy that
is more suitable for our setting and enables the usage of more examples than
fit into the context window.
"
Complementary Explanations for Effective In-Context Learning,Xi Ye,http://arxiv.org/pdf/2211.13892v2.pdf,2022-11-25,['cs.cl'],2211.13892v2.pdf,"  Large language models (LLMs) have exhibited remarkable capabilities in
learning from explanations in prompts, but there has been limited understanding
of exactly how these explanations function or why they are effective. This work
aims to better understand the mechanisms by which explanations are used for
in-context learning. We first study the impact of two different factors on the
performance of prompts with explanations: the computation trace (the way the
solution is decomposed) and the natural language used to express the prompt. By
perturbing explanations on three controlled tasks, we show that both factors
contribute to the effectiveness of explanations. We further study how to form
maximally effective sets of explanations for solving a given test query. We
find that LLMs can benefit from the complementarity of the explanation set:
diverse reasoning skills shown by different exemplars can lead to better
performance. Therefore, we propose a maximal marginal relevance-based exemplar
selection approach for constructing exemplar sets that are both relevant as
well as complementary, which successfully improves the in-context learning
performance across three real-world tasks on multiple LLMs.
"
Diverse Demonstrations Improve In-context Compositional Generalization,Itay Levy,http://arxiv.org/pdf/2212.06800v3.pdf,2022-12-13,['cs.cl'],2212.06800v3.pdf,"  In-context learning has shown great success in i.i.d. semantic parsing splits,
where the training and test sets are drawn from the same distribution. In this
setup, models are typically prompted with demonstrations that are similar to
the input utterance. However, in the setup of compositional generalization,
where models are tested on outputs with structures that are absent from the
training set, selecting similar demonstrations is insufficient, as often no
example will be similar enough to the input. In this work, we propose a method
to select diverse demonstrations that aims to collectively cover all of the
structures required in the output program, in order to encourage the model to
generalize to new structures from these demonstrations. We empirically show
that combining diverse demonstrations with in-context learning substantially
improves performance across three compositional generalization semantic parsing
datasets in the pure in-context learning setup and when combined with
finetuning.
"
The Impact of Symbolic Representations on In-context Learning for  Few-shot Reasoning,Hanlin Zhang,http://arxiv.org/pdf/2212.08686v1.pdf,2022-12-16,['cs.cl'],2212.08686v1.pdf,"  Pre-trained language models (LMs) have shown remarkable reasoning performance
using explanations (or ``chain-of-thought'' (CoT)) for in-context learning. On
the other hand, these reasoning tasks are usually presumed to be more
approachable for symbolic programming. To make progress towards understanding
in-context learning, we curate synthetic datasets containing equivalent
(natural, symbolic) data pairs, where symbolic examples contain first-order
logic rules and predicates from knowledge bases (KBs). Then we revisit
neuro-symbolic approaches and use Language Models as Logic Programmer (LMLP)
that learns from demonstrations containing logic rules and corresponding
examples to iteratively reason over KBs, recovering Prolog's backward chaining
algorithm. Comprehensive experiments are included to systematically compare
LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more
than 25% higher accuracy than CoT on length generalization benchmarks even with
fewer parameters.
"
Self-Adaptive In-Context Learning: An Information Compression  Perspective for In-Context Example Selection and Ordering,Zhiyong Wu,http://arxiv.org/pdf/2212.10375v2.pdf,2022-12-20,"['cs.cl', 'cs.ai']",2212.10375v2.pdf,"  Despite the surprising few-shot performance of in-context learning (ICL), it
is still a common practice to randomly sample examples to serve as context.
This paper advocates a new principle for ICL: self-adaptive in-context
learning. The self-adaptation mechanism is introduced to help each sample find an
in-context example permutation (i.e., selection and ordering) that can derive
the correct prediction, thus maximizing performance. To validate the
effectiveness of self-adaptive ICL, we propose a general select-then-rank
framework and instantiate it with new selection and ranking algorithms. Upon
extensive evaluation on eight different NLP datasets, our self-adaptive ICL
method achieves a 40% relative improvement over the common practice setting.
Further analysis reveals the enormous potential of self-adaptive ICL: it
might be able to close the gap between ICL and finetuning given more advanced
algorithms. Our code is released to facilitate future research in this area:
https://github.com/Shark-NLP/self-adaptive-ICL
"
Privacy-Preserving In-Context Learning for Large Language Models,Tong Wu,http://arxiv.org/pdf/2305.01639v2.pdf,2023-05-02,"['cs.lg', 'cs.ai', 'cs.cr']",2305.01639v2.pdf,"  In-context learning (ICL) is an important capability of Large Language Models
(LLMs), enabling these models to dynamically adapt based on specific,
in-context exemplars, thereby improving accuracy and relevance. However, LLMs'
responses may leak the sensitive private information contained in in-context
exemplars. To address this challenge, we propose Differentially Private
In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The
key idea of the DP-ICL paradigm is to generate differentially private responses
through a noisy consensus among an ensemble of LLM responses based on
disjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiate
several techniques showing how to privatize ICL for text classification and
language generation. We evaluate DP-ICL on four text classification benchmarks
and two language generation tasks, and our empirical results show that DP-ICL
achieves a strong utility-privacy tradeoff.
"
In-context Learning as Maintaining Coherency: A Study of On-the-fly  Machine Translation Using Large Language Models,Suzanna Sia,http://arxiv.org/pdf/2305.03573v1.pdf,2023-05-05,"['cs.cl', 'cs.ai']",2305.03573v1.pdf,"  The phenomenon of in-context learning has typically been thought of as
""learning from examples"". In this work which focuses on Machine Translation, we
present a perspective of in-context learning as the desired generation task
maintaining coherency with its context, i.e., the prompt examples. We first
investigate randomly sampled prompts across 4 domains, and find that
translation performance improves when shown in-domain prompts. Next, we
investigate coherency for the in-domain setting, which uses prompt examples
from a moving window. We study this with respect to other factors that have
previously been identified in the literature such as length, surface similarity
and sentence embedding similarity. Our results across 3 models (GPTNeo2.7B,
Bloom3B, XGLM2.9B), and three translation directions
(\texttt{en}$\rightarrow$\{\texttt{pt, de, fr}\}) suggest that the long-term
coherency of the prompts and the test sentence is a good indicator of
downstream translation performance. In doing so, we demonstrate the efficacy of
In-context Machine Translation for on-the-fly adaptation.
"
Small Models are Valuable Plug-ins for Large Language Models,Canwen Xu,http://arxiv.org/pdf/2305.08848v1.pdf,2023-05-15,"['cs.cl', 'cs.ai', 'cs.lg']",2305.08848v1.pdf,"  Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their
weights are often publicly unavailable and their immense sizes make the models
difficult to tune with common hardware. As a result, effectively tuning
these models with large-scale supervised data can be challenging. As an
alternative, In-Context Learning (ICL) can only use a small number of
supervised examples due to context length limits. In this paper, we propose
Super In-Context Learning (SuperICL) which allows black-box LLMs to work with
locally fine-tuned smaller models, resulting in superior performance on
supervised tasks. Our experiments demonstrate that SuperICL can improve
performance beyond state-of-the-art fine-tuned models while addressing the
instability problem of in-context learning. Furthermore, SuperICL can enhance
the capabilities of smaller models, such as multilinguality and
interpretability.
"
ScoNe: Benchmarking Negation Reasoning in Language Models With  Fine-Tuning and In-Context Learning,Jingyuan Selena She,http://arxiv.org/pdf/2305.19426v1.pdf,2023-05-30,"['cs.cl', 'cs.lg']",2305.19426v1.pdf,"  A number of recent benchmarks seek to assess how well models handle natural
language negation. However, these benchmarks lack the controlled example
paradigms that would allow us to infer whether a model had learned how negation
morphemes semantically scope. To fill these analytical gaps, we present the
Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six
examples with up to two negations where either zero, one, or both negative
morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and
in-context learning strategies. We find that RoBERTa and DeBERTa models solve
ScoNe-NLI after many-shot fine-tuning. For in-context learning, we test
InstructGPT models and find that most prompt strategies are not successful,
including those using step-by-step reasoning. To better understand this result,
we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds
negation reasoning in short narratives. Here, InstructGPT is successful, which
reveals the model can correctly reason about negation, but struggles to do so
on prompt-adapted NLI examples outside of its core pretraining regime.
"
GPT-FinRE: In-context Learning for Financial Relation Extraction using  Large Language Models,Pawan Kumar Rajpoot,http://arxiv.org/pdf/2306.17519v2.pdf,2023-06-30,['cs.cl'],2306.17519v2.pdf,"  Relation extraction (RE) is a crucial task in natural language processing
(NLP) that aims to identify and classify relationships between entities
mentioned in text. In the financial domain, relation extraction plays a vital
role in extracting valuable information from financial documents, such as news
articles, earnings reports, and company filings. This paper describes our
solution to relation extraction on one such dataset REFinD. The dataset was
released along with a shared task as part of the Fourth Workshop on Knowledge
Discovery from Unstructured Data in Financial Services, co-located with SIGIR
2023. In this paper, we employed OpenAI models under the framework of
in-context learning (ICL). We utilized two retrieval strategies to find top K
relevant in-context learning demonstrations / examples from training data for a
given test example. The first retrieval mechanism we employed is a
learning-free dense retriever; the other is a learning-based
retriever. We were able to achieve 3rd rank overall. Our best F1-score is
0.718.
"
Code-Style In-Context Learning for Knowledge-Based Question Answering,Zhijie Nie,http://arxiv.org/pdf/2309.04695v1.pdf,2023-09-09,"['cs.cl', 'cs.ai']",2309.04695v1.pdf,"  Current methods for Knowledge-Based Question Answering (KBQA) usually rely on
complex training techniques and model frameworks, leading to many limitations
in practical applications. Recently, the emergence of In-Context Learning (ICL)
capabilities in Large Language Models (LLMs) provides a simple and
training-free semantic parsing paradigm for KBQA: Given a small number of
questions and their labeled logical forms as demo examples, LLMs can understand
the task intent and generate the logic form for a new question. However,
current powerful LLMs have little exposure to logic forms during pre-training,
resulting in a high format error rate. To solve this problem, we propose a
code-style in-context learning method for KBQA, which converts the generation
process of unfamiliar logical forms into the more familiar code generation
process for LLMs. Experimental results on three mainstream datasets show that
our method dramatically mitigates the formatting error problem in generating
logic forms while realizing a new SOTA on WebQSP, GrailQA, and GraphQ under the
few-shot setting.
"
Can Whisper perform speech-based in-context learning?,Siyin Wang,http://arxiv.org/pdf/2309.07081v1.pdf,2023-09-13,"['eess.as', 'cs.cl', 'cs.sd']",2309.07081v1.pdf,"  This paper investigates the in-context learning abilities of the Whisper
automatic speech recognition (ASR) models released by OpenAI. A novel
speech-based in-context learning (SICL) approach is proposed for test-time
adaptation, which can reduce the word error rates (WERs) with only a small
number of labelled speech samples without gradient descent. Language-level
adaptation experiments using Chinese dialects showed that when applying SICL to
isolated word ASR, consistent and considerable relative WER reductions can be
achieved using Whisper models of any size on two dialects, which is on average
32.3%. A k-nearest-neighbours-based in-context example selection technique can
be applied to further improve the efficiency of SICL, which can increase the
average relative WER reduction to 36.4%. The findings are verified using
speaker adaptation or continuous speech recognition tasks, and both achieved
considerable relative WER reductions. Detailed quantitative analyses are also
provided to shed light on SICL's adaptability to phonological variances and
dialect-specific lexical nuances.
"
ICLEF: In-Context Learning with Expert Feedback for Explainable Style  Transfer,Arkadiy Saakyan,http://arxiv.org/pdf/2309.08583v1.pdf,2023-09-15,['cs.cl'],2309.08583v1.pdf,"  While state-of-the-art language models excel at the style transfer task,
current work does not address explainability of style transfer systems.
Explanations could be generated using large language models such as GPT-3.5 and
GPT-4, but the use of such complex systems is inefficient when smaller, widely
distributed, and transparent alternatives are available. We propose a framework
to augment and improve a formality style transfer dataset with explanations via
model distillation from ChatGPT. To further refine the generated explanations,
we propose a novel way to incorporate scarce expert human feedback using
in-context learning (ICLEF: In-Context Learning from Expert Feedback) by
prompting ChatGPT to act as a critic to its own outputs. We use the resulting
dataset of 9,960 explainable formality style transfer instances (e-GYAFC) to
show that current openly distributed instruction-tuned models (and, in some
settings, ChatGPT) perform poorly on the task, and that fine-tuning on our
high-quality dataset leads to significant improvements as shown by automatic
evaluation. In human evaluation, we show that models much smaller than ChatGPT
fine-tuned on our data align better with expert preferences. Finally, we
discuss two potential applications of models fine-tuned on the explainable
style transfer task: interpretable authorship verification and interpretable
adversarial attacks on AI-generated text detectors.
"
SALM: Speech-augmented Language Model with In-context Learning for  Speech Recognition and Translation,Zhehuai Chen,http://arxiv.org/pdf/2310.09424v1.pdf,2023-10-13,"['cs.cl', 'cs.hc', 'cs.sd', 'eess.as', '68t10', 'i.2.7']",2310.09424v1.pdf,"  We present a novel Speech Augmented Language Model (SALM) with {\em
multitask} and {\em in-context} learning capabilities. SALM comprises a frozen
text LLM, an audio encoder, a modality adapter module, and LoRA layers to
accommodate speech input and associated task instructions. The unified SALM not
only achieves performance on par with task-specific Conformer baselines for
Automatic Speech Recognition (ASR) and Speech Translation (AST), but also
exhibits zero-shot in-context learning capabilities, demonstrated through
a keyword-boosting task for ASR and AST. Moreover, {\em speech supervised
in-context training} is proposed to bridge the gap between LLM training and
downstream speech tasks, which further boosts the in-context learning ability
of speech-to-text models. The proposed model is open-sourced via the NeMo toolkit.
"
Utilising a Large Language Model to Annotate Subject Metadata: A Case  Study in an Australian National Research Data Catalogue,Shiwei Zhang,http://arxiv.org/pdf/2310.11318v1.pdf,2023-10-17,"['cs.cl', 'cs.ai']",2310.11318v1.pdf,"  In support of open and reproducible research, there has been a rapidly
increasing number of datasets made available for research. As the availability
of datasets increases, it becomes more important to have quality metadata for
discovering and reusing them. Yet, it is a common issue that datasets often
lack quality metadata due to limited resources for data curation. Meanwhile,
technologies such as artificial intelligence and large language models (LLMs)
are progressing rapidly. Recently, systems based on these technologies, such as
ChatGPT, have demonstrated promising capabilities for certain data curation
tasks. This paper proposes to leverage LLMs for cost-effective annotation of
subject metadata through LLM-based in-context learning. Our method employs
GPT-3.5 with prompts designed for annotating subject metadata, demonstrating
promising performance in automatic metadata annotation. However, models based
on in-context learning cannot acquire discipline-specific rules, resulting in
lower performance in several categories. This limitation arises from the
limited contextual information available for subject inference. To the best of
our knowledge, we are introducing, for the first time, an in-context learning
method that harnesses large language models for automated subject metadata
annotation.
"
Hint-enhanced In-Context Learning wakes Large Language Models up for  knowledge-intensive tasks,Yifan Wang,http://arxiv.org/pdf/2311.01949v1.pdf,2023-11-03,['cs.cl'],2311.01949v1.pdf,"  In-context learning (ICL) ability has emerged with the increasing scale of
large language models (LLMs), enabling them to learn input-label mappings from
demonstrations and perform well on downstream tasks. However, under the
standard ICL setting, LLMs may sometimes neglect query-related information in
demonstrations, leading to incorrect predictions. To address this limitation,
we propose a new paradigm called Hint-enhanced In-Context Learning (HICL) to
explore the power of ICL in open-domain question answering, an important form
of knowledge-intensive task. HICL leverages LLMs' reasoning ability to extract
query-related knowledge from demonstrations, then concatenates the knowledge to
prompt LLMs in a more explicit way. Furthermore, we track the source of this
knowledge to identify specific examples, and introduce a Hint-related Example
Retriever (HER) to select informative examples for enhanced demonstrations. We
evaluate HICL with HER on 3 open-domain QA benchmarks, and observe average
performance gains of 2.89 EM score and 2.52 F1 score on gpt-3.5-turbo, 7.62 EM
score and 7.27 F1 score on LLaMA-2-Chat-7B compared with standard setting.
"
Rethinking the Role of Demonstrations: What Makes In-Context Learning  Work?,Sewon Min,http://arxiv.org/pdf/2202.12837v2.pdf,2022-02-25,"['cs.cl', 'cs.ai']",2202.12837v2.pdf,"  Large language models (LMs) are able to in-context learn -- perform a new
task via inference alone by conditioning on a few input-label pairs
(demonstrations) and making predictions for new inputs. However, there has been
little understanding of how the model learns and which aspects of the
demonstrations contribute to end task performance. In this paper, we show that
ground truth demonstrations are in fact not required -- randomly replacing
labels in the demonstrations barely hurts performance on a range of
classification and multi-choice tasks, consistently over 12 different models
including GPT-3. Instead, we find that other aspects of the demonstrations are
the key drivers of end task performance, including the fact that they provide a
few examples of (1) the label space, (2) the distribution of the input text,
and (3) the overall format of the sequence. Together, our analysis provides a
new way of understanding how and why in-context learning works, while opening
up new questions about how much can be learned from large language models
through inference alone.
"
Can Foundation Models Help Us Achieve Perfect Secrecy?,Simran Arora,http://arxiv.org/pdf/2205.13722v2.pdf,2022-05-27,"['cs.lg', 'cs.cl']",2205.13722v2.pdf,"  A key promise of machine learning is the ability to assist users with
personal tasks. Because the personal context required to make accurate
predictions is often sensitive, we require systems that protect privacy. A gold
standard privacy-preserving system will satisfy perfect secrecy, meaning that
interactions with the system provably reveal no private information. However,
privacy and quality appear to be in tension in existing systems for personal
tasks. Neural models typically require copious amounts of training to perform
well, while individual users typically hold a limited scale of data, so
federated learning (FL) systems propose to learn from the aggregate data of
multiple users. FL does not provide perfect secrecy, but rather practitioners
apply statistical notions of privacy -- i.e., the probability of learning
private information about a user should be reasonably low. The strength of the
privacy guarantee is governed by privacy parameters. Numerous privacy attacks
have been demonstrated on FL systems and it can be challenging to reason about
the appropriate privacy parameters for a privacy-sensitive use case. Therefore
our work proposes a simple baseline for FL, which both provides the stronger
perfect secrecy guarantee and does not require setting any privacy parameters.
We initiate the study of when and where an emerging tool in ML -- the
in-context learning abilities of recent pretrained models -- can be an
effective baseline alongside FL. We find in-context learning is competitive
with strong FL baselines on 6 of 7 popular benchmarks from the privacy
literature and a real-world case study, which is disjoint from the pretraining
data. We release our code here: https://github.com/simran-arora/focus
"
Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of  In-Context Experts,Nghia T. Le,http://arxiv.org/pdf/2210.03690v2.pdf,2022-10-07,"['cs.cl', 'cs.ai']",2210.03690v2.pdf,"  Anaphora resolution is an important task for information extraction across a
range of languages, text genres, and domains, motivating the need for methods
that do not require large annotated datasets. In-context learning has emerged
as a promising approach, yet there are a number of challenges in applying
in-context learning to resolve anaphora. For example, encoding a single
in-context demonstration that consists of: an anaphor, a paragraph-length
context, and a list of corresponding antecedents, requires conditioning a
language model on a long sequence of tokens, limiting the number of
demonstrations per prompt. In this paper, we present MICE (Mixtures of
In-Context Experts), which we demonstrate is effective for few-shot anaphora
resolution in scientific protocols (Tamari et al., 2021). Given only a handful
of training examples, MICE combines the predictions of hundreds of in-context
experts, yielding a 30% increase in F1 score over a competitive prompt
retrieval baseline. Furthermore, we show MICE can be used to train compact
student models without sacrificing performance. As far as we are aware, this is
the first work to present experimental results demonstrating the effectiveness
of in-context learning on the task of few-shot anaphora resolution in
scientific protocols.
"
What learning algorithm is in-context learning? Investigations with  linear models,Ekin Akyürek,http://arxiv.org/pdf/2211.15661v3.pdf,2022-11-28,"['cs.lg', 'cs.cl']",2211.15661v3.pdf,"  Neural sequence models, especially transformers, exhibit a remarkable
capacity for in-context learning. They can construct new predictors from
sequences of labeled examples $(x, f(x))$ presented in the input without
further parameter updates. We investigate the hypothesis that transformer-based
in-context learners implement standard learning algorithms implicitly, by
encoding smaller models in their activations, and updating these implicit
models as new examples appear in the context. Using linear regression as a
prototypical problem, we offer three sources of evidence for this hypothesis.
First, we prove by construction that transformers can implement learning
algorithms for linear models based on gradient descent and closed-form ridge
regression. Second, we show that trained in-context learners closely match the
predictors computed by gradient descent, ridge regression, and exact
least-squares regression, transitioning between different predictors as
transformer depth and dataset noise vary, and converging to Bayesian estimators
for large widths and depths. Third, we present preliminary evidence that
in-context learners share algorithmic features with these predictors: learners'
late layers non-linearly encode weight vectors and moment matrices. These
results suggest that in-context learning is understandable in algorithmic
terms, and that (at least in the linear case) learners may rediscover standard
estimation algorithms. Code and reference implementations are released at
https://github.com/ekinakyurek/google-research/blob/master/incontext.
"
SE Factual Knowledge in Frozen Giant Code Model: A Study on FQN and its  Retrieval,Qing Huang,http://arxiv.org/pdf/2212.08221v1.pdf,2022-12-16,['cs.se'],2212.08221v1.pdf,"  Pre-trained giant code models (PCMs) start coming into the developers' daily
practices. Understanding what types of and how much software knowledge is
packed into PCMs is the foundation for incorporating PCMs into software
engineering (SE) tasks and fully releasing their potential. In this work, we
conduct the first systematic study on the SE factual knowledge in the
state-of-the-art PCM CoPilot, focusing on APIs' Fully Qualified Names (FQNs),
the fundamental knowledge for effective code analysis, search and reuse. Driven
by FQNs' data distribution properties, we design a novel lightweight in-context
learning approach on Copilot for FQN inference, which requires neither code
compilation, as traditional methods do, nor gradient updates, as recent FQN prompt-tuning does. We
systematically experiment with five in-context-learning design factors to
identify the best in-context learning configuration that developers can adopt
in practice. With this best configuration, we investigate the effects of amount
of example prompts and FQN data properties on Copilot's FQN inference
capability. Our results confirm that Copilot stores diverse FQN knowledge and
can be applied for the FQN inference due to its high inference accuracy and
non-reliance on code analysis. Based on our experience interacting with
Copilot, we discuss various opportunities to improve human-CoPilot interaction
in the FQN inference task.
"
Transformers as Algorithms: Generalization and Stability in In-context  Learning,Yingcong Li,http://arxiv.org/pdf/2301.07067v2.pdf,2023-01-17,"['cs.lg', 'cs.cl', 'stat.ml']",2301.07067v2.pdf,"  In-context learning (ICL) is a type of prompting where a transformer model
operates on a sequence of (input, output) examples and performs inference
on-the-fly. In this work, we formalize in-context learning as an algorithm
learning problem where a transformer model implicitly constructs a hypothesis
function at inference-time. We first explore the statistical aspects of this
abstraction through the lens of multitask learning: We obtain generalization
bounds for ICL when the input prompt is (1) a sequence of i.i.d. (input, label)
pairs or (2) a trajectory arising from a dynamical system. The crux of our
analysis is relating the excess risk to the stability of the algorithm
implemented by the transformer. We characterize when transformer/attention
architecture provably obeys the stability condition and also provide empirical
verification. For generalization on unseen tasks, we identify an inductive bias
phenomenon in which the transfer learning risk is governed by the task
complexity and the number of MTL tasks in a highly predictable manner. Finally,
we provide numerical evaluations that (1) demonstrate transformers can indeed
implement near-optimal algorithms on classical regression problems with i.i.d.
and dynamic data, (2) provide insights on stability, and (3) verify our
theoretical predictions.
"
Adaptive Machine Translation with Large Language Models,Yasmin Moslem,http://arxiv.org/pdf/2301.13294v3.pdf,2023-01-30,['cs.cl'],2301.13294v3.pdf,"  Consistency is a key requirement of high-quality translation. It is
especially important to adhere to pre-approved terminology and adapt to
corrected translations in domain-specific projects. Machine translation (MT)
has achieved significant progress in the area of domain adaptation. However,
real-time adaptation remains challenging. Large-scale language models (LLMs)
have recently shown interesting capabilities of in-context learning, where they
learn to replicate certain input-output text generation patterns, without
further fine-tuning. By feeding an LLM at inference time with a prompt that
consists of a list of translation pairs, it can then simulate the domain and
style characteristics. This work aims to investigate how we can utilize
in-context learning to improve real-time adaptive MT. Our extensive experiments
show promising results at translation time. For example, LLMs can adapt to a
set of in-domain sentence pairs and/or terminology while translating a new
sentence. We observe that the translation quality with few-shot in-context
learning can surpass that of strong encoder-decoder MT systems, especially for
high-resource languages. Moreover, we investigate whether we can combine MT
from strong encoder-decoder models with fuzzy matches, which can further
improve translation quality, especially for less supported languages. We
conduct our experiments across five diverse language pairs, namely
English-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French
(EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).
"
ScatterShot: Interactive In-context Example Curation for Text  Transformation,Tongshuang Wu,http://arxiv.org/pdf/2302.07346v1.pdf,2023-02-14,"['cs.hc', 'cs.cl']",2302.07346v1.pdf,"  The in-context learning capabilities of LLMs like GPT-3 allow annotators to
customize an LLM to their specific tasks with a small number of examples.
However, users tend to include only the most obvious patterns when crafting
examples, resulting in underspecified in-context functions that fall short on
unseen cases. Further, it is hard to know when ""enough"" examples have been
included even for known patterns. In this work, we present ScatterShot, an
interactive system for building high-quality demonstration sets for in-context
learning. ScatterShot iteratively slices unlabeled data into task-specific
patterns, samples informative inputs from underexplored or not-yet-saturated
slices in an active learning manner, and helps users label more efficiently
with the help of an LLM and the current example set. In simulation studies on
two text perturbation scenarios, ScatterShot sampling improves the resulting
few-shot functions by 4-5 percentage points over random sampling, with less
variance as more examples are added. In a user study, ScatterShot greatly helps
users in covering different patterns in the input space and labeling in-context
examples more efficiently, resulting in better in-context learning and less
user effort.
"
Resources and Few-shot Learners for In-context Learning in Slavic  Languages,Michal Štefánik,http://arxiv.org/pdf/2304.01922v1.pdf,2023-04-04,['cs.cl'],2304.01922v1.pdf,"  Despite the rapid recent progress in creating accurate and compact in-context
learners, most recent work focuses on in-context learning (ICL) for tasks in
English. However, the ability to interact with users of languages outside
English presents a great potential for broadening the applicability of language
technologies to non-English speakers.
  In this work, we collect the infrastructure necessary for training and
evaluation of ICL in a selection of Slavic languages: Czech, Polish, and
Russian. We link a diverse set of datasets and cast these into a unified
instructional format through a set of transformations and newly-crafted
templates written purely in target languages. Using the newly-curated dataset,
we evaluate a set of the most recent in-context learners and compare their
results to the supervised baselines. Finally, we train, evaluate and publish a
set of in-context learning models that we train on the collected resources and
compare their performance to previous work.
  We find that ICL models tuned in English are also able to learn some tasks
from non-English contexts, but multilingual instruction fine-tuning
consistently improves the ICL ability. We also find that the massive multitask
training can be outperformed by single-task training in the target language,
uncovering the potential for specializing in-context learners to the
language(s) of their application.
"
Boosting Theory-of-Mind Performance in Large Language Models via  Prompting,Shima Rahimi Moghaddam,http://arxiv.org/pdf/2304.11490v3.pdf,2023-04-22,"['cs.ai', 'cs.cl']",2304.11490v3.pdf,"  Large language models (LLMs) excel in many tasks in 2023, but they still face
challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require
understanding agents' beliefs, goals, and mental states, are essential for
common-sense reasoning involving humans, making it crucial to enhance LLM
performance in this area. This study measures the ToM performance of GPT-4 and
three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates
the effectiveness of in-context learning in improving their ToM comprehension.
We evaluated prompts featuring two-shot chain of thought reasoning and
step-by-step thinking instructions. We found that LLMs trained with
Reinforcement Learning from Human Feedback (RLHF) (all models excluding
Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed
best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell
short of the 87% human accuracy on the test set. However, when supplied with
prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM
accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate
prompting enhances LLM ToM reasoning, and they underscore the context-dependent
nature of LLM cognitive capacities.
"
Unified Demonstration Retriever for In-Context Learning,Xiaonan Li,http://arxiv.org/pdf/2305.04320v2.pdf,2023-05-07,['cs.cl'],2305.04320v2.pdf,"  In-context learning is a new learning paradigm where a language model
conditions on a few input-output pairs (demonstrations) and a test input, and
directly outputs the prediction. It has been shown highly dependent on the
provided demonstrations and thus promotes the research of demonstration
retrieval: given a test input, relevant examples are retrieved from the
training set to serve as informative demonstrations for in-context learning.
While previous works focus on training task-specific retrievers for several
tasks separately, these methods are often hard to transfer and scale on various
tasks, and separately trained retrievers incur a lot of parameter storage and
deployment cost. In this paper, we propose Unified Demonstration Retriever
(\textbf{UDR}), a single model to retrieve demonstrations for a wide range of
tasks. To train UDR, we cast various tasks' training signals into a unified
list-wise ranking formulation by the language model's feedback. Then we propose a
multi-task list-wise ranking training framework, with an iterative mining
strategy to find high-quality candidates, which can help UDR fully incorporate
various tasks' signals. Experiments on 30+ tasks across 13 task families and
multiple data domains show that UDR significantly outperforms baselines.
Further analyses show the effectiveness of each proposed component and UDR's
strong ability in various scenarios including different LMs (1.3B - 175B),
unseen datasets, varying demonstration quantities, etc.
"
Efficient Prompting via Dynamic In-Context Learning,Wangchunshu Zhou,http://arxiv.org/pdf/2305.11170v1.pdf,2023-05-18,"['cs.cl', 'cs.ai', 'cs.lg']",2305.11170v1.pdf,"  The primary way of building AI applications is shifting from training
specialist models to prompting generalist models. A common practice for
prompting generalist models, often referred to as in-context learning, is to
append a few examples (demonstrations) to the prompt to help the model better
understand the task. While effective, in-context learning can be inefficient
because it makes the input prompt much longer, consuming valuable space in the
context window and leading to larger computational costs. In this paper, we
propose DynaICL, a recipe for efficient prompting with black-box generalist
models that dynamically allocate in-context examples according to the input
complexity and the computational budget. To achieve this, we train a meta
controller that predicts the number of in-context examples suitable for the
generalist model to make a good prediction based on the performance-efficiency
trade-off for a specific input. We then dynamically allocate the number of
demonstrations for an input according to predictions from the meta controller
and the given computation budget. Experimental results show that dynamic
example allocation helps achieve a better performance-efficiency trade-off in
two practical settings where computational resources or the required
performance is constrained. Specifically, DynaICL saves up to 46% token budget
compared to the common practice that allocates the same number of in-context
examples to each input. We also find that a meta controller trained on a
certain backbone model and tasks can successfully generalize to unseen models
and tasks.
"
Post Hoc Explanations of Language Models Can Improve Language Models,Satyapriya Krishna,http://arxiv.org/pdf/2305.11426v2.pdf,2023-05-19,"['cs.cl', 'cs.ai']",2305.11426v2.pdf,"  Large Language Models (LLMs) have demonstrated remarkable capabilities in
performing complex tasks. Moreover, recent research has shown that
incorporating human-annotated rationales (e.g., Chain-of-Thought prompting)
during in-context learning can significantly enhance the performance of these
models, particularly on tasks that require reasoning capabilities. However,
incorporating such rationales poses challenges in terms of scalability as this
requires a high degree of human involvement. In this work, we present a novel
framework, Amplifying Model Performance by Leveraging In-Context Learning with
Post Hoc Explanations (AMPLIFY), which addresses the aforementioned challenges
by automating the process of rationale generation. To this end, we leverage
post hoc explanation methods which output attribution scores (explanations)
capturing the influence of each of the input features on model predictions.
More specifically, we construct automated natural language rationales that
embed insights from post hoc explanations to provide corrective signals to
LLMs. Extensive experimentation with real-world datasets demonstrates that our
framework, AMPLIFY, leads to prediction accuracy improvements of about 10-25%
over a wide range of tasks, including those where prior approaches that rely on human-annotated rationales, such as Chain-of-Thought prompting, fall short.
Our work makes one of the first attempts at highlighting the potential of post
hoc explanations as valuable tools for enhancing the effectiveness of LLMs.
Furthermore, we conduct additional empirical analyses and ablation studies to
demonstrate the impact of each of the components of AMPLIFY, which, in turn,
leads to critical insights for refining in-context learning.
"
Explaining Emergent In-Context Learning as Kernel Regression,Chi Han,http://arxiv.org/pdf/2305.12766v2.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.12766v2.pdf,"  Large language models (LLMs) have initiated a paradigm shift in transfer
learning. In contrast to the classic pretraining-then-finetuning procedure, in
order to use LLMs for downstream prediction tasks, one only needs to provide a
few demonstrations, known as in-context examples, without adding more or
updating existing model parameters. This in-context learning (ICL) capability
of LLMs is intriguing, and it is not yet fully understood how pretrained LLMs
acquire such capabilities. In this paper, we investigate the reason why a
transformer-based language model can accomplish in-context learning after
pre-training on a general language corpus by proposing one hypothesis that LLMs
can simulate kernel regression with internal representations when faced with
in-context examples. More concretely, we first prove that Bayesian inference on
in-context prompts can be asymptotically understood as kernel regression $\hat
y = \sum_i y_i K(x, x_i)/\sum_i K(x, x_i)$ as the number of in-context
demonstrations grows. Then, we empirically investigate the in-context behaviors
of language models. We find that during ICL, the attention and hidden features
in LLMs match the behaviors of a kernel regression. Finally, our theory
provides insights into multiple phenomena observed in the ICL field: why
retrieving demonstrative samples similar to test samples can help, why ICL
performance is sensitive to the output formats, and why ICL accuracy benefits
from selecting in-distribution and representative samples.
"
RetICL: Sequential Retrieval of In-Context Examples with Reinforcement  Learning,Alexander Scarlatos,http://arxiv.org/pdf/2305.14502v1.pdf,2023-05-23,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14502v1.pdf,"  Many recent developments in large language models focus on prompting them to
perform specific tasks. One effective prompting method is in-context learning,
where the model performs a (possibly new) generation/prediction task given one
(or more) examples. Past work has shown that the choice of examples can make a
large impact on task performance. However, finding good examples is not
straightforward since the definition of a representative group of examples can
vary greatly depending on the task. While there are many existing methods for
selecting in-context examples, they generally score examples independently,
ignoring the dependency between them and the order in which they are provided
to the large language model. In this work, we propose Retrieval for In-Context
Learning (RetICL), a learnable method for modeling and optimally selecting
examples sequentially for in-context learning. We frame the problem of
sequential example selection as a Markov decision process, design an example
retriever model using an LSTM, and train it using proximal policy optimization
(PPO). We validate RetICL on math problem solving datasets and show that it
outperforms both heuristic and learnable baselines, and achieves
state-of-the-art accuracy on the TabMWP dataset. We also use case studies to
show that RetICL implicitly learns representations of math problem solving
strategies.
"
In-Context Learning for Attention Scheme: from Single Softmax Regression  to Multiple Softmax Regression via a Tensor Trick,Yeqi Gao,http://arxiv.org/pdf/2307.02419v1.pdf,2023-07-05,['cs.lg'],2307.02419v1.pdf,"  Large language models (LLMs) have brought significant and transformative
changes in human society. These models have demonstrated remarkable
capabilities in natural language understanding and generation, leading to
various advancements and impacts across several domains.
  In this work, we consider in-context learning under two formulations of attention-related regression. Given matrices $A_1 \in \mathbb{R}^{n \times d}$, $A_2 \in \mathbb{R}^{n \times d}$, and $B \in \mathbb{R}^{n \times n}$, the goal is to solve certain optimization problems: Normalized version
$\min_{X} \| D(X)^{-1} \exp(A_1 X A_2^\top) - B \|_F^2$ and Rescaled version
$\| \exp(A_1 X A_2^\top) - D(X) \cdot B \|_F^2$. Here $D(X) := \mathrm{diag}(
\exp(A_1 X A_2^\top) {\bf 1}_n )$.
  Our regression problem shares similarities with previous studies on
softmax-related regression. Prior research has extensively investigated
regression techniques related to softmax regression: Normalized version $\|
\langle \exp(Ax) , {\bf 1}_n \rangle^{-1} \exp(Ax) - b \|_2^2$ and Rescaled version $\| \exp(Ax) - \langle \exp(Ax), {\bf 1}_n \rangle b \|_2^2$.
  In contrast to previous approaches, we adopt a vectorization technique to
address the regression problem in matrix formulation. This approach expands the
dimension from $d$ to $d^2$, resembling the formulation of the regression
problem mentioned earlier.
  Upon completing the Lipschitz analysis of our regression function, we derive our main result concerning in-context learning.
"
SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction  and Drug Design,Carl Edwards,http://arxiv.org/pdf/2307.11694v2.pdf,2023-06-19,"['cs.ai', 'cs.lg', 'q-bio.bm', 'q-bio.mn']",2307.11694v2.pdf,"  Predicting synergistic drug combinations can help accelerate discovery of
cancer treatments, particularly therapies personalized to a patient's specific
tumor via biopsied cells. In this paper, we propose a novel setting and models
for in-context drug synergy learning. We are given a small ""personalized
dataset"" of 10-20 drug synergy relationships in the context of specific cancer
cell targets. Our goal is to predict additional drug synergy relationships in
that context. Inspired by recent work that pre-trains a GPT language model (LM)
to ""in-context learn"" common function classes, we devise novel pre-training
schemes that enable a GPT model to in-context learn ""drug synergy functions"".
Our model -- which does not use any textual corpora, molecular fingerprints,
protein interaction or any other domain-specific knowledge -- is able to
achieve competitive results. We further integrate our in-context approach with
a genetic algorithm to optimize model prompts and select synergy candidates to
test after conducting a patient biopsy. Finally, we explore a novel task of
inverse drug design which can potentially enable the design of drugs that
synergize specifically to target a given patient's ""personalized dataset"". Our
findings can potentially have an important impact on precision cancer medicine,
and also raise intriguing questions on non-textual pre-training for LMs.
"
OUTFOX: LLM-generated Essay Detection through In-context Learning with  Adversarially Generated Examples,Ryuto Koike,http://arxiv.org/pdf/2307.11729v2.pdf,2023-07-21,['cs.cl'],2307.11729v2.pdf,"  Large Language Models (LLMs) have achieved human-level fluency in text
generation, making it difficult to distinguish between human-written and
LLM-generated texts. This poses a growing risk of misuse of LLMs and demands
the development of detectors to identify LLM-generated texts. However, existing
detectors lack robustness against attacks: they degrade detection accuracy by
simply paraphrasing LLM-generated texts. Furthermore, a malicious user might
attempt to deliberately evade the detectors based on detection results, but
this has not been assumed in previous studies. In this paper, we propose
OUTFOX, a framework that improves the robustness of LLM-generated-text
detectors by allowing both the detector and the attacker to consider each
other's output. In this framework, the attacker uses the detector's prediction
labels as examples for in-context learning and adversarially generates essays
that are harder to detect, while the detector uses the adversarially generated
essays as examples for in-context learning to learn to detect essays from a
strong attacker. Experiments in the domain of student essays show that the
proposed detector improves the detection performance on the attacker-generated
texts by up to +41.3 points in F1-score. Furthermore, the proposed detector
shows a state-of-the-art detection performance: up to 96.9 points in F1-score,
beating existing detectors on non-attacked texts. Finally, the proposed
attacker drastically degrades the performance of detectors, by up to 57.0 points in F1-score, massively outperforming the baseline paraphrasing method for
evading detection.
"
Metric-Based In-context Learning: A Case Study in Text Simplification,Subha Vadlamannati,http://arxiv.org/pdf/2307.14632v1.pdf,2023-07-27,"['cs.cl', 'cs.ai']",2307.14632v1.pdf,"  In-context learning (ICL) for large language models has proven to be a
powerful approach for many natural language processing tasks. However,
determining the best method to select examples for ICL is nontrivial as the
results can vary greatly depending on the quality, quantity, and order of
examples used. In this paper, we conduct a case study on text simplification
(TS) to investigate how to select the best and most robust examples for ICL. We
propose Metric-Based in-context Learning (MBL) method that utilizes commonly
used TS metrics such as SARI, compression ratio, and BERT-Precision for
selection. Through an extensive set of experiments with various-sized GPT
models on standard TS benchmarks such as TurkCorpus and ASSET, we show that
examples selected by the top SARI scores perform the best on larger models such
as GPT-175B, while the compression ratio generally performs better on smaller
models such as GPT-13B and GPT-6.7B. Furthermore, we demonstrate that MBL is
generally robust to example orderings and out-of-domain test sets, and
outperforms strong baselines and state-of-the-art finetuned language models.
Finally, we show that the behaviour of large GPT models can be implicitly
controlled by the chosen metric. Our research provides a new framework for
selecting examples in ICL, and demonstrates its effectiveness in text
simplification tasks, breaking new ground for more accurate and efficient NLG
systems.
"
HICL: Hashtag-Driven In-Context Learning for Social Media Natural  Language Understanding,Hanzhuo Tan,http://arxiv.org/pdf/2308.09985v1.pdf,2023-08-19,['cs.cl'],2308.09985v1.pdf,"  Natural language understanding (NLU) is integral to various social media
applications. However, existing NLU models rely heavily on context for semantic
learning, resulting in compromised performance when faced with short and noisy
social media content. To address this issue, we leverage in-context learning
(ICL), wherein language models learn to make inferences by conditioning on a
handful of demonstrations to enrich the context and propose a novel
hashtag-driven in-context learning (HICL) framework. Concretely, we pre-train a
model #Encoder, which employs #hashtags (user-annotated topic labels) to drive
BERT-based pre-training through contrastive learning. Our objective here is to
enable #Encoder to gain the ability to incorporate topic-related semantic
information, which allows it to retrieve topic-related posts to enrich contexts
and enhance social media NLU with noisy contexts. To further integrate the
retrieved context with the source text, we employ a gradient-based method to
identify trigger terms useful in fusing information from both sources. For
empirical studies, we collected 45M tweets to set up an in-context NLU
benchmark, and the experimental results on seven downstream tasks show that
HICL substantially advances the previous state-of-the-art results. Furthermore,
we conducted extensive analyses and found that: (1) combining the source input with a top-retrieved post from #Encoder is more effective than using semantically similar posts; (2) trigger words are highly beneficial for merging context from the source and retrieved posts.
"
Improving the Reliability of Large Language Models by Leveraging  Uncertainty-Aware In-Context Learning,Yuchen Yang,http://arxiv.org/pdf/2310.04782v1.pdf,2023-10-07,['cs.cl'],2310.04782v1.pdf,"  In recent years, large-scale language models (LLMs) have gained attention for
their impressive text generation capabilities. However, these models often face
the challenge of ""hallucination,"" which undermines their reliability. In this
study, we introduce an uncertainty-aware in-context learning framework to
empower the model to enhance or reject its output in response to uncertainty.
Human-defined methods for estimating uncertainty typically assume that
""uncertainty is lower when the model's response is correct compared to when it
is incorrect."" However, setting a precise threshold to distinguish correctness
is challenging. Therefore, we introduce uncertainty information as an
intermediary variable that implicitly influences the model's behavior. Our
innovative uncertainty-aware in-context learning framework involves fine-tuning
the LLM using a calibration dataset. Our aim is to improve the model's
responses by filtering out answers with high uncertainty while considering the
model's knowledge limitations. We evaluate the model's knowledge by examining
multiple responses to the same question for the presence of a correct answer.
When the model lacks relevant knowledge, the response should indicate that the
question cannot be answered. Conversely, when the model has relevant knowledge,
the response should provide the correct answer. Extensive experiments confirm
the effectiveness of our framework, leading to two key findings. First, the
logit output values of the LLM partly reflect inherent uncertainty. Second, our
model autonomously recognizes uncertainty, resulting in improved responses.
"
In-Context Convergence of Transformers,Yu Huang,http://arxiv.org/pdf/2310.05249v1.pdf,2023-10-08,"['cs.lg', 'cs.ai', 'math.oc', 'stat.ml']",2310.05249v1.pdf,"  Transformers have recently revolutionized many domains in modern machine
learning and one salient discovery is their remarkable in-context learning
capability, where models can solve an unseen task by utilizing task-specific prompts without further parameter fine-tuning. This has also inspired recent
theoretical studies aiming to understand the in-context learning mechanism of
transformers, which however focused only on linear transformers. In this work,
we take the first step toward studying the learning dynamics of a one-layer
transformer with softmax attention trained via gradient descent in order to
in-context learn linear function classes. We consider a structured data model,
where each token is randomly sampled from a set of feature vectors in either
balanced or imbalanced fashion. For data with balanced features, we establish
the finite-time convergence guarantee with near-zero prediction error by
navigating our analysis over two phases of the training dynamics of the
attention map. More notably, for data with imbalanced features, we show that
the learning dynamics take a stage-wise convergence process, where the
transformer first converges to a near-zero prediction error for the query
tokens of dominant features, and then converges later to a near-zero prediction
error for the query tokens of under-represented features, respectively via one
and four training phases. Our proof features new techniques for analyzing the
competing strengths of two types of attention weights, the change of which
determines different training phases.
"
Large Language Model-Aware In-Context Learning for Code Generation,Jia Li,http://arxiv.org/pdf/2310.09748v1.pdf,2023-10-15,"['cs.se', 'cs.cl']",2310.09748v1.pdf,"  Large language models (LLMs) have shown impressive in-context learning (ICL)
ability in code generation. LLMs take a prompt consisting of requirement-code
examples and a new requirement as input, and output new programs. Existing studies have found that ICL is largely driven by the chosen examples, which has spurred research on example selection. However, existing approaches either select examples at random or consider only the textual similarity of requirements,
leading to sub-optimal performance. In this paper, we propose a novel
learning-based selection approach named LAIL (LLM-Aware In-context Learning)
for code generation. Given a candidate example, we exploit LLMs themselves to
estimate it by considering the generation probabilities of ground-truth
programs given a requirement and the example. We then label candidate examples
as positive or negative through the probability feedback. Based on the labeled
data, we import a contrastive learning objective to train an effective
retriever that acquires the preference of LLMs in code generation. We apply
LAIL to three LLMs and evaluate it on three representative datasets (i.e., MBJP, MBPP, and MBCPP). LAIL outperforms the state-of-the-art baselines by
11.58%, 6.89%, and 5.07% on CodeGen, and 4.38%, 2.85%, and 2.74% on GPT-3.5 in
terms of Pass@1, respectively.
"
Two-stage LLM Fine-tuning with Less Specialization and More  Generalization,Yihan Wang,http://arxiv.org/pdf/2211.00635v2.pdf,2022-11-01,"['cs.cl', 'cs.lg']",2211.00635v2.pdf,"  Pretrained large language models (LLMs) are general purpose problem solvers
applicable to a diverse set of tasks with prompts. They can be further improved
towards a specific task by fine-tuning on a specialized dataset. However,
fine-tuning usually makes the model narrowly specialized on this dataset with
reduced general in-context learning performances, which is undesirable whenever
the fine-tuned model needs to handle additional tasks where no fine-tuning data
is available. In this work, we first demonstrate that fine-tuning on a single
task indeed decreases LLMs' general in-context learning performance. We
discover one important cause of such forgetting, format specialization, where
the model overfits to the format of the fine-tuned task. We further show that
format specialization happens at the very beginning of fine-tuning. To solve
this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet
effective two-stage fine-tuning framework that reduces format specialization
and improves generalization. ProMoT offloads task-specific format learning into
additional and removable parameters by first doing prompt tuning and then
fine-tuning the model itself with this soft prompt attached. With experiments
on several fine-tuning tasks and 8 in-context evaluation tasks, we show that
ProMoT achieves comparable performance on fine-tuned tasks to standard
fine-tuning, but with much less loss of in-context learning performance across a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can
even enhance generalization on in-context learning tasks that are semantically
related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly
improves performance on other language pairs, and ProMoT on NLI improves
performance on summarization. Experiments also show that ProMoT can improve the
generalization performance of multi-task training.
"
On the Relation between Sensitivity and Accuracy in In-context Learning,Yanda Chen,http://arxiv.org/pdf/2209.07661v2.pdf,2022-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2209.07661v2.pdf,"  In-context learning (ICL) suffers from oversensitivity to the prompt, making
it unreliable in real-world scenarios. We study the sensitivity of ICL with
respect to multiple perturbation types. First, we find that label bias obscures
the true sensitivity, and therefore prior work may have significantly
underestimated ICL sensitivity. Second, we observe a strong negative
correlation between ICL sensitivity and accuracy: predictions sensitive to
perturbations are less likely to be correct. Motivated by these findings, we
propose \textsc{SenSel}, a few-shot selective prediction method that abstains
from sensitive predictions. Experiments on ten classification datasets show
that \textsc{SenSel} consistently outperforms two commonly used
confidence-based and entropy-based baselines on abstention decisions.
"
WinoDict: Probing language models for in-context word acquisition,Julian Martin Eisenschlos,http://arxiv.org/pdf/2209.12153v1.pdf,2022-09-25,"['cs.cl', 'cs.ai']",2209.12153v1.pdf,"  We introduce a new in-context learning paradigm to measure Large Language
Models' (LLMs) ability to learn novel words during inference. In particular, we
rewrite Winograd-style co-reference resolution problems by replacing the key
concept word with a synthetic but plausible word that the model must understand
to complete the task. Solving this task requires the model to make use of the
dictionary definition of the new word given in the prompt. This benchmark
addresses word acquisition, one important aspect of the diachronic degradation
known to afflict LLMs. As LLMs are frozen in time at the moment they are
trained, they are normally unable to reflect the way language changes over
time. We show that the accuracy of LLMs compared to the original Winograd tasks
decreases radically in our benchmark, thus identifying a limitation of current
models and providing a benchmark to measure future improvements in LLMs ability
to do in-context learning.
"
Data Curation Alone Can Stabilize In-context Learning,Ting-Yun Chang,http://arxiv.org/pdf/2212.10378v2.pdf,2022-12-20,['cs.cl'],2212.10378v2.pdf,"  In-context learning (ICL) enables large language models (LLMs) to perform new
tasks by prompting them with a sequence of training examples. However, it is
known that ICL is very sensitive to the choice of training examples: randomly
sampling examples from a training set leads to high variance in performance. In
this paper, we show that carefully curating a subset of training data greatly
stabilizes ICL performance without any other changes to the ICL algorithm
(e.g., prompt retrieval or calibration). We introduce two methods to choose
training subsets -- both score training examples individually, then select the
highest-scoring ones. CondAcc scores a training example by its average dev-set
ICL accuracy when combined with random training examples, while Datamodels
learns linear regressors that estimate how the presence of each training
example influences LLM outputs. Across five tasks and two LLMs, sampling from
stable subsets selected by CondAcc and Datamodels improves average accuracy
over sampling from the entire training set by 7.7% and 6.3%, respectively.
Surprisingly, the stable subset examples are not especially diverse in content
or low in perplexity, in contrast with other work suggesting that diversity and
perplexity are important when prompting LLMs.
"
A Survey on In-context Learning,Qingxiu Dong,http://arxiv.org/pdf/2301.00234v3.pdf,2022-12-31,"['cs.cl', 'cs.ai']",2301.00234v3.pdf,"  With the increasing ability of large language models (LLMs), in-context
learning (ICL) has become a new paradigm for natural language processing (NLP),
where LLMs make predictions only based on contexts augmented with a few
examples. It has been a new trend to explore ICL to evaluate and extrapolate
the ability of LLMs. In this paper, we aim to survey and summarize the progress
and challenges of ICL. We first present a formal definition of ICL and clarify
its correlation to related studies. Then, we organize and discuss advanced
techniques, including training strategies, demonstration designing strategies,
as well as related analysis. Finally, we discuss the challenges of ICL and
provide potential directions for further research. We hope that our work can
encourage more research on uncovering how ICL works and improving ICL.
"
Using In-Context Learning to Improve Dialogue Safety,Nicholas Meade,http://arxiv.org/pdf/2302.00871v3.pdf,2023-02-02,['cs.cl'],2302.00871v3.pdf,"  While large neural-based conversational models have become increasingly
proficient dialogue agents, recent work has highlighted safety issues with
these systems. For example, these systems can be goaded into generating toxic
content, which often perpetuates social biases or stereotypes. We investigate a
retrieval-based method for reducing bias and toxicity in responses from
chatbots. It uses in-context learning to steer a model towards safer
generations. Concretely, to generate a response to an unsafe dialogue context,
we retrieve demonstrations of safe responses to similar dialogue contexts. We
find our method performs competitively with strong baselines without requiring
training. For instance, using automatic evaluation, we find our best fine-tuned
baseline only generates safe responses to unsafe dialogue contexts from
DiaSafety 4.04% more than our approach. Finally, we also propose a re-ranking
procedure which can further improve response safeness.
"
Towards Few-Shot Identification of Morality Frames using In-Context  Learning,Shamik Roy,http://arxiv.org/pdf/2302.02029v1.pdf,2023-02-03,['cs.cl'],2302.02029v1.pdf,"  Data scarcity is a common problem in NLP, especially when the annotation
pertains to nuanced socio-linguistic concepts that require specialized
knowledge. As a result, few-shot identification of these concepts is desirable.
Few-shot in-context learning using pre-trained Large Language Models (LLMs) has
been recently applied successfully in many NLP tasks. In this paper, we study
few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et
al., 2021), using LLMs. Morality frames are a representation framework that
provides a holistic view of the moral sentiment expressed in text, identifying
the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of
granularity, the moral sentiment expressed towards the entities mentioned in
the text. Previous studies relied on human annotation to identify morality
frames in text which is expensive. In this paper, we propose prompting-based
approaches using pretrained Large Language Models for identification of
morality frames, relying only on few-shot exemplars. We compare our models'
performance with few-shot RoBERTa and found promising results.
"
OpenICL: An Open-Source Framework for In-context Learning,Zhenyu Wu,http://arxiv.org/pdf/2303.02913v1.pdf,2023-03-06,['cs.cl'],2303.02913v1.pdf,"  In recent years, In-context Learning (ICL) has gained increasing attention
and emerged as the new paradigm for large language model (LLM) evaluation.
Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained
models to unseen tasks without any parameter updates. However, the
implementation of ICL is sophisticated due to the diverse retrieval and
inference methods involved, as well as the varying pre-processing requirements
for different models, datasets, and tasks. A unified and flexible framework for
ICL is urgently needed to ease the implementation of the aforementioned
components. To facilitate ICL research, we introduce OpenICL, an open-source
toolkit for ICL and LLM evaluation. OpenICL is research-friendly, with a highly flexible architecture that lets users easily combine different components to
suit their needs. It also provides various state-of-the-art retrieval and
inference methods to streamline the process of adapting ICL to cutting-edge
research. The effectiveness of OpenICL has been validated on a wide range of
NLP tasks, including classification, QA, machine translation, and semantic
parsing. As a side product, we found OpenICL to be an efficient yet robust tool for LLM evaluation. OpenICL is released at
https://github.com/Shark-NLP/OpenICL
"
The Scope of In-Context Learning for the Extraction of Medical Temporal  Constraints,Parker Seegmiller,http://arxiv.org/pdf/2303.09366v2.pdf,2023-03-16,"['cs.cl', 'cs.lg']",2303.09366v2.pdf,"  Medications often impose temporal constraints on everyday patient activity.
Violations of such medical temporal constraints (MTCs) lead to a lack of
treatment adherence, in addition to poor health outcomes and increased
healthcare expenses. These MTCs are found in drug usage guidelines (DUGs) in
both patient education materials and clinical texts. Computationally
representing MTCs in DUGs will advance patient-centric healthcare applications
by helping to define safe patient activity patterns. We define a novel taxonomy
of MTCs found in DUGs and develop a novel context-free grammar (CFG) based
model to computationally represent MTCs from unstructured DUGs. Additionally,
we release three new datasets with a combined total of N = 836 DUGs labeled
with normalized MTCs. We develop an in-context learning (ICL) solution for
automatically extracting and normalizing MTCs found in DUGs, achieving an
average F1 score of 0.62 across all datasets. Finally, we rigorously
investigate ICL model performance against a baseline model, across datasets and
MTC types, and through in-depth error analysis.
"
How to Unleash the Power of Large Language Models for Few-shot Relation  Extraction?,Xin Xu,http://arxiv.org/pdf/2305.01555v4.pdf,2023-05-02,"['cs.cl', 'cs.ai', 'cs.db', 'cs.ir', 'cs.lg']",2305.01555v4.pdf,"  Scaling language models have revolutionized widespread NLP tasks, yet little
has been done to comprehensively explore few-shot relation extraction with large language models. In this paper, we investigate two principal methodologies, in-context
learning and data generation, for few-shot relation extraction via GPT-3.5
through exhaustive experiments. To enhance few-shot performance, we further
propose task-related instructions and schema-constrained data generation. We
observe that in-context learning can achieve performance on par with previous
prompt learning approaches, and data generation with the large language model
can boost previous solutions to obtain new state-of-the-art few-shot results on
four widely-studied relation extraction datasets. We hope our work can inspire
future research for the capabilities of large language models in few-shot
relation extraction. Code is available in
https://github.com/zjunlp/DeepKE/tree/main/example/llm.
"
GPT-RE: In-context Learning for Relation Extraction using Large Language  Models,Zhen Wan,http://arxiv.org/pdf/2305.02105v2.pdf,2023-05-03,['cs.cl'],2305.02105v2.pdf,"  In spite of the potential for ground-breaking achievements offered by large
language models (LLMs) (e.g., GPT-3), they still lag significantly behind
fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).
This is due to the two major shortcomings of LLMs in RE: (1) low relevance
regarding entity and relation in retrieved demonstrations for in-context
learning; and (2) the strong inclination to wrongly classify NULL examples into
other pre-defined labels.
  In this paper, we propose GPT-RE to bridge the gap between LLMs and
fully-supervised baselines. GPT-RE successfully addresses the aforementioned
issues by (1) incorporating task-specific entity representations in
demonstration retrieval; and (2) enriching the demonstrations with gold
label-induced reasoning logic. We evaluate GPT-RE on four widely-used RE
datasets, and observe that GPT-RE achieves improvements over not only existing
GPT-3 baselines, but also fully-supervised baselines. Specifically, GPT-RE
achieves SOTA performances on the Semeval and SciERC datasets, and competitive
performances on the TACRED and ACE05 datasets.
"
GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from  Doctor-Patient Conversations through Fine-tuning and In-context Learning,Xiangru Tang,http://arxiv.org/pdf/2305.05001v1.pdf,2023-05-08,['cs.cl'],2305.05001v1.pdf,"  This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared
task, encompassing both subtask A and subtask B. We approach the task as a
dialogue summarization problem and implement two distinct pipelines: (a) a
fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b)
few-shot in-context learning (ICL) using a large language model, GPT-4. Both
methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1
(deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421,
respectively. Additionally, we predict the associated section headers using
RoBERTa- and SciBERT-based classification models. Our team ranked fourth among all teams, where each team was allowed to submit three runs as part of their
submission. We also utilize expert annotations to demonstrate that the notes
generated through the ICL GPT-4 are better than all other baselines. The code
for our submission is available.
"
Can We Edit Factual Knowledge by In-Context Learning?,Ce Zheng,http://arxiv.org/pdf/2305.12740v1.pdf,2023-05-22,['cs.cl'],2305.12740v1.pdf,"  Previous studies have shown that large language models (LLMs) like GPTs store
massive factual knowledge in their parameters. However, the stored knowledge
could be false or out-dated. Traditional knowledge editing methods refine LLMs
via fine-tuning on texts containing specific knowledge. However, with the
increasing scales of LLMs, these gradient-based approaches bring large
computation costs. The trend of model-as-a-service also makes it impossible to
modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new
paradigm based on demonstration contexts without parameter updating, we explore
whether ICL can edit factual knowledge. To answer this question, we give a
comprehensive empirical study of ICL strategies. Experiments show that
in-context knowledge editing (IKE), without any gradient and parameter
updating, achieves a competitive success rate compared to gradient-based
methods on GPT-J (6B) but with much fewer side effects, including less
over-editing on similar but unrelated facts and less knowledge forgetting on
previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of billions of parameters, such as OPT-175B, which shows the scalability of our
method. The code is available at https://github.com/Zce1112zslx/IKE.
"
Concept-aware Training Improves In-context Learning Ability of Language  Models,Michal Štefánik,http://arxiv.org/pdf/2305.13775v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.13775v1.pdf,"  Many recent language models (LMs) of the Transformer family exhibit so-called
in-context learning (ICL) ability, manifested in the LMs' ability to modulate
their function by a task described in a natural language input. Previous work
curating these models assumes that ICL emerges from vast over-parametrization
or the scale of multi-task training. However, a complementary branch of recent
theoretical work attributes ICL emergence to specific properties of training
data and creates functional in-context learners in small-scale, synthetic
settings.
  Inspired by recent findings on data properties driving the emergence of ICL,
we propose a method to create LMs able to better utilize the in-context
information, by constructing training scenarios where it is beneficial for the
LM to capture the analogical reasoning concepts. We find that the data sampling of Concept-aware Training (CoAT) consistently improves models' reasoning
ability. As a result, the in-context learners trained with CoAT on only two
datasets of a single (QA) task perform comparably to larger models trained on
1600+ tasks.
"
Dr.ICL: Demonstration-Retrieved In-context Learning,Man Luo,http://arxiv.org/pdf/2305.14128v1.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14128v1.pdf,"  In-context learning (ICL), teaching a large language model (LLM) to perform a
task with few-shot demonstrations rather than adjusting the model parameters,
has emerged as a strong paradigm for using LLMs. While early studies primarily
used a fixed or random set of demonstrations for all test queries, recent
research suggests that retrieving semantically similar demonstrations to the
input from a pool of available demonstrations results in better performance.
This work expands the applicability of retrieval-based ICL approaches by
demonstrating that even simple word-overlap similarity measures such as BM25
outperform randomly selected demonstrations. Furthermore, we extend the success
of retrieval-based ICL to instruction-finetuned LLMs as well as
Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that
although a model has already seen the training data at training time,
retrieving demonstrations from the training data at test time yields better
results compared to using no demonstrations or random demonstrations. Last but
not least, we train a task-specific demonstration retriever that outperforms
off-the-shelf retrievers.
"
Label Words are Anchors: An Information Flow Perspective for  Understanding In-Context Learning,Lean Wang,http://arxiv.org/pdf/2305.14160v1.pdf,2023-05-23,"['cs.cl', 'cs.lg']",2305.14160v1.pdf,"  In-context learning (ICL) emerges as a promising capability of large language
models (LLMs) by providing them with demonstration examples to perform diverse
tasks. However, the underlying mechanism of how LLMs learn from the provided
context remains under-explored. In this paper, we investigate the working
mechanism of ICL through an information flow lens. Our findings reveal that
label words in the demonstration examples function as anchors: (1) semantic
information aggregates into label word representations during the shallow
computation layers' processing; (2) the consolidated information in label words
serves as a reference for LLMs' final predictions. Based on these insights, we
introduce an anchor re-weighting method to improve ICL performance, a
demonstration compression technique to expedite inference, and an analysis
framework for diagnosing ICL errors in GPT2-XL. The promising applications of
our findings again validate the uncovered ICL working mechanism and pave the
way for future studies.
"
Probing in Context: Toward Building Robust Classifiers via Probing Large  Language Models,Afra Amini,http://arxiv.org/pdf/2305.14171v2.pdf,2023-05-23,['cs.cl'],2305.14171v2.pdf,"  Large language models are able to learn new tasks in context, where they are
provided with instructions and a few annotated examples. However, the
effectiveness of in-context learning is dependent on the provided context, and
the performance on a downstream task can vary considerably, depending on the
instruction. Importantly, such dependency on the context can surface in
unpredictable ways, e.g., a seemingly more informative instruction might lead
to a worse performance. In this paper, we propose an alternative approach,
which we term in-context probing. Similar to in-context learning, we
contextualize the representation of the input with an instruction, but instead
of decoding the output prediction, we probe the contextualized representation
to predict the label. Through a series of experiments on a diverse set of
classification tasks, we show that in-context probing is significantly more
robust to changes in instructions. We further show that probing performs competitively with or better than finetuning and can be particularly helpful for building
classifiers on top of smaller models, and with only a hundred training
examples.
"
Coverage-based Example Selection for In-Context Learning,Shivanshu Gupta,http://arxiv.org/pdf/2305.14907v3.pdf,2023-05-24,['cs.cl'],2305.14907v3.pdf,"  In-context learning (ICL), the ability of large language models to perform
novel tasks by conditioning on a prompt with a few task examples, requires
these examples to be informative about the test instance. The standard approach
of independently ranking and selecting the most similar examples selects
redundant examples while omitting important information. In this work, we show
that BERTScore-Recall (BSR) selects better examples that demonstrate more of
the salient aspects, e.g. reasoning patterns, of the test input. We further
extend BSR and many standard metrics to easily optimizable set-level metrics,
giving still better coverage of those salient aspects. On 15 datasets spanning
6 tasks and with 7 diverse LLMs, we show that (1) BSR is the superior metric
for in-context example selection across the board, and (2) for compositional
tasks, set selection using Set-BSR outperforms independent ranking by up to 17
points on average and, despite being training-free, surpasses methods that
leverage task or LLM-specific training.
"
Transformers learn to implement preconditioned gradient descent for  in-context learning,Kwangjun Ahn,http://arxiv.org/pdf/2306.00297v1.pdf,2023-06-01,"['cs.lg', 'cs.ai']",2306.00297v1.pdf,"  Motivated by the striking ability of transformers for in-context learning,
several works demonstrate that transformers can implement algorithms like
gradient descent. By a careful construction of weights, these works show that
multiple layers of transformers are expressive enough to simulate gradient
descent iterations. Going beyond the question of expressivity, we ask: Can
transformers learn to implement such algorithms by training over random problem
instances? To our knowledge, we make the first theoretical progress toward this
question via analysis of the loss landscape for linear transformers trained
over random instances of linear regression. For a single attention layer, we
prove the global minimum of the training objective implements a single
iteration of preconditioned gradient descent. Notably, the preconditioning
matrix not only adapts to the input distribution but also to the variance
induced by data inadequacy. For a transformer with $k$ attention layers, we
prove certain critical points of the training objective implement $k$
iterations of preconditioned gradient descent. Our results call for future
theoretical studies on learning algorithms by training transformers.
"
In-Context Learning User Simulators for Task-Oriented Dialog Systems,Silvia Terragni,http://arxiv.org/pdf/2306.00774v1.pdf,2023-06-01,"['cs.cl', 'cs.lg']",2306.00774v1.pdf,"  This paper presents a novel application of large language models in user
simulation for task-oriented dialog systems, specifically focusing on an
in-context learning approach. By harnessing the power of these models, the
proposed approach generates diverse utterances based on user goals and limited
dialog examples. Unlike traditional simulators, this method eliminates the need
for labor-intensive rule definition or extensive annotated data, making it more
efficient and accessible. Additionally, an error analysis of the interaction
between the user simulator and dialog system uncovers common mistakes,
providing valuable insights into areas that require improvement. Our
implementation is available at
https://github.com/telepathylabsai/prompt-based-user-simulator.
"
Towards In-context Scene Understanding,Ivana Balažević,http://arxiv.org/pdf/2306.01667v2.pdf,2023-06-02,['cs.cv'],2306.01667v2.pdf,"  In-context learning -- the ability to configure a model's behavior with different prompts -- has revolutionized the field of
natural language processing, alleviating the need for task-specific models and
paving the way for generalist models capable of assisting with any query.
Computer vision, in contrast, has largely stayed in the former regime:
specialized decoders and finetuning protocols are generally required to perform
dense tasks such as semantic segmentation and depth estimation. In this work we
explore a simple mechanism for in-context learning of such scene understanding
tasks: nearest neighbor retrieval from a prompt of annotated features. We
propose a new pretraining protocol -- leveraging attention within and across images -- which yields representations particularly
useful in this regime. The resulting Hummingbird model, suitably prompted,
performs various scene understanding tasks without modification while
approaching the performance of specialists that have been finetuned for each
task. Moreover, Hummingbird can be configured to perform new tasks much more
efficiently than finetuned models, raising the possibility of scene
understanding in the interactive assistant regime.
"
Leveraging Large Language Models for Scalable Vector Graphics-Driven  Image Understanding,Mu Cai,http://arxiv.org/pdf/2306.06094v1.pdf,2023-06-09,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2306.06094v1.pdf,"  Recently, large language models (LLMs) have made significant advancements in
natural language understanding and generation. However, their potential in
computer vision remains largely unexplored. In this paper, we introduce a new,
exploratory approach that enables LLMs to process images using the Scalable
Vector Graphics (SVG) format. By leveraging the XML-based textual descriptions
of SVG representations instead of raster images, we aim to bridge the gap
between the visual and textual modalities, allowing LLMs to directly understand
and manipulate images without the need for parameterized visual components. Our
method facilitates simple image classification, generation, and in-context
learning using only LLM capabilities. We demonstrate the promise of our
approach across discriminative and generative tasks, highlighting its (i)
robustness against distribution shift, (ii) substantial improvements achieved
by tapping into the in-context learning abilities of LLMs, and (iii) image
understanding and generation capabilities with human guidance. Our code, data,
and models can be found here https://github.com/mu-cai/svg-llm.
"
Exploring the In-context Learning Ability of Large Language Model for  Biomedical Concept Linking,Qinyong Wang,http://arxiv.org/pdf/2307.01137v1.pdf,2023-07-03,"['cs.cl', 'cs.ai']",2307.01137v1.pdf,"  The biomedical field relies heavily on concept linking in various areas such
as literature mining, graph alignment, information retrieval,
question-answering, data, and knowledge integration. Although large language
models (LLMs) have made significant strides in many natural language processing
tasks, their effectiveness in biomedical concept mapping is yet to be fully
explored. This research investigates a method that exploits the in-context
learning (ICL) capabilities of large models for biomedical concept linking. The
proposed approach adopts a two-stage retrieve-and-rank framework. Initially,
biomedical concepts are embedded using language models, and then embedding
similarity is utilized to retrieve the top candidates. These candidates'
contextual information is subsequently incorporated into the prompt and
processed by a large language model to re-rank the concepts. This approach
achieved an accuracy of 90.% in BC5CDR disease entity normalization and 94.7%
in chemical entity normalization, exhibiting a competitive performance relative
to supervised learning methods. Further, it showed a significant improvement,
with an over 20-point absolute increase in F1 score on an oncology matching
dataset. Extensive qualitative assessments were conducted, and the benefits and
potential shortcomings of using large language models within the biomedical
domain were discussed.
"
Learning to Retrieve In-Context Examples for Large Language Models,Liang Wang,http://arxiv.org/pdf/2307.07164v1.pdf,2023-07-14,"['cs.cl', 'cs.ir']",2307.07164v1.pdf,"  Large language models (LLMs) have demonstrated their ability to learn
in-context, allowing them to perform various tasks based on a few input-output
examples. However, the effectiveness of in-context learning is heavily reliant
on the quality of the selected examples. In this paper, we propose a novel
framework to iteratively train dense retrievers that can identify high-quality
in-context examples for LLMs. Our framework initially trains a reward model
based on LLM feedback to evaluate the quality of candidate examples, followed
by knowledge distillation to train a bi-encoder based dense retriever. Our
experiments on a suite of 30 tasks demonstrate that our framework significantly
enhances in-context learning performance. Furthermore, we show the
generalization ability of our framework to unseen tasks during training. An
in-depth analysis reveals that our model improves performance by retrieving
examples with similar patterns, and the gains are consistent across LLMs of
varying sizes.
"
In-Context Learning Learns Label Relationships but Is Not Conventional  Learning,Jannik Kossen,http://arxiv.org/pdf/2307.12375v3.pdf,2023-07-23,"['cs.cl', 'cs.ai', 'cs.lg']",2307.12375v3.pdf,"  The predictions of Large Language Models (LLMs) on downstream tasks often
improve significantly when including examples of the input--label relationship
in the context. However, there is currently no consensus about how this
in-context learning (ICL) ability of LLMs works. For example, while Xie et al.
(2021) liken ICL to a general-purpose learning algorithm, Min et al. (2022)
argue ICL does not even learn label relationships from in-context examples. In
this paper, we provide novel insights into how ICL leverages label information,
revealing both capabilities and limitations. To ensure we obtain a
comprehensive picture of ICL behavior, we study probabilistic aspects of ICL
predictions and thoroughly examine the dynamics of ICL as more examples are
provided. Our experiments show that ICL predictions almost always depend on
in-context labels, and that ICL can learn truly novel tasks in-context.
However, we also find that ICL struggles to fully overcome prediction
preferences acquired from pre-training data, and, further, that ICL does not
consider all in-context information equally.
"
Investigating the Learning Behaviour of In-context Learning: A  Comparison with Supervised Learning,Xindi Wang,http://arxiv.org/pdf/2307.15411v2.pdf,2023-07-28,['cs.cl'],2307.15411v2.pdf,"  Large language models (LLMs) have shown remarkable capacity for in-context
learning (ICL), where a new task is learned from just a few training examples without being explicitly trained for it. However, despite the success of
LLMs, there has been little understanding of how ICL learns the knowledge from
the given prompts. In this paper, to make progress toward understanding the
learning behaviour of ICL, we train the same LLMs with the same demonstration
examples via ICL and supervised learning (SL), respectively, and investigate
their performance under label perturbations (i.e., noisy labels and label
imbalance) on a range of classification tasks. First, via extensive
experiments, we find that gold labels have significant impacts on the
downstream in-context performance, especially for large language models;
however, imbalanced labels matter little to ICL across all model sizes. Second,
when comparing with SL, we show empirically that ICL is less sensitive to label
perturbations than SL, and ICL gradually attains comparable performance to SL
as the model size increases.
"
Exploring Automated Distractor and Feedback Generation for Math  Multiple-choice Questions via In-context Learning,Hunter McNichols,http://arxiv.org/pdf/2308.03234v1.pdf,2023-08-07,['cs.cl'],2308.03234v1.pdf,"  Multiple-choice questions (MCQs) are ubiquitous in almost all levels of
education since they are easy to administer, grade, and are a reliable format
in both assessments and practices. An important aspect of MCQs is the
distractors, i.e., incorrect options that are designed to target specific
misconceptions or insufficient knowledge among students. To date, the task of
crafting high-quality distractors has largely remained a labor-intensive
process for teachers and learning content designers, which has limited
scalability. In this work, we explore the task of automated distractor and
corresponding feedback message generation in math MCQs using large language
models. We establish a formulation of these two tasks and propose a simple,
in-context learning-based solution. Moreover, we explore using two non-standard
metrics to evaluate the quality of the generated distractors and feedback
messages. We conduct extensive experiments on these tasks using a real-world
MCQ dataset that contains student response information. Our findings suggest
that there is a lot of room for improvement in automated distractor and
feedback generation. We also outline several directions for future work.
"
CausalLM is not optimal for in-context learning,Nan Ding,http://arxiv.org/pdf/2308.06912v2.pdf,2023-08-14,"['cs.lg', 'cs.cl']",2308.06912v2.pdf,"  Recent empirical evidence indicates that transformer based in-context
learning performs better when using a prefix language model (prefixLM), in
which in-context samples can all attend to each other, compared to causal
language models (causalLM), which use auto-regressive attention that prohibits in-context samples from attending to future samples. While this result is intuitive,
it is not understood from a theoretical perspective. In this paper we take a
theoretical approach and analyze the convergence behavior of prefixLM and
causalLM under a certain parameter construction. Our analysis shows that both
LM types converge to their stationary points at a linear rate, but that while
prefixLM converges to the optimal solution of linear regression, causalLM
convergence dynamics follows that of an online gradient descent algorithm,
which is not guaranteed to be optimal even as the number of samples grows
infinitely. We supplement our theoretical claims with empirical experiments
over synthetic and real tasks and using various types of transformers. Our
experiments verify that causalLM consistently underperforms prefixLM in all
settings.
"
Exploring Demonstration Ensembling for In-context Learning,Muhammad Khalifa,http://arxiv.org/pdf/2308.08780v2.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.08780v2.pdf,"  In-context learning (ICL) operates by showing language models (LMs) examples
of input-output pairs for a given task, i.e., demonstrations. The standard
approach for ICL is to prompt the LM with concatenated demonstrations followed
by the test input. This approach suffers from some issues. First, concatenation
offers almost no control over the contribution of each demo to the model
prediction. This can be sub-optimal when some demonstrations are irrelevant to
the test example. Second, due to the input length limit of some transformer
models, it might be infeasible to fit many examples into the context,
especially when dealing with long-input tasks. In this work, we explore
Demonstration Ensembling (DENSE) as an alternative to simple concatenation.
DENSE predicts outputs using subsets (i.e., buckets) of the demonstrations and
then combines the output probabilities resulting from each subset to produce
the final prediction. We study different ensembling methods using GPT-j and
experiment on 12 language tasks. Our experiments show weighted max ensembling
to outperform vanilla concatenation by as large as 2.4 average points. Code
available at https://github.com/mukhal/icl-ensembling.
"
Context is Environment,Sharut Gupta,http://arxiv.org/pdf/2309.09888v2.pdf,2023-09-18,"['cs.lg', 'cs.ai', 'stat.ml']",2309.09888v2.pdf,"  Two lines of work are taking the central stage in AI research. On the one
hand, the community is making increasing efforts to build models that discard
spurious correlations and generalize better in novel test environments.
Unfortunately, the bitter lesson so far is that no proposal convincingly
outperforms a simple empirical risk minimization baseline. On the other hand,
large language models (LLMs) have erupted as algorithms able to learn
in-context, generalizing on-the-fly to eclectic contextual circumstances that
users enforce by means of prompting. In this paper, we argue that context is
environment, and posit that in-context learning holds the key to better domain
generalization. Via extensive theory and experiments, we show that paying
attention to context -- unlabeled examples as they arrive -- allows our proposed In-Context Risk
Minimization (ICRM) algorithm to zoom-in on the test environment risk
minimizer, leading to significant out-of-distribution performance improvements.
From all of this, two messages are worth taking home. Researchers in domain
generalization should consider environment as context, and harness the adaptive
power of in-context learning. Researchers in LLMs should consider context as
environment, to better structure data towards generalization.
"
"Prompt, Condition, and Generate: Classification of Unsupported Claims  with In-Context Learning",Peter Ebert Christensen,http://arxiv.org/pdf/2309.10359v1.pdf,2023-09-19,['cs.cl'],2309.10359v1.pdf,"  Unsupported and unfalsifiable claims we encounter in our daily lives can
influence our view of the world. Characterizing, summarizing, and -- more
generally -- making sense of such claims, however, can be challenging. In this
work, we focus on fine-grained debate topics and formulate a new task of
distilling, from such claims, a countable set of narratives. We present a
crowdsourced dataset of 12 controversial topics, comprising more than 120k
arguments, claims, and comments from heterogeneous sources, each annotated with
a narrative label. We further investigate how large language models (LLMs) can
be used to synthesise claims using In-Context Learning. We find that generated
claims with supported evidence can be used to improve the performance of
narrative classification models and, additionally, that the same model can
infer the stance and aspect using a few training examples. Such a model can be
useful in applications that rely on narratives, e.g., fact-checking.
"
In-Context Learning for Text Classification with Many Labels,Aristides Milios,http://arxiv.org/pdf/2309.10954v1.pdf,2023-09-19,"['cs.cl', 'cs.lg']",2309.10954v1.pdf,"  In-context learning (ICL) using large language models for tasks with many
labels is challenging due to the limited context window, which makes it
difficult to fit a sufficient number of examples in the prompt. In this paper,
we use a pre-trained dense retrieval model to bypass this limitation, giving
the model only a partial view of the full label space for each inference call.
Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the art
performance in few-shot settings for three common intent classification
datasets, with no finetuning. We also surpass fine-tuned performance on
fine-grained sentiment classification in certain cases. We analyze the
performance across number of in-context examples and different model scales,
showing that larger models are necessary to effectively and consistently make
use of larger context lengths for ICL. By running several ablations, we analyze
the model's use of: a) the similarity of the in-context examples to the current
input, b) the semantic content of the class names, and c) the correct
correspondence between examples and labels. We demonstrate that all three are
needed to varying degrees depending on the domain, contrary to certain recent
works.
"
Privacy-Preserving In-Context Learning with Differentially Private  Few-Shot Generation,Xinyu Tang,http://arxiv.org/pdf/2309.11765v1.pdf,2023-09-21,"['cs.lg', 'cs.cr']",2309.11765v1.pdf,"  We study the problem of in-context learning (ICL) with large language models
(LLMs) on private datasets. This scenario poses privacy risks, as LLMs may leak
or regurgitate the private examples demonstrated in the prompt. We propose a
novel algorithm that generates synthetic few-shot demonstrations from the
private dataset with formal differential privacy (DP) guarantees, and show
empirically that it can achieve effective ICL. We conduct extensive experiments
on standard benchmarks and compare our algorithm with non-private ICL and
zero-shot solutions. Our results demonstrate that our algorithm can achieve
competitive performance with strong privacy levels. These results open up new
possibilities for ICL with privacy protection for a broad range of
applications.
"
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text  Hybrid Question Answering,Tongxu Luo,http://arxiv.org/pdf/2309.12669v1.pdf,2023-09-22,['cs.cl'],2309.12669v1.pdf,"  Answering numerical questions over hybrid contents from the given tables and
text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs)
have gained significant attention in the NLP community. With the emergence of
large language models, In-Context Learning and Chain-of-Thought prompting have
become two particularly popular research topics in this field. In this paper,
we introduce a new prompting strategy called Hybrid prompt strategy and
Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt
the model to develop the ability of retrieval thinking when dealing with hybrid
data. Our method achieves superior performance compared to the fully-supervised
SOTA on the MultiHiertt dataset in the few-shot setting.
"
ALLURE: Auditing and Improving LLM-based Evaluation of Text using  Iterative In-Context-Learning,Hosein Hasanbeig,http://arxiv.org/pdf/2309.13701v2.pdf,2023-09-24,"['cs.cl', 'cs.ai', 'cs.hc']",2309.13701v2.pdf,"  From grading papers to summarizing medical documents, large language models
(LLMs) are evermore used for evaluation of text generated by humans and AI
alike. However, despite their extensive utility, LLMs exhibit distinct failure
modes, necessitating a thorough audit and improvement of their text evaluation
capabilities. Here we introduce ALLURE, a systematic approach to Auditing Large
Language Models Understanding and Reasoning Errors. ALLURE involves comparing
LLM-generated evaluations with annotated data, and iteratively incorporating
instances of significant deviation into the evaluator, which leverages
in-context learning (ICL) to enhance and improve robust evaluation of text by
LLMs. Through this iterative process, we refine the performance of the
evaluator LLM, ultimately reducing reliance on human annotators in the
evaluation process. We anticipate ALLURE to serve diverse applications of LLMs
in various domains related to evaluation of textual data, such as medical
summarization, education, and productivity.
"
Dynamic Demonstrations Controller for In-Context Learning,Fei Zhao,http://arxiv.org/pdf/2310.00385v1.pdf,2023-09-30,"['cs.cl', 'cs.ai']",2310.00385v1.pdf,"  In-Context Learning (ICL) is a new paradigm for natural language processing
(NLP), where a large language model (LLM) observes a small number of
demonstrations and a test instance as its input, and directly makes predictions
without updating model parameters. Previous studies have revealed that ICL is
sensitive to the selection and the ordering of demonstrations. However, there
are few studies regarding the impact of the demonstration number on the ICL
performance within a limited input length of LLM, because it is commonly
believed that the number of demonstrations is positively correlated with model
performance. In this paper, we find that this conclusion does not always hold true.
Through pilot experiments, we discover that increasing the number of
demonstrations does not necessarily lead to improved performance. Building upon
this insight, we propose a Dynamic Demonstrations Controller (D$^2$Controller),
which can improve the ICL performance by adjusting the number of demonstrations
dynamically. The experimental results show that D$^2$Controller yields a 5.4%
relative improvement on eight different sizes of LLMs across ten datasets.
Moreover, we extend our method to previous ICL models and achieve
competitive results.
"
The Cost of Down-Scaling Language Models: Fact Recall Deteriorates  before In-Context Learning,Tian Jin,http://arxiv.org/pdf/2310.04680v1.pdf,2023-10-07,"['cs.cl', 'cs.ai', 'cs.lg']",2310.04680v1.pdf,"  How does scaling the number of parameters in large language models (LLMs)
affect their core capabilities? We study two natural scaling techniques --
weight pruning and simply training a smaller or larger model, which we refer to
as dense scaling -- and their effects on two core capabilities of LLMs: (a)
recalling facts presented during pre-training and (b) processing information
presented in-context during inference. By curating a suite of tasks that help
disentangle these two capabilities, we find a striking difference in how these
two abilities evolve due to scaling. Reducing the model size by more than 30\%
(via either scaling approach) significantly decreases the ability to recall
facts seen in pre-training. Yet, a 60--70\% reduction largely preserves the
various ways the model can process in-context information, ranging from
retrieving answers from a long context to learning parameterized functions from
in-context exemplars. The fact that both dense scaling and weight pruning
exhibit this behavior suggests that scaling model size has an inherently
disparate effect on fact recall and in-context learning.
"
Not All Demonstration Examples are Equally Beneficial: Reweighting  Demonstration Examples for In-Context Learning,Zhe Yang,http://arxiv.org/pdf/2310.08309v1.pdf,2023-10-12,['cs.cl'],2310.08309v1.pdf,"  Large Language Models (LLMs) have recently gained the In-Context Learning
(ICL) ability with the models scaling up, allowing them to quickly adapt to
downstream tasks with only a few demonstration examples prepended in the input
sequence. Nonetheless, the current practice of ICL treats all demonstration
examples equally, which still warrants improvement, as the quality of examples
is usually uneven. In this paper, we investigate how to determine approximately
optimal weights for demonstration examples and how to apply them during ICL. To
assess the quality of weights in the absence of additional validation data, we
design a masked self-prediction (MSP) score that exhibits a strong correlation
with the final ICL performance. To expedite the weight-searching process, we
discretize the continuous weight space and adopt beam search. With
approximately optimal weights obtained, we further propose two strategies to
apply them to demonstrations at different model positions. Experimental results
on 8 text classification tasks show that our approach outperforms conventional
ICL by a large margin. Our code is publicly available at
https://github.com/Zhe-Young/WICL.
"
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear  Regression?,Jingfeng Wu,http://arxiv.org/pdf/2310.08391v1.pdf,2023-10-12,"['stat.ml', 'cs.lg']",2310.08391v1.pdf,"  Transformers pretrained on diverse tasks exhibit remarkable in-context
learning (ICL) capabilities, enabling them to solve unseen tasks solely based
on input contexts without adjusting model parameters. In this paper, we study
ICL in one of its simplest setups: pretraining a linearly parameterized
single-layer linear attention model for linear regression with a Gaussian
prior. We establish a statistical task complexity bound for the attention model
pretraining, showing that effective pretraining only requires a small number of
independent tasks. Furthermore, we prove that the pretrained model closely
matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by
achieving nearly Bayes optimal risk on unseen tasks under a fixed context
length. These theoretical findings complement prior experimental research and
shed light on the statistical foundations of ICL.
"
Generative Calibration for In-context Learning,Zhongtao Jiang,http://arxiv.org/pdf/2310.10266v1.pdf,2023-10-16,['cs.cl'],2310.10266v1.pdf,"  As one of the most exciting features of large language models (LLMs),
in-context learning is a mixed blessing. While it allows users to
fast-prototype a task solver with only a few training examples, the performance
is generally sensitive to various configurations of the prompt such as the
choice or order of the training examples. In this paper, we for the first time
theoretically and empirically identify that such a paradox is mainly due to the
label shift of the in-context model to the data distribution, in which LLMs
shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$.
With this understanding, we can simply calibrate the in-context predictive
distribution by adjusting the label marginal, which is estimated via
Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We
call our approach generative calibration. We conduct exhaustive experiments
with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, and
generally find that the proposed method greatly and consistently outperforms
ICL as well as state-of-the-art calibration methods, by up to 27% absolute
in macro-F1. Meanwhile, the proposed method is also stable under different
prompt configurations.
"
"Last One Standing: A Comparative Analysis of Security and Privacy of  Soft Prompt Tuning, LoRA, and In-Context Learning",Rui Wen,http://arxiv.org/pdf/2310.11397v1.pdf,2023-10-17,"['cs.cr', 'cs.lg']",2310.11397v1.pdf,"  Large Language Models (LLMs) are powerful tools for natural language
processing, enabling novel applications and user experiences. However, to
achieve optimal performance, LLMs often require adaptation with private data,
which poses privacy and security challenges. Several techniques have been
proposed to adapt LLMs with private data, such as Low-Rank Adaptation (LoRA),
Soft Prompt Tuning (SPT), and In-Context Learning (ICL), but their comparative
privacy and security properties have not been systematically investigated. In
this work, we fill this gap by evaluating the robustness of LoRA, SPT, and ICL
against three types of well-established attacks: membership inference, which
exposes data leakage (privacy); backdoor, which injects malicious behavior
(security); and model stealing, which can violate intellectual property
(privacy and security). Our results show that there is no silver bullet for
privacy and security in LLM adaptation and each technique has different
strengths and weaknesses.
"
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language  Models to Generalize to Novel Interpretations,Arkil Patel,http://arxiv.org/pdf/2310.11634v1.pdf,2023-10-18,['cs.cl'],2310.11634v1.pdf,"  Humans possess a remarkable ability to assign novel interpretations to
linguistic expressions, enabling them to learn new words and understand
community-specific connotations. However, Large Language Models (LLMs) have a
knowledge cutoff and are costly to finetune repeatedly. Therefore, it is
crucial for LLMs to learn novel interpretations in-context. In this paper, we
systematically analyse the ability of LLMs to acquire novel interpretations
using in-context learning. To facilitate our study, we introduce MAGNIFICo, an
evaluation suite implemented within a text-to-SQL semantic parsing framework
that incorporates diverse tokens and prompt settings to simulate real-world
complexity. Experimental results on MAGNIFICo demonstrate that LLMs exhibit a
surprisingly robust capacity for comprehending novel interpretations from
natural language descriptions as well as from discussions within long
conversations. Nevertheless, our findings also highlight the need for further
improvements, particularly when interpreting unfamiliar words or when composing
multiple novel interpretations simultaneously in the same example.
Additionally, our analysis uncovers the semantic predispositions in LLMs and
reveals the impact of recency bias for information presented in long contexts.
"
In-context Learning with Transformer Is Really Equivalent to a  Contrastive Learning Pattern,Ruifeng Ren,http://arxiv.org/pdf/2310.13220v1.pdf,2023-10-20,['cs.lg'],2310.13220v1.pdf,"  Pre-trained large language models based on Transformers have demonstrated
amazing in-context learning (ICL) abilities. Given several demonstration
examples, the models can implement new tasks without any parameter updates.
However, it is still an open question to understand the mechanism of ICL. In
this paper, we interpret the inference process of ICL as a gradient descent
process in a contrastive learning pattern. Firstly, leveraging kernel methods,
we establish the relationship between gradient descent and self-attention
mechanism under the commonly used softmax attention setting instead of the linear
attention setting. Then, we analyze the corresponding gradient descent process
of ICL from the perspective of contrastive learning without negative samples
and discuss possible improvements of this contrastive learning pattern, based
on which the self-attention layer can be further modified. Finally, we design
experiments to support our opinions. To the best of our knowledge, our work is
the first to provide an understanding of ICL from the perspective of
contrastive learning and has the potential to facilitate future model design by
referring to related works on contrastive learning.
"
In-Context Learning Creates Task Vectors,Roee Hendel,http://arxiv.org/pdf/2310.15916v1.pdf,2023-10-24,['cs.cl'],2310.15916v1.pdf,"  In-context learning (ICL) in Large Language Models (LLMs) has emerged as a
powerful new learning paradigm. However, its underlying mechanism is still not
well understood. In particular, it is challenging to map it to the ""standard""
machine learning framework, where one uses a training set $S$ to find a
best-fitting function $f(x)$ in some hypothesis class. Here we make progress on
this problem by showing that the functions learned by ICL often have a very
simple structure: they correspond to the transformer LLM whose only inputs are
the query $x$ and a single ""task vector"" calculated from the training set.
Thus, ICL can be seen as compressing $S$ into a single task vector
$\boldsymbol{\theta}(S)$ and then using this task vector to modulate the
transformer to produce the output. We support the above claim via comprehensive
experiments across a range of models and tasks.
"
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and  Limitations,Aleksandar Petrov,http://arxiv.org/pdf/2310.19698v1.pdf,2023-10-30,"['cs.lg', 'cs.cl']",2310.19698v1.pdf,"  Context-based fine-tuning methods, including prompting, in-context learning,
soft prompting (also known as prompt tuning), and prefix-tuning, have gained
popularity due to their ability to often match the performance of full
fine-tuning with a fraction of the parameters. Despite their empirical
successes, there is little theoretical understanding of how these techniques
influence the internal computation of the model and their expressiveness
limitations. We show that despite the continuous embedding space being more
expressive than the discrete token space, soft-prompting and prefix-tuning are
strictly less expressive than full fine-tuning, even with the same number of
learnable parameters. Concretely, context-based fine-tuning cannot change the
relative attention pattern over the content and can only bias the outputs of an
attention layer in a fixed direction. This suggests that while techniques like
prompting, in-context learning, soft prompting, and prefix-tuning can
effectively elicit skills present in the pretrained model, they cannot learn
novel tasks that require new attention patterns.
"
Which Examples to Annotate for In-Context Learning? Towards Effective  and Efficient Selection,Costas Mavromatis,http://arxiv.org/pdf/2310.20046v1.pdf,2023-10-30,['cs.cl'],2310.20046v1.pdf,"  Large Language Models (LLMs) can adapt to new tasks via in-context learning
(ICL). ICL is efficient as it does not require any parameter updates to the
trained LLM, but only few annotated examples as input for the LLM. In this
work, we investigate an active learning approach for ICL, where there is a
limited budget for annotating examples. We propose a model-adaptive
optimization-free algorithm, termed AdaICL, which identifies examples that the
model is uncertain about, and performs semantic diversity-based example
selection. Diversity-based sampling improves overall effectiveness, while
uncertainty sampling improves budget efficiency and helps the LLM learn new
information. Moreover, AdaICL poses its sampling strategy as a Maximum Coverage
problem, that dynamically adapts based on the model's feedback and can be
approximately solved via greedy algorithms. Extensive experiments on nine
datasets and seven LLMs show that AdaICL improves performance by 4.4% accuracy
points over SOTA (7.7% relative improvement), is up to 3x more budget-efficient
than performing annotations uniformly at random, while it outperforms SOTA with
2x fewer ICL examples.
"
DAIL: Data Augmentation for In-Context Learning via Self-Paraphrase,Dawei Li,http://arxiv.org/pdf/2311.03319v1.pdf,2023-11-06,"['cs.cl', 'cs.ai']",2311.03319v1.pdf,"  In-Context Learning (ICL) combined with pre-trained large language models has
achieved promising results on various NLP tasks. However, ICL requires
high-quality annotated demonstrations which might not be available in
real-world scenarios. To overcome this limitation, we propose \textbf{D}ata
\textbf{A}ugmentation for \textbf{I}n-Context \textbf{L}earning
(\textbf{DAIL}). DAIL leverages the intuition that large language models are
more familiar with the content generated by themselves. It first utilizes the
language model to generate paraphrases of the test sample and employs majority
voting to determine the final result based on individual predictions. Our
extensive empirical evaluation shows that DAIL outperforms the standard ICL
method and other ensemble-based methods in the low-resource scenario.
Additionally, we explore the use of voting consistency as a confidence score of
the model when the logits of predictions are inaccessible. We believe our work
will stimulate further research on ICL in low-resource settings.
"
In-Context Exemplars as Clues to Retrieving from Large Associative  Memory,Jiachen Zhao,http://arxiv.org/pdf/2311.03498v1.pdf,2023-11-06,"['cs.cl', 'cs.lg']",2311.03498v1.pdf,"  Recently, large language models (LLMs) have made remarkable progress in
natural language processing. The most representative ability of LLMs is
in-context learning (ICL), which enables LLMs to learn patterns from in-context
exemplars without training. The performance of ICL greatly depends on the
exemplars used. However, how to choose exemplars remains unclear due to the
lack of understanding of how in-context learning works. In this paper, we
present a novel perspective on ICL by conceptualizing it as contextual
retrieval from a model of associative memory. We establish a theoretical
framework of ICL based on Hopfield Networks. Based on our framework, we look
into how in-context exemplars influence the performance of ICL and propose more
efficient active exemplar selection. Our study sheds new light on the mechanism
of ICL by connecting it to memory retrieval, with potential implications for
advancing the understanding of LLMs.
"
Instruct Me More! Random Prompting for Visual In-Context Learning,Jiahao Zhang,http://arxiv.org/pdf/2311.03648v1.pdf,2023-11-07,['cs.cv'],2311.03648v1.pdf,"  Large-scale models trained on extensive datasets have emerged as the
preferred approach due to their high generalizability across various tasks.
In-context learning (ICL), a popular strategy in natural language processing,
uses such models for different tasks by providing instructive prompts but
without updating model parameters. This idea is now being explored in computer
vision, where an input-output image pair (called an in-context pair) is
supplied to the model with a query image as a prompt to exemplify the desired
output. The efficacy of visual ICL often depends on the quality of the prompts.
We thus introduce a method coined Instruct Me More (InMeMo), which augments
in-context pairs with a learnable perturbation (prompt), to explore its
potential. Our experiments on mainstream tasks reveal that InMeMo surpasses the
current state-of-the-art performance. Specifically, compared to the baseline
without learnable prompt, InMeMo boosts mIoU scores by 7.35 and 15.13 for
foreground segmentation and single object detection tasks, respectively. Our
findings suggest that InMeMo offers a versatile and efficient way to enhance
the performance of visual ICL with lightweight training. Code is available at
https://github.com/Jackieam/InMeMo.
"
Selective Annotation Makes Language Models Better Few-Shot Learners,Hongjin Su,http://arxiv.org/pdf/2209.01975v1.pdf,2022-09-05,['cs.cl'],2209.01975v1.pdf,"  Many recent approaches to natural language tasks are built on the remarkable
abilities of large language models. Large language models can perform
in-context learning, where they learn a new task from a few task
demonstrations, without any parameter updates. This work examines the
implications of in-context learning for the creation of datasets for new
natural language tasks. Departing from recent in-context learning methods, we
formulate an annotation-efficient, two-step framework: selective annotation
that chooses a pool of examples to annotate from unlabeled data in advance,
followed by prompt retrieval that retrieves task examples from the annotated
pool at test time. Based on this framework, we propose an unsupervised,
graph-based selective annotation method, vote-k, to select diverse,
representative examples to annotate. Extensive experiments on 10 datasets
(covering classification, commonsense reasoning, dialogue, and text/code
generation) demonstrate that our selective annotation method improves the task
performance by a large margin. On average, vote-k achieves a 12.9%/11.4%
relative gain under an annotation budget of 18/100, as compared to randomly
selecting examples to annotate. Compared to state-of-the-art supervised
finetuning approaches, it yields similar performance with 10-100x less
annotation cost across 10 tasks. We further analyze the effectiveness of our
framework in various scenarios: language models with varying sizes, alternative
selective annotation methods, and cases where there is a test data domain
shift. We hope that our studies will serve as a basis for data annotations as
large language models are increasingly applied to new tasks. Our code is
available at https://github.com/HKUNLP/icl-selective-annotation.
"
In-context Example Selection with Influences,Tai Nguyen,http://arxiv.org/pdf/2302.11042v2.pdf,2023-02-21,"['cs.cl', 'cs.lg']",2302.11042v2.pdf,"  In-context learning (ICL) is a powerful paradigm that has emerged from large language
models (LLMs). Despite its promises, ICL performance is known to be highly
sensitive to input examples. In this work, we use $\textit{in-context
influences}$ to analyze few-shot ICL performance directly from the in-context
examples. Our proposed influence-based example selection method can identify
both positive and negative examples, outperforming several baselines when
evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$
performance gap between using the most negative in-context examples compared to
the most positive. In a case study, we apply our influence-based framework to
quantify the phenomena of recency bias in example ordering for few-shot ICL.
"
In-Context Alignment: Chat with Vanilla Language Models Before  Fine-Tuning,Xiaochuang Han,http://arxiv.org/pdf/2308.04275v1.pdf,2023-08-08,"['cs.cl', 'cs.ai', 'cs.lg']",2308.04275v1.pdf,"  In this note, we explore inference-time alignment through in-context
learning. We consider a vanilla pretrained language model Llama-2 before any
fine-tuning and retrieve an average of 9 demonstration alignment examples when
the model is prompted to follow chat-style instructions. Compared to direct
prompting, the in-context alignment without changing model weights leads to a
7x increase in win-rate w.r.t. the text-davinci-003 model from OpenAI, making
the vanilla language model comparable to strong baselines with alignment
fine-tuning.
"
"Tabular Representation, Noisy Operators, and Impacts on Table Structure  Understanding Tasks in LLMs",Ananya Singha,http://arxiv.org/pdf/2310.10358v1.pdf,2023-10-16,"['cs.cl', 'cs.ai']",2310.10358v1.pdf,"  Large language models (LLMs) are increasingly applied for tabular tasks using
in-context learning. The prompt representation for a table may play a role in
the LLM's ability to process the table. Inspired by prior work, we generate a
collection of self-supervised structural tasks (e.g. navigate to a cell and
row; transpose the table) and evaluate the performance differences when using 8
formats. In contrast to past work, we introduce 8 noise operations inspired by
real-world messy data and adversarial inputs, and show that such operations can
impact LLM performance across formats for different structural understanding
tasks.
"
GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19  Dataset,Ruibo Chen,http://arxiv.org/pdf/2310.18498v1.pdf,2023-10-27,"['eess.iv', 'cs.cv', 'cs.lg']",2310.18498v1.pdf,"  This technical report delves into the application of GPT-4 Vision (GPT-4V) in
the nuanced realm of COVID-19 image classification, leveraging the
transformative potential of in-context learning to enhance diagnostic
processes.
"
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than  In-Context Learning,Haokun Liu,http://arxiv.org/pdf/2205.05638v2.pdf,2022-05-11,"['cs.lg', 'cs.ai', 'cs.cl']",2205.05638v2.pdf,"  Few-shot in-context learning (ICL) enables pre-trained language models to
perform a previously-unseen task without any gradient-based training by feeding
a small number of training examples as part of the input. ICL incurs
substantial computational, memory, and storage costs because it involves
processing all of the training examples every time a prediction is made.
Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning,
sparse update methods, etc.) offers an alternative paradigm where a small set
of parameters are trained to enable a model to perform the new task. In this
paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the
latter offers better accuracy as well as dramatically lower computational
costs. Along the way, we introduce a new PEFT method called (IA)$^3$ that
scales activations by learned vectors, attaining stronger performance while
only introducing a relatively tiny amount of new parameters. We also propose a
simple recipe based on the T0 model called T-Few that can be applied to new
tasks without task-specific tuning or modifications. We validate the
effectiveness of T-Few on completely unseen tasks by applying it to the RAFT
benchmark, attaining super-human performance for the first time and
outperforming the state-of-the-art by 6% absolute. All of the code used in our
experiments is publicly available.
"
Evaluating the Impact of Model Scale for Compositional Generalization in  Semantic Parsing,Linlu Qiu,http://arxiv.org/pdf/2205.12253v2.pdf,2022-05-24,['cs.cl'],2205.12253v2.pdf,"  Despite their strong performance on many tasks, pre-trained language models
have been shown to struggle on out-of-distribution compositional
generalization. Meanwhile, recent work has shown considerable improvements on
many NLP tasks from model scaling. Can scaling up model size also improve
compositional generalization in semantic parsing? We evaluate encoder-decoder
models up to 11B parameters and decoder-only models up to 540B parameters, and
compare model scaling curves for three different methods for applying a
pre-trained language model to a new task: fine-tuning all parameters, prompt
tuning, and in-context learning. We observe that fine-tuning generally has flat
or negative scaling curves on out-of-distribution compositional generalization
in semantic parsing evaluations. In-context learning has positive scaling
curves, but is generally outperformed by much smaller fine-tuned models.
Prompt-tuning can outperform fine-tuning, suggesting further potential
improvements from scaling as it exhibits a more positive scaling curve.
Additionally, we identify several error trends that vary with model scale. For
example, larger models are generally better at modeling the syntax of the
output space, but are also more prone to certain types of overfitting. Overall,
our study highlights limitations of current techniques for effectively
leveraging model scale for compositional generalization, while our analysis
also suggests promising directions for future work.
"
Controllable Dialogue Simulation with In-Context Learning,Zekun Li,http://arxiv.org/pdf/2210.04185v4.pdf,2022-10-09,"['cs.cl', 'cs.ai']",2210.04185v4.pdf,"  Building dialogue systems requires a large corpus of annotated dialogues.
Such datasets are usually created via crowdsourcing, which is expensive and
time-consuming. In this paper, we propose \textsc{Dialogic}, a novel dialogue
simulation method based on large language model in-context learning to automate
dataset creation. Seeded with a few annotated dialogues, \textsc{Dialogic}
automatically selects in-context examples for demonstration and prompts GPT-3
to generate new dialogues and annotations in a controllable way. Our method can
rapidly expand a small set of dialogue data with minimum or zero \textit{human
involvement} and \textit{parameter update} and is thus much more cost-efficient
and time-saving than crowdsourcing. Experimental results on the MultiWOZ
dataset demonstrate that training a model on the simulated dialogues leads to
even better performance than using the same amount of human-generated dialogues
under the challenging low-resource settings, with as few as 85 dialogues as a
seed. When enough data is available, our method can still serve as an effective
data augmentation method. Human evaluation results also show that our simulated
dialogues have near-human fluency and annotation accuracy. The code and data
are available at \textbf{\url{https://github.com/Leezekun/dialogic}}.
"
XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for  Cross-lingual Text-to-SQL Semantic Parsing,Peng Shi,http://arxiv.org/pdf/2210.13693v1.pdf,2022-10-25,['cs.cl'],2210.13693v1.pdf,"  In-context learning using large language models has recently shown surprising
results for semantic parsing tasks such as Text-to-SQL translation. Prompting
GPT-3 or Codex using several examples of question-SQL pairs can produce
excellent results, comparable to state-of-the-art finetuning-based models.
However, existing work primarily focuses on English datasets, and it is unknown
whether large language models can serve as competitive semantic parsers for
other languages. To bridge this gap, our work focuses on cross-lingual
Text-to-SQL semantic parsing for translating non-English utterances into SQL
queries based on an English schema. We consider a zero-shot transfer learning
setting with the assumption that we do not have any labeled examples in the
target language (but have annotated examples in English). This work introduces
the XRICL framework, which learns to retrieve relevant English exemplars for a
given query to construct prompts. We also include global translation exemplars
for a target language to facilitate the translation process for large language
models. To systematically evaluate our model, we construct two new benchmark
datasets, XSpider and XKaggle-dbqa, which include questions in Chinese,
Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively
leverages large pre-trained language models to outperform existing baselines.
Data and code are publicly available at https://github.com/Impavidity/XRICL.
"
Images Speak in Images: A Generalist Painter for In-Context Visual  Learning,Xinlong Wang,http://arxiv.org/pdf/2212.02499v2.pdf,2022-12-05,['cs.cv'],2212.02499v2.pdf,"  In-context learning, as a new paradigm in NLP, allows the model to rapidly
adapt to various tasks with only a handful of prompts and examples. But in
computer vision, in-context learning is difficult because tasks vary
significantly in their output representations, so it is unclear how to
define the general-purpose task prompts that the vision model can understand
and transfer to out-of-domain tasks. In this work, we present Painter, a
generalist model which addresses these obstacles with an ""image""-centric
solution, that is, to redefine the output of core vision tasks as images, and
specify task prompts also as images. With this idea, our training process is
extremely simple, which performs standard masked image modeling on the stitch
of input and output image pairs. This makes the model capable of performing
tasks conditioned on visible image patches. Thus, during inference, we can
adopt a pair of input and output images from the same task as the input
condition, to indicate which task to perform. Without bells and whistles, our
generalist Painter can achieve competitive performance compared to
well-established task-specific models, on seven representative vision tasks
ranging from high-level visual understanding to low-level image processing. In
addition, Painter significantly outperforms recent generalist models on several
challenging tasks.
"
General-Purpose In-Context Learning by Meta-Learning Transformers,Louis Kirsch,http://arxiv.org/pdf/2212.04458v1.pdf,2022-12-08,"['cs.lg', 'cs.ai', 'cs.ne', 'stat.ml']",2212.04458v1.pdf,"  Modern machine learning requires system designers to specify aspects of the
learning pipeline, such as losses, architectures, and optimizers.
Meta-learning, or learning-to-learn, instead aims to learn those aspects, and
promises to unlock greater capabilities with less manual effort. One
particularly ambitious goal of meta-learning is to train general-purpose
in-context learning algorithms from scratch, using only black-box models with
minimal inductive bias. Such a model takes in training data, and produces
test-set predictions across a wide range of problems, without any explicit
definition of an inference model, training loss, or optimization algorithm. In
this paper we show that Transformers and other black-box models can be
meta-trained to act as general-purpose in-context learners. We characterize
phase transitions between algorithms that generalize, algorithms that memorize,
and algorithms that fail to meta-train at all, induced by changes in model
size, number of tasks, and meta-optimization. We further show that the
capabilities of meta-trained algorithms are bottlenecked by the accessible
state size (memory) determining the next prediction, unlike standard models
which are thought to be bottlenecked by parameter count. Finally, we propose
practical interventions such as biasing the training distribution that improve
the meta-training and meta-generalization of general-purpose learning
algorithms.
"
Demonstrate-Search-Predict: Composing retrieval and language models for  knowledge-intensive NLP,Omar Khattab,http://arxiv.org/pdf/2212.14024v2.pdf,2022-12-28,"['cs.cl', 'cs.ir']",2212.14024v2.pdf,"  Retrieval-augmented in-context learning has emerged as a powerful approach
for addressing knowledge-intensive tasks using frozen language models (LM) and
retrieval models (RM). Existing work has combined these in simple
""retrieve-then-read"" pipelines in which the RM retrieves passages that are
inserted into the LM prompt. To begin to fully realize the potential of frozen
LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that
relies on passing natural language texts in sophisticated pipelines between an
LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware
demonstrations, search for relevant passages, and generate grounded
predictions, systematically breaking down problems into small transformations
that the LM and RM can handle more reliably. We have written novel DSP programs
for answering questions in open-domain, multi-hop, and conversational settings,
establishing in early evaluations new state-of-the-art in-context learning
results and delivering 37-120%, 8-39%, and 80-290% relative gains against the
vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a
contemporaneous self-ask pipeline, respectively. We release DSP at
https://github.com/stanfordnlp/dsp
"
How Does In-Context Learning Help Prompt Tuning?,Simeng Sun,http://arxiv.org/pdf/2302.11521v1.pdf,2023-02-22,['cs.cl'],2302.11521v1.pdf,"  Fine-tuning large language models is becoming ever more impractical due to
their rapidly-growing scale. This motivates the use of parameter-efficient
adaptation methods such as prompt tuning (PT), which adds a small number of
tunable embeddings to an otherwise frozen model, and in-context learning (ICL),
in which demonstrations of the task are provided to the model in natural
language without any additional training. Recently, Singhal et al. (2022)
propose ``instruction prompt tuning'' (IPT), which combines PT with ICL by
concatenating a natural language demonstration with learned prompt embeddings.
While all of these methods have proven effective on different tasks, how they
interact with each other remains unexplored. In this paper, we empirically
study when and how in-context examples improve prompt tuning by measuring the
effectiveness of ICL, PT, and IPT on five text generation tasks with multiple
base language models. We observe that (1) IPT does \emph{not} always outperform
PT, and in fact requires the in-context demonstration to be semantically
similar to the test input to yield improvements; (2) PT is unstable and
exhibits high variance, but combining PT and ICL (into IPT) consistently
reduces variance across all five tasks; and (3) prompts learned for a specific
source task via PT exhibit positive transfer when paired with in-context
examples of a different target task. Our results offer actionable insights on
choosing a suitable parameter-efficient adaptation method for a given task.
"
Larger language models do in-context learning differently,Jerry Wei,http://arxiv.org/pdf/2303.03846v2.pdf,2023-03-07,['cs.cl'],2303.03846v2.pdf,"  We study how in-context learning (ICL) in language models is affected by
semantic priors versus input-label mappings. We investigate two setups-ICL with
flipped labels and ICL with semantically-unrelated labels-across various model
families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments
on ICL with flipped labels show that overriding semantic priors is an emergent
ability of model scale. While small language models ignore flipped labels
presented in-context and thus rely primarily on semantic priors from
pretraining, large models can override semantic priors when presented with
in-context exemplars that contradict priors, despite the stronger semantic
priors that larger models may hold. We next study semantically-unrelated label
ICL (SUL-ICL), in which labels are semantically unrelated to their inputs
(e.g., foo/bar instead of negative/positive), thereby forcing language models
to learn the input-label mappings shown in in-context exemplars in order to
perform the task. The ability to do SUL-ICL also emerges primarily with scale,
and large-enough language models can even perform linear classification in a
SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that
instruction tuning strengthens both the use of semantic priors and the capacity
to learn input-label mappings, but more of the former.
"
How Many Demonstrations Do You Need for In-context Learning?,Jiuhai Chen,http://arxiv.org/pdf/2303.08119v3.pdf,2023-03-14,['cs.ai'],2303.08119v3.pdf,"  Large language models (LLMs) are capable of performing complex reasoning by
in-context learning (ICL) when provided with a few input-output demonstrations
(demos) and more powerful when intermediate reasoning steps (""chain of thoughts
(CoT)"") of the demos are given. Is it necessary to use multi-demo in ICL? In
this paper, we study ICL using fewer demos for each test query on the tasks
in~\cite{wei2022chain}. Surprisingly, we do not observe significant degradation
when using only one randomly chosen demo. To study this phenomenon, for each
test query, we categorize demos into ""correct demos"" leading to the correct
answer, and ""wrong demos"" resulting in wrong answers. Our analysis reveals an
inherent bias in those widely studied datasets: most demos are correct for a
majority of test queries, which explains the good performance of using one
random demo. Moreover, ICL (with and w/o CoT) using only one correct demo
significantly outperforms all-demo ICL adopted by most previous works,
indicating the weakness of LLMs in finding correct demo(s) for input queries,
which is difficult to evaluate on the biased datasets. Furthermore, we observe
a counterintuitive behavior of ICL using multi-demo, i.e., its accuracy
degrades (improves) when given more correct (wrong) demos. This implies that ICL
can be easily misguided by interference among demos and their spurious
correlations. Our analyses highlight several fundamental challenges that need
to be addressed in LLMs training, ICL, and benchmark design.
"
Improving Visual Question Answering Models through Robustness Analysis  and In-Context Learning with a Chain of Basic Questions,Jia-Hong Huang,http://arxiv.org/pdf/2304.03147v1.pdf,2023-04-06,"['cs.cv', 'cs.ai']",2304.03147v1.pdf,"  Deep neural networks have been critical in the task of Visual Question
Answering (VQA), with research traditionally focused on improving model
accuracy. Recently, however, there has been a trend towards evaluating the
robustness of these models against adversarial attacks. This involves assessing
the accuracy of VQA models under increasing levels of noise in the input, which
can target either the image or the proposed query question, dubbed the main
question. However, there is currently a lack of proper analysis of this aspect
of VQA. This work proposes a new method that utilizes semantically related
questions, referred to as basic questions, acting as noise to evaluate the
robustness of VQA models. It is hypothesized that as the similarity of a basic
question to the main question decreases, the level of noise increases. To
generate a reasonable noise level for a given main question, a pool of basic
questions is ranked based on their similarity to the main question, and this
ranking problem is cast as a LASSO optimization problem. Additionally, this
work proposes a novel robustness measure, R_score, and two basic question
datasets to standardize the analysis of VQA model robustness. The experimental
results demonstrate that the proposed evaluation method effectively analyzes
the robustness of VQA models. Moreover, the experiments show that in-context
learning with a chain of basic questions can enhance model accuracy.
"
GeneGPT: Augmenting Large Language Models with Domain Tools for Improved  Access to Biomedical Information,Qiao Jin,http://arxiv.org/pdf/2304.09667v3.pdf,2023-04-19,"['cs.cl', 'cs.ai', 'q-bio.gn']",2304.09667v3.pdf,"  While large language models (LLMs) have been successfully applied to various
tasks, they still face challenges with hallucinations. Augmenting LLMs with
domain-specific tools such as database utilities can facilitate easier and more
precise access to specialized knowledge. In this paper, we present GeneGPT, a
novel method for teaching LLMs to use the Web APIs of the National Center for
Biotechnology Information (NCBI) for answering genomics questions.
Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs
by in-context learning and an augmented decoding algorithm that can detect and
execute API calls. Experimental results show that GeneGPT achieves
state-of-the-art performance on eight tasks in the GeneTuring benchmark with an
average score of 0.83, largely surpassing retrieval-augmented LLMs such as the
new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as
well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1)
API demonstrations have good cross-task generalizability and are more useful
than documentations for in-context learning; (2) GeneGPT can generalize to
longer chains of API calls and answer multi-hop questions in GeneHop, a novel
dataset introduced in this work; (3) Different types of errors are enriched in
different tasks, providing valuable insights for future improvements.
"
DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with  Self-Correction,Mohammadreza Pourreza,http://arxiv.org/pdf/2304.11015v3.pdf,2023-04-21,"['cs.cl', 'cs.ai', 'cs.db', 'cs.hc']",2304.11015v3.pdf,"  There is currently a significant gap between the performance of fine-tuned
models and prompting approaches using Large Language Models (LLMs) on the
challenging task of text-to-SQL, as evaluated on datasets such as Spider. To
improve the performance of LLMs in the reasoning process, we study how
decomposing the task into smaller sub-tasks can be effective. In particular, we
show that breaking down the generation problem into sub-problems and feeding
the solutions of those sub-problems into LLMs can be an effective approach for
significantly improving their performance. Our experiments with three LLMs show
that this approach consistently improves their simple few-shot performance by
roughly 10%, pushing the accuracy of LLMs towards SOTA or surpassing it. On the
holdout test set of Spider, the SOTA, in terms of execution accuracy, was 79.9
and the new SOTA at the time of this writing using our approach is 85.3. Our
approach with in-context learning beats many heavily fine-tuned models by at
least 5%. Additionally, when evaluated on the BIRD benchmark, our approach
achieved an execution accuracy of 55.9%, setting a new SOTA on its holdout test
set.
"
Few-shot In-context Learning for Knowledge Base Question Answering,Tianle Li,http://arxiv.org/pdf/2305.01750v2.pdf,2023-05-02,"['cs.cl', 'cs.ai']",2305.01750v2.pdf,"  Question answering over knowledge bases is considered a difficult problem due
to the challenge of generalizing to a wide variety of possible natural language
questions. Additionally, the heterogeneity of knowledge base schema items
between different knowledge bases often necessitates specialized training for
different knowledge base question-answering (KBQA) datasets. To handle
questions over diverse KBQA datasets with a unified training-free framework, we
propose KB-BINDER, which for the first time enables few-shot in-context
learning over KBQA tasks. Firstly, KB-BINDER leverages large language models
like Codex to generate logical forms as the draft for a specific question by
imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge
base to bind the generated draft to an executable one with BM25 score matching.
The experimental results on four public heterogeneous KBQA datasets show that
KB-BINDER can achieve a strong performance with only a few in-context
demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even
outperform the state-of-the-art trained models. On GrailQA and WebQSP, our
model is also on par with other fully-trained models. We believe KB-BINDER can
serve as an important baseline for future research. Our code is available at
https://github.com/ltl3A87/KB-BINDER.
"
How Do In-Context Examples Affect Compositional Generalization?,Shengnan An,http://arxiv.org/pdf/2305.04835v3.pdf,2023-05-08,"['cs.cl', 'cs.ai']",2305.04835v3.pdf,"  Compositional generalization--understanding unseen combinations of seen
primitives--is an essential reasoning capability in human intelligence. The AI
community mainly studies this capability by fine-tuning neural networks on lots
of training samples, while it is still unclear whether and how in-context
learning--the prevailing few-shot paradigm based on large language
models--exhibits compositional generalization. In this paper, we present CoFe,
a test suite to investigate in-context compositional generalization. We find
that the compositional generalization performance can be easily affected by the
selection of in-context examples, thus raising the research question what the
key factors are to make good in-context examples for compositional
generalization. We study three potential factors: similarity, diversity and
complexity. Our systematic experiments indicate that in-context examples should
be structurally similar to the test case, diverse from each other, and
individually simple. Furthermore, two strong limitations are observed:
in-context compositional generalization on fictional words is much weaker than
that on commonly used ones; it is still critical that the in-context examples
should cover required linguistic structures, even though the backbone model has
been pre-trained on a large corpus. We hope our analysis will facilitate the
understanding and utilization of in-context learning paradigm.
"
Symbol tuning improves in-context learning in language models,Jerry Wei,http://arxiv.org/pdf/2305.08298v1.pdf,2023-05-15,['cs.cl'],2305.08298v1.pdf,"  We present symbol tuning - finetuning language models on in-context
input-label pairs where natural language labels (e.g., ""positive/negative
sentiment"") are replaced with arbitrary symbols (e.g., ""foo/bar""). Symbol
tuning leverages the intuition that when a model cannot use instructions or
natural language labels to figure out a task, it must instead do so by learning
the input-label mappings.
  We experiment with symbol tuning across Flan-PaLM models up to 540B
parameters and observe benefits across various settings. First, symbol tuning
boosts performance on unseen in-context learning tasks and is much more robust
to underspecified prompts, such as those without instructions or without
natural language labels. Second, symbol-tuned models are much stronger at
algorithmic reasoning tasks, with up to 18.2% better performance on the List
Functions benchmark and up to 15.3% better performance on the Simple Turing
Concepts benchmark. Finally, symbol-tuned models show large improvements in
following flipped-labels presented in-context, meaning that they are more
capable of using in-context information to override prior semantic knowledge.
"
Text Classification via Large Language Models,Xiaofei Sun,http://arxiv.org/pdf/2305.08377v3.pdf,2023-05-15,['cs.cl'],2305.08377v3.pdf,"  Despite the remarkable success of large-scale Language Models (LLMs) such as
GPT-3, their performances still significantly underperform fine-tuned models in
the task of text classification. This is due to (1) the lack of reasoning
ability in addressing complex linguistic phenomena (e.g., intensification,
contrast, irony, etc.); (2) the limited number of tokens allowed in in-context
learning.
  In this paper, we introduce Clue And Reasoning Prompting (CARP). CARP adopts
a progressive reasoning strategy tailored to addressing the complex linguistic
phenomena involved in text classification: CARP first prompts LLMs to find
superficial clues (e.g., keywords, tones, semantic relations, references, etc),
based on which a diagnostic reasoning process is induced for final decisions.
To further address the limited-token issue, CARP uses a fine-tuned model on the
supervised dataset for $k$NN demonstration search in the in-context learning,
allowing the model to take the advantage of both LLM's generalization ability
and the task-specific evidence provided by the full labeled dataset.
Remarkably, CARP yields new SOTA performances on 4 out of 5 widely-used
text-classification benchmarks, 97.39 (+1.24) on SST-2, 96.40 (+0.72) on
AGNews, 98.78 (+0.25) on R8 and 96.95 (+0.6) on R52, and a performance
comparable to SOTA on MR (92.39 v.s. 93.3). More importantly, we find that CARP
delivers impressive abilities on low-resource and domain-adaptation setups.
Specifically, using 16 examples per class, CARP achieves comparable
performances to supervised models with 1,024 examples per class.
"
Exploring In-Context Learning Capabilities of Foundation Models for  Generating Knowledge Graphs from Text,Hanieh Khorashadizadeh,http://arxiv.org/pdf/2305.08804v1.pdf,2023-05-15,['cs.cl'],2305.08804v1.pdf,"  Knowledge graphs can represent information about the real-world using
entities and their relations in a structured and semantically rich manner and
they enable a variety of downstream applications such as question-answering,
recommendation systems, semantic search, and advanced analytics. However, at
the moment, building a knowledge graph involves a lot of manual effort and thus
hinders their application in some situations and the automation of this process
might benefit especially for small organizations. Automatically generating
structured knowledge graphs from a large volume of natural language is still a
challenging task and the research on sub-tasks such as named entity extraction,
relation extraction, entity and relation linking, and knowledge graph
construction aims to improve the state of the art of automatic construction and
completion of knowledge graphs from text. Recent foundation models, with
billions of parameters trained in a self-supervised manner on large volumes of
data and adaptable to a variety of downstream tasks, have demonstrated high
performance on a wide range of Natural Language Processing (NLP) tasks. In this
context, one emerging paradigm is in-context learning, where a language model
is used as is with a prompt that provides instructions and some examples to
perform a task, without changing the parameters of the model through
traditional approaches such as fine-tuning. This way, no computing resources
are needed for re-training/fine-tuning the models and the engineering effort is
minimal. Thus, it would be beneficial to utilize
such capabilities for generating knowledge graphs from text.
"
"What In-Context Learning ""Learns"" In-Context: Disentangling Task  Recognition and Task Learning",Jane Pan,http://arxiv.org/pdf/2305.09731v1.pdf,2023-05-16,"['cs.cl', 'cs.lg']",2305.09731v1.pdf,"  Large language models (LLMs) exploit in-context learning (ICL) to solve tasks
with only a few demonstrations, but its mechanisms are not yet well-understood.
Some works suggest that LLMs only recall already learned concepts from
pre-training, while others hint that ICL performs implicit learning over
demonstrations. We characterize two ways through which ICL leverages
demonstrations. Task recognition (TR) captures the extent to which LLMs can
recognize a task through demonstrations -- even without ground-truth labels --
and apply their pre-trained priors, whereas task learning (TL) is the ability
to capture new input-label mappings unseen in pre-training. Using a wide range
of classification datasets and three LLM families (GPT-3, LLaMA and OPT), we
design controlled experiments to disentangle the roles of TR and TL in ICL. We
show that (1) models can achieve non-trivial performance with only TR, and TR
does not further improve with larger models or more demonstrations; (2) LLMs
acquire TL as the model scales, and TL's performance consistently improves with
more demonstrations in context. Our findings unravel two different forces
behind ICL and we advocate for discriminating them in future ICL research due
to their distinct nature.
"
Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context  Learning,Dong-Ho Lee,http://arxiv.org/pdf/2305.10613v3.pdf,2023-05-17,['cs.cl'],2305.10613v3.pdf,"  Temporal knowledge graph (TKG) forecasting benchmarks challenge models to
predict future facts using knowledge of past facts. In this paper, we apply
large language models (LLMs) to these benchmarks using in-context learning
(ICL). We investigate whether and to what extent LLMs can be used for TKG
forecasting, especially without any fine-tuning or explicit modules for
capturing structural and temporal information. For our experiments, we present
a framework that converts relevant historical facts into prompts and generates
ranked predictions using token probabilities. Surprisingly, we observe that
LLMs, out-of-the-box, perform on par with state-of-the-art TKG models carefully
designed and trained for TKG forecasting. Our extensive evaluation presents
performances across several models and datasets with different characteristics,
compares alternative heuristics for preparing contextual information, and
contrasts to prominent TKG methods and simple frequency and recency baselines.
We also discover that using numerical indices instead of entity/relation names,
i.e., hiding semantic information, does not significantly affect the
performance ($\pm$0.4\% Hit@1). This shows that prior semantic knowledge is
unnecessary; instead, LLMs can leverage the existing patterns in the context to
achieve such performance. Our analysis also reveals that ICL enables LLMs to
learn irregular patterns from the historical context, going beyond simple
predictions based on common or recent information.
"
Learning In-context Learning for Named Entity Recognition,Jiawei Chen,http://arxiv.org/pdf/2305.11038v3.pdf,2023-05-18,['cs.cl'],2305.11038v3.pdf,"  Named entity recognition in real-world applications suffers from the
diversity of entity types, the emergence of new entity types, and the lack of
high-quality annotations. To address the above problems, this paper proposes an
in-context learning-based NER approach, which can effectively inject in-context
NER ability into PLMs and recognize entities of novel types on-the-fly using
only a few demonstrative instances. Specifically, we model PLMs as a
meta-function $\lambda_{\text{instruction, demonstrations, text}}.\,\mathcal{M}$,
and a new entity extractor can be implicitly constructed by applying new
instructions and demonstrations to PLMs, i.e.,
$(\lambda.\,\mathcal{M})(\text{instruction, demonstrations}) \to \mathcal{F}$,
where $\mathcal{F}$ will be a new entity extractor, i.e.,
$\mathcal{F}: \text{text} \to \text{entities}$. To inject the
above in-context NER ability into PLMs, we propose a meta-function pre-training
algorithm, which pre-trains PLMs by comparing the (instruction,
demonstration)-initialized extractor with a surrogate golden extractor.
Experimental results on 4 few-shot NER datasets show that our method can
effectively inject in-context NER ability into PLMs and significantly
outperforms the PLMs+fine-tuning counterparts.
"
PlugMed: Improving Specificity in Patient-Centered Medical Dialogue  Generation using In-Context Learning,Chengfeng Dou,http://arxiv.org/pdf/2305.11508v2.pdf,2023-05-19,"['cs.cl', 'cs.ai', 'i.2.7']",2305.11508v2.pdf,"  The patient-centered medical dialogue systems strive to offer diagnostic
interpretation services to users with limited medical knowledge by emphasizing
the importance of providing responses specific to the patient. It is difficult
for large language models (LLMs) to guarantee the specificity of responses,
despite their promising performance on some tasks in the medical field.
Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical
Dialogue System, to address this challenge. PlugMed is equipped with two
modules, the prompt generation (PG) module and the response ranking (RR)
module, to enhance LLMs' dialogue strategies and improve the specificity of the
dialogue. The PG module is designed to stimulate the imitative ability of LLMs
by providing them with real dialogues from similar patients as prompts. The RR
module incorporates a fine-tuned small model as a response filter to enable the
selection of appropriate responses generated by LLMs. Furthermore, we introduce
a new evaluation method based on matching both the user's intent and
high-frequency medical terms to effectively assess the specificity of the
responses. We conduct
experimental evaluations on three medical dialogue datasets, and the results,
including both automatic and human evaluation, demonstrate the effectiveness of
our approach.
"
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via  Tool Embeddings,Shibo Hao,http://arxiv.org/pdf/2305.11554v3.pdf,2023-05-19,"['cs.cl', 'cs.lg']",2305.11554v3.pdf,"  Augmenting large language models (LLMs) with external tools has emerged as a
promising approach to solving complex problems. However, traditional methods,
which finetune LLMs with tool demonstration data, can be both costly and
restricted to a predefined set of tools. The recent in-context learning
paradigm alleviates these issues, but the limited context length only allows
for a few demonstrations, leading to a suboptimal understanding of the tools.
Moreover, when there are numerous tools to choose from, in-context learning
could completely fail to work. In this paper, we propose an alternative
approach, $\textbf{ToolkenGPT}$, which combines the benefits of both sides. Our
approach represents each $\underline{tool}$ as a to$\underline{ken}$
($\textit{toolken}$) and learns an embedding for it, enabling tool calls in the
same way as generating a regular word token. Once a toolken is triggered, the
LLM is prompted to complete arguments for the tool to execute. ToolkenGPT
offers the flexibility to plug in an arbitrary number of tools by expanding the
set of toolkens on the fly. In addition, it improves tool use by allowing
extensive demonstration data for learning the toolken embeddings. In diverse
domains, including numerical reasoning, knowledge-based question answering, and
embodied plan generation, our approach effectively augments LLMs with tools and
substantially outperforms various latest baselines. ToolkenGPT demonstrates the
promising ability to use relevant tools from a large tool set in complex
scenarios.
"
Iterative Forward Tuning Boosts In-context Learning in Language Models,Jiaxi Yang,http://arxiv.org/pdf/2305.13016v2.pdf,2023-05-22,['cs.cl'],2305.13016v2.pdf,"  Large language models (LLMs) have exhibited an emergent in-context learning
(ICL) ability. However, ICL models that can solve ordinary cases by processing
the demonstration examples only once are hard to extend to more complex tasks.
This single-turn ICL is at odds with the human decision-making process of
learning from analogy. In this paper, we propose an effective and
efficient two-stage framework to boost ICL in LLMs by exploiting a dual form
between Transformer attention and gradient descent-based optimization.
Concretely, we divide the ICL process into ""Deep-Thinking"" and inference
stages. The ""Deep-Thinking"" stage performs iterative forward optimization of
demonstrations, which is expected to boost the reasoning abilities of LLMs at
test time by ""thinking"" demonstrations multiple times. It produces accumulated
meta-gradients by manipulating the Key-Value matrices in the self-attention
modules of the Transformer. Then, the inference stage only takes the test query
as input without concatenating demonstrations and applies the learned
meta-gradients through attention for output prediction. In this way,
demonstrations are not required during the inference stage since they are
already learned and stored in the definitive meta-gradients. LLMs can be
effectively and efficiently adapted to downstream tasks. Extensive experiments
on ten classification and multiple-choice datasets show that our method
achieves substantially better performance than standard ICL in terms of both
accuracy and efficiency.
"
Measuring Inductive Biases of In-Context Learning with Underspecified  Demonstrations,Chenglei Si,http://arxiv.org/pdf/2305.13299v1.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.lg']",2305.13299v1.pdf,"  In-context learning (ICL) is an important paradigm for adapting large
language models (LLMs) to new tasks, but the generalization behavior of ICL
remains poorly understood. We investigate the inductive biases of ICL from the
perspective of feature bias: which feature ICL is more likely to use given a
set of underspecified demonstrations in which two features are equally
predictive of the labels. First, we characterize the feature biases of GPT-3
models by constructing underspecified demonstrations from a range of NLP
datasets and feature combinations. We find that LLMs exhibit clear feature
biases - for example, demonstrating a strong bias to predict labels according
to sentiment rather than shallow lexical features, like punctuation. Second, we
evaluate the effect of different interventions that are designed to impose an
inductive bias in favor of a particular feature, such as adding a natural
language instruction or using semantically relevant label words. We find that,
while many interventions can influence the learner to prefer a particular
feature, it can be difficult to overcome strong prior biases. Overall, our
results provide a broader picture of the types of features that ICL may be more
likely to exploit and how to impose inductive biases that are better aligned
with the intended task.
"
Exploring Diverse In-Context Configurations for Image Captioning,Xu Yang,http://arxiv.org/pdf/2305.14800v5.pdf,2023-05-24,['cs.cv'],2305.14800v5.pdf,"  After discovering that Language Models (LMs) can be good in-context few-shot
learners, numerous strategies have been proposed to optimize in-context
sequence configurations. Recently, researchers in Vision-Language (VL) domains
have also developed their own few-shot learners, but they only use the simplest
strategy, i.e., random sampling, to configure in-context image-text pairs. In
order to
explore the effects of varying configurations on VL in-context learning, we
devised four strategies for image selection and four for caption assignment to
configure in-context image-text pairs for image captioning. Image captioning
is used as the case study since it can be seen as a visually conditioned LM.
Our comprehensive experiments yield two
counter-intuitive but valuable insights, highlighting the distinct
characteristics of VL in-context learning due to multi-modal synergy, as
compared to the NLP case. Furthermore, in our exploration of optimal
combination strategies, we observed an average improvement of 20.9 CIDEr
points compared to the baseline. The code is given at
https://github.com/yongliang-wu/ExploreCfg.
"
Estimating Large Language Model Capabilities without Labeled Test Data,Harvey Yiyun Fu,http://arxiv.org/pdf/2305.14802v2.pdf,2023-05-24,['cs.cl'],2305.14802v2.pdf,"  Large Language Models (LLMs) have the impressive ability to perform
in-context learning (ICL) from only a few examples, but the success of ICL
varies widely from task to task. Thus, it is important to quickly determine
whether ICL is applicable to a new task, but directly evaluating ICL accuracy
can be expensive in situations where test data is expensive to annotate -- the
exact situations where ICL is most appealing. In this paper, we propose the
task of ICL accuracy estimation, in which we predict the accuracy of an LLM
when doing in-context learning on a new task given only unlabeled test data for
that task. To perform ICL accuracy estimation, we propose a method that trains
a meta-model using LLM confidence scores as features. We compare our method to
several strong accuracy estimation baselines on a new benchmark that covers 4
LLMs and 3 task collections. The meta-model improves over all baselines across
8 out of 12 settings and achieves the same estimation performance as directly
evaluating on 40 collected labeled test examples per task. At the same time, no
existing approach provides an accurate and reliable ICL accuracy estimation in
every setting, highlighting the need for better ways to measure the uncertainty
of LLM predictions.
"
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual  Transfer,Akari Asai,http://arxiv.org/pdf/2305.14857v1.pdf,2023-05-24,['cs.cl'],2305.14857v1.pdf,"  Despite remarkable advancements in few-shot generalization in natural
language processing, most models are developed and evaluated primarily in
English. To facilitate research on few-shot cross-lingual transfer, we
introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across
54 languages in a sequence-to-sequence format and provides a fixed set of
few-shot examples and instructions. BUFFET is designed to establish a rigorous
and equitable evaluation framework for few-shot cross-lingual transfer across a
broad range of tasks and languages. Using BUFFET, we perform thorough
evaluations of state-of-the-art multilingual large language models with
different transfer methods, namely in-context learning and fine-tuning. Our
findings reveal significant room for improvement in few-shot in-context
cross-lingual transfer. In particular, ChatGPT with in-context learning often
performs worse than much smaller mT5-base models fine-tuned on English task
data and few-shot in-language examples. Our analysis suggests various avenues
for future research in few-shot cross-lingual transfer, such as improved
pretraining, understanding, and future evaluations.
"
Adversarial Demonstration Attacks on Large Language Models,Jiongxiao Wang,http://arxiv.org/pdf/2305.14950v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.cr', 'cs.lg']",2305.14950v2.pdf,"  With the emergence of more powerful large language models (LLMs), such as
ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence
in leveraging these models for specific tasks by utilizing data-label pairs as
precondition prompts. While incorporating demonstrations can greatly enhance
the performance of LLMs across various tasks, it may introduce a new security
concern: attackers can manipulate only the demonstrations without changing the
input to perform an attack. In this paper, we investigate the security concern
of ICL from an adversarial perspective, focusing on the impact of
demonstrations. We propose a novel attack method named advICL, which aims to
manipulate only the demonstration without changing the input to mislead the
models. Our results demonstrate that as the number of demonstrations increases,
the robustness of in-context learning decreases. Additionally, we identify an
intrinsic property of demonstrations: they can be prepended to different
inputs. This introduces a more practical threat model in which an attacker can
attack the test input example even without knowing or manipulating it. To
achieve this, we propose the transferable
version of advICL, named Transferable-advICL. Our experiment shows that the
adversarial demonstration generated by Transferable-advICL can successfully
attack the unseen test input examples. We hope that our study reveals the
critical security risks associated with ICL and underscores the need for
extensive research on the robustness of ICL, particularly given its increasing
significance in the advancement of LLMs.
"
Self-ICL: Zero-Shot In-Context Learning with Self-Generated  Demonstrations,Wei-Lin Chen,http://arxiv.org/pdf/2305.15035v2.pdf,2023-05-24,['cs.cl'],2305.15035v2.pdf,"  Large language models (LLMs) have exhibited striking in-context learning
(ICL) ability to adapt to target tasks with a few input-output demonstrations.
For better ICL, different methods are proposed to select representative
demonstrations from existing training corpora. However, such settings are not
aligned with real-world practices, as end-users usually query LMs without
access to demonstration pools. In this work, we introduce Self-ICL -- a simple
framework which bootstraps LMs' intrinsic capabilities to perform zero-shot
ICL. Given a test input, Self-ICL first prompts the model to generate
pseudo-inputs. Next, the model predicts pseudo-labels for the pseudo-inputs via
zero-shot prompting. Finally, we perform ICL for the test input with the
pseudo-input-label pairs as demonstrations. Evaluation on 23 BIG-Bench Hard
tasks shows Self-ICL outperforms zero-shot baselines on both average accuracy
and head-to-head comparison. Moreover, with zero-shot chain-of-thought,
Self-ICL achieves results comparable to using real demonstrations.
Additionally, we conduct a range of analyses to validate Self-ICL's
effectiveness and provide insights for its behaviors under different settings.
"
Measuring and Mitigating Constraint Violations of In-Context Learning  for Utterance-to-API Semantic Parsing,Shufan Wang,http://arxiv.org/pdf/2305.15338v1.pdf,2023-05-24,"['cs.ai', 'cs.cl']",2305.15338v1.pdf,"  In executable task-oriented semantic parsing, the system aims to translate
users' utterances in natural language to machine-interpretable programs (API
calls) that can be executed according to pre-defined API specifications. With
the popularity of Large Language Models (LLMs), in-context learning offers a
strong baseline for such scenarios, especially in data-limited regimes.
However, LLMs are known to hallucinate and therefore pose a formidable
challenge in constraining generated content. Thus, it remains uncertain if LLMs
can effectively perform task-oriented utterance-to-API generation where
respecting API's structural and task-specific constraints is crucial.
  In this work, we seek to measure, analyze and mitigate such constraint
violations. First, we identify the categories of various constraints involved
in obtaining API semantics from task-oriented utterances, and define
fine-grained metrics that complement traditional ones. Second, we leverage
these metrics to conduct a detailed error analysis of constraint violations
seen in
state-of-the-art LLMs, which motivates us to investigate two mitigation
strategies: Semantic-Retrieval of Demonstrations (SRD) and API-aware
Constrained Decoding (API-CD). Our experiments show that these strategies are
effective at reducing constraint violations and improving the quality of the
generated API calls, but require careful consideration given their
implementation complexity and latency.
"
What can Large Language Models do in chemistry? A comprehensive  benchmark on eight tasks,Taicheng Guo,http://arxiv.org/pdf/2305.18365v2.pdf,2023-05-27,"['cs.cl', 'cs.ai']",2305.18365v2.pdf,"  Large Language Models (LLMs) with strong abilities in natural language
processing tasks have emerged and have been applied in various kinds of areas
such as science, finance and software engineering. However, the capability of
LLMs to advance the field of chemistry remains unclear. In this paper, rather
than pursuing state-of-the-art performance, we aim to evaluate capabilities of
LLMs in a wide range of tasks across the chemistry domain. We identify three
key chemistry-related capabilities including understanding, reasoning and
explaining to explore in LLMs and establish a benchmark containing eight
chemistry tasks. Our analysis draws on widely recognized datasets facilitating
a broad exploration of the capacities of LLMs within the context of practical
chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are
evaluated for each chemistry task in zero-shot and few-shot in-context learning
settings with carefully selected demonstration examples and specially crafted
prompts. Our investigation found that GPT-4 outperformed the other models, and
the LLMs exhibit different levels of competence across the eight chemistry
tasks. In addition to
the key findings from the comprehensive benchmark analysis, our work provides
insights into the limitation of current LLMs and the impact of in-context
learning settings on LLMs' performance across various chemistry tasks. The code
and datasets used in this study are available at
https://github.com/ChemFoundationModels/ChemLLMBench.
"
Mitigating Label Biases for In-context Learning,Yu Fei,http://arxiv.org/pdf/2305.19148v3.pdf,2023-05-28,"['cs.cl', 'cs.ai', 'cs.lg']",2305.19148v3.pdf,"  Various design settings for in-context learning (ICL), such as the choice and
order of the in-context examples, can bias a model toward a particular
prediction without being reflective of an understanding of the task. While many
studies discuss these design choices, there have been few systematic
investigations into categorizing them and mitigating their impact. In this
work, we define a typology for three types of label biases in ICL for text
classification: vanilla-label bias, context-label bias, and domain-label bias
(which we conceptualize and detect for the first time).
  Our analysis demonstrates that prior label bias calibration methods fall
short of addressing all three types of biases. Specifically, domain-label bias
restricts LLMs to random-level performance on many tasks regardless of the
choice of in-context examples. To mitigate the effect of these biases, we
propose a simple bias calibration method that estimates a language model's
label bias using random in-domain words from the task corpus. After controlling
for this estimated bias when making predictions, our novel domain-context
calibration significantly improves the ICL performance of GPT-J and GPT-3 on a
wide range of tasks. The gain is substantial on tasks with large domain-label
bias (up to 37% in Macro-F1). Furthermore, our results generalize to models
with different scales, pretraining methods, and manually-designed task
instructions, showing the prevalence of label biases in ICL.
"
"What and How does In-Context Learning Learn? Bayesian Model Averaging,  Parameterization, and Generalization",Yufeng Zhang,http://arxiv.org/pdf/2305.19420v2.pdf,2023-05-30,"['stat.ml', 'cs.lg']",2305.19420v2.pdf,"  In this paper, we conduct a comprehensive study of In-Context Learning (ICL)
by addressing several open questions: (a) What type of ICL estimator is learned
by large language models? (b) What is a proper performance metric for ICL and
what is the error rate? (c) How does the transformer architecture enable ICL?
To answer these questions, we adopt a Bayesian view and formulate ICL as a
problem of predicting the response corresponding to the current covariate,
given a number of examples drawn from a latent variable model. To answer (a),
we show that, without updating the neural network parameters, ICL implicitly
implements the Bayesian model averaging algorithm, which is proven to be
approximately parameterized by the attention mechanism. For (b), we analyze the
ICL performance from an online learning perspective and establish a
$\mathcal{O}(1/T)$ regret bound for perfectly pretrained ICL, where $T$ is the
number of examples in the prompt. To answer (c), we show that, in addition to
encoding Bayesian model averaging via attention, the transformer architecture
also enables a fine-grained statistical analysis of pretraining under realistic
assumptions. In particular, we prove that the error of the pretrained model is
bounded by a sum of an approximation error and a generalization error, where
the former decays to zero exponentially as the depth grows, and the latter
decays to zero sublinearly with the number of tokens in the pretraining
dataset. Our results provide a unified understanding of the transformer and its
ICL ability with bounds on ICL regret, approximation, and generalization, which
deepens our knowledge of these essential aspects of modern language models.
"
Augmenting Language Models with Long-Term Memory,Weizhi Wang,http://arxiv.org/pdf/2306.07174v1.pdf,2023-06-12,['cs.cl'],2306.07174v1.pdf,"  Existing large language models (LLMs) can only afford fixed-size inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
"
Pretraining task diversity and the emergence of non-Bayesian in-context  learning for regression,Allan RaventĂłs,http://arxiv.org/pdf/2306.15063v2.pdf,2023-06-26,"['cs.lg', 'cs.ai', 'cs.cl']",2306.15063v2.pdf,"  Pretrained transformers exhibit the remarkable ability of in-context learning
(ICL): they can learn tasks from just a few examples provided in the prompt
without updating any weights. This raises a foundational question: can ICL
solve fundamentally $\textit{new}$ tasks that are very different from those
seen during pretraining? To probe this question, we examine ICL's performance
on linear regression while varying the diversity of tasks in the pretraining
dataset. We empirically demonstrate a $\textit{task diversity threshold}$ for
the emergence of ICL. Below this threshold, the pretrained transformer cannot
solve unseen regression tasks, instead behaving like a Bayesian estimator with
the $\textit{non-diverse pretraining task distribution}$ as the prior. Beyond
this threshold, the transformer significantly outperforms this estimator; its
behavior aligns with that of ridge regression, corresponding to a Gaussian
prior over $\textit{all tasks}$, including those not seen during pretraining.
Thus, when pretrained on data with task diversity greater than the threshold,
transformers $\textit{can}$ optimally solve fundamentally new tasks in-context.
Importantly, this capability hinges on the model deviating from the Bayes optimal
estimator with the pretraining distribution as the prior. This study also
explores the effect of regularization, model capacity and task structure and
underscores, in a concrete example, the critical role of task diversity,
alongside data and model scale, in the emergence of ICL. Code is available at
https://github.com/mansheej/icl-task-diversity.
"
Understanding In-Context Learning via Supportive Pretraining Data,Xiaochuang Han,http://arxiv.org/pdf/2306.15091v1.pdf,2023-06-26,['cs.cl'],2306.15091v1.pdf,"  In-context learning (ICL) improves language models' performance on a variety
of NLP tasks by simply demonstrating a handful of examples at inference time.
It is not well understood why ICL ability emerges, as the model has never been
specifically trained on such demonstrations. Unlike prior work that explores
implicit mechanisms behind ICL, we study ICL via investigating the pretraining
data. Specifically, we first adapt an iterative, gradient-based approach to
find a small subset of pretraining data that supports ICL. We observe that a
continued pretraining on this small subset significantly improves the model's
ICL ability, by up to 18%. We then compare the supportive subset contrastively
with random subsets of pretraining data and discover: (1) The supportive
pretraining data to ICL do not have a higher domain relevance to downstream
tasks. (2) The supportive pretraining data have a higher mass of rarely
occurring, long-tail tokens. (3) The supportive pretraining data are
challenging examples where the information gain from long-range context is
below average, indicating learning to incorporate difficult long-range context
encourages ICL. Our work takes a first step towards understanding ICL via
analyzing instance-level pretraining data. Our insights have a potential to
enhance the ICL ability of language models by actively guiding the construction
of pretraining data in the future.
"
Schema-learning and rebinding as mechanisms of in-context learning and  emergence,Sivaramakrishnan Swaminathan,http://arxiv.org/pdf/2307.01201v1.pdf,2023-06-16,"['cs.cl', 'cs.ai']",2307.01201v1.pdf,"  In-context learning (ICL) is one of the most powerful and most unexpected
capabilities to emerge in recent transformer-based large language models
(LLMs). Yet the mechanisms that underlie it are poorly understood. In this
paper, we demonstrate that comparable ICL capabilities can be acquired by an
alternative sequence prediction learning method using clone-structured causal
graphs (CSCGs). Moreover, a key property of CSCGs is that, unlike
transformer-based LLMs, they are {\em interpretable}, which considerably
simplifies the task of explaining how ICL works. Specifically, we show that it
uses a combination of (a) learning template (schema) circuits for pattern
completion, (b) retrieving relevant templates in a context-sensitive manner,
and (c) rebinding of novel tokens to appropriate slots in the templates. We go
on to marshal evidence for the hypothesis that similar mechanisms underlie ICL
in LLMs. For example, we find that, with CSCGs as with LLMs, different
capabilities emerge at different levels of overparameterization, suggesting
that overparameterization helps in learning more complex template (schema)
circuits. By showing how ICL can be achieved with small models and datasets, we
open up a path to novel architectures, and take a vital step towards a more
general understanding of the mechanics behind this important capability.
"
Towards Understanding In-Context Learning with Contrastive  Demonstrations and Saliency Maps,Zongxia Li,http://arxiv.org/pdf/2307.05052v1.pdf,2023-07-11,"['cs.cl', 'cs.ai']",2307.05052v1.pdf,"  We investigate the role of various demonstration components in the in-context
learning (ICL) performance of large language models (LLMs). Specifically, we
explore the impacts of ground-truth labels, input distribution, and
complementary explanations, particularly when these are altered or perturbed.
We build on previous work, which offers mixed findings on how these elements
influence ICL. To probe these questions, we employ explainable NLP (XNLP)
methods and utilize saliency maps of contrastive demonstrations for both
qualitative and quantitative analysis. Our findings reveal that flipping
ground-truth labels significantly affects the saliency, though it's more
noticeable in larger LLMs. Our analysis of the input distribution at a granular
level reveals that changing sentiment-indicative terms in a sentiment analysis
task to neutral ones does not have as substantial an impact as altering
ground-truth labels. Finally, we find that the effectiveness of complementary
explanations in boosting ICL performance is task-dependent, with limited
benefits seen in sentiment analysis tasks compared to symbolic reasoning tasks.
These insights are critical for understanding the functionality of LLMs and
guiding the development of effective demonstrations, which is increasingly
relevant in light of the growing use of LLMs in applications such as ChatGPT.
Our research code is publicly available at https://github.com/paihengxu/XICL.
"
In-context learning for model-free system identification,Marco Forgione,http://arxiv.org/pdf/2308.13380v1.pdf,2023-08-25,"['eess.sy', 'cs.lg', 'cs.sy']",2308.13380v1.pdf,"  In traditional system identification, we estimate a model of an unknown
dynamical system based on given input/output sequences and available physical
knowledge. Yet, is it also possible to understand the intricacies of dynamical
systems not solely from their input/output patterns, but by observing the
behavior of other systems within the same class? This central question drives
the study presented in this paper.
  In response to this query, we introduce a novel paradigm for system
identification, addressing two primary tasks: one-step-ahead prediction and
multi-step simulation. Unlike conventional methods, we do not directly estimate
a model for the specific system. Instead, we pretrain a meta model that
represents a class of dynamical systems. This meta model is trained from a
potentially infinite stream of synthetic data, generated by systems randomly
extracted from a certain distribution. At its core, the meta model serves as an
implicit representation of the main characteristics of a class of dynamical
systems. When provided with a brief context from a new system - specifically, a
short input/output sequence - the meta model implicitly discerns its dynamics,
enabling predictions of its behavior.
  The proposed approach harnesses the power of Transformer architectures,
renowned for their in-context learning capabilities in Natural Language
Processing tasks. For one-step prediction, a GPT-like decoder-only architecture
is utilized, whereas the simulation problem employs an encoder-decoder
structure.
  Initial experimental results affirmatively answer our foundational question,
opening doors to fresh research avenues in system identification.
"
Ambiguity-Aware In-Context Learning with Large Language Models,Lingyu Gao,http://arxiv.org/pdf/2309.07900v1.pdf,2023-09-14,"['cs.cl', 'cs.ir']",2309.07900v1.pdf,"  In-context learning (ICL), i.e., showing LLMs only a few task-specific
demonstrations, has led to downstream gains with no task-specific fine-tuning
required. However, LLMs are sensitive to the choice of prompts, and therefore a
crucial research question is how to select good demonstrations for ICL. One
effective strategy is leveraging semantic similarity between the ICL
demonstrations and test inputs by using a text retriever, which however is
sub-optimal as that does not consider the LLM's existing knowledge about that
task. From prior work (Min et al., 2022), we already know that labels paired
with the demonstrations bias the model predictions. This leads to our
hypothesis: does considering the LLM's existing knowledge about the task,
especially with respect to the output label space, help in designing a better
demonstration selection strategy? Through extensive experimentation on three
text classification tasks, we find that it is beneficial to not only choose
semantically similar ICL demonstrations but also to choose those demonstrations
that help resolve the inherent label ambiguity surrounding the test example.
Interestingly, we find that including demonstrations that the LLM previously
misclassified, and that also fall on the test example's decision boundary,
brings the most performance gain.
"
Are Human-generated Demonstrations Necessary for In-context Learning?,Rui Li,http://arxiv.org/pdf/2309.14681v2.pdf,2023-09-26,"['cs.lg', 'cs.ai']",2309.14681v2.pdf,"  Despite the promising few-shot ability of large language models (LLMs), the
standard paradigm of In-context Learning (ICL) suffers from susceptibility to
the selected demonstrations and the intricacy of generating them. In this
paper, we raise the fundamental question of whether human-generated
demonstrations are necessary for ICL. To answer this question,
we propose self-contemplation prompting strategy (SEC), a paradigm free from
human-crafted demonstrations. The key point of SEC is that, instead of using
hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create
demonstrations on their own, based on which the final output is generated. SEC
is a flexible framework and can be adapted to both the vanilla ICL and the
chain-of-thought (CoT), but with greater ease, as the manual generation of
both examples and rationales can be skipped. Extensive experiments in
arithmetic reasoning, commonsense reasoning, multi-task language understanding,
and code generation benchmarks, show that SEC, which does not require
hand-crafted demonstrations, significantly outperforms the zero-shot learning
strategy, and achieves comparable results to ICL with hand-crafted
demonstrations. This demonstrates that, for many tasks, contemporary LLMs
possess a sufficient level of competence to exclusively depend on their own
capacity for decision making, removing the need for external training data.
Code is available at https://github.com/ruili33/SEC.
"
Beyond Task Performance: Evaluating and Reducing the Flaws of Large  Multimodal Models with In-Context Learning,Mustafa Shukor,http://arxiv.org/pdf/2310.00647v1.pdf,2023-10-01,"['cs.cv', 'cs.mm']",2310.00647v1.pdf,"  Following the success of Large Language Models (LLMs), Large Multimodal
Models (LMMs), such as the Flamingo model and its subsequent competitors, have
started to emerge as natural steps towards generalist agents. However,
interacting with recent LMMs reveals major limitations that are hardly captured
by the current evaluation benchmarks. Indeed, task performances (e.g., VQA
accuracy) alone do not provide enough clues to understand their real
capabilities, limitations, and to which extent such models are aligned to human
expectations. To refine our understanding of those flaws, we deviate from the
current evaluation paradigm and propose the EvALign-ICL framework, in which we
(1) evaluate 8 recent open-source LMMs (based on the Flamingo architecture such
as OpenFlamingo and IDEFICS) on 5 different axes: hallucinations, abstention,
compositionality, explainability and instruction following. Our evaluation on
these axes reveals major flaws in LMMs. To efficiently address these problems,
and inspired by the success of in-context learning (ICL) in LLMs, (2) we
explore ICL as a solution and study how it affects these limitations. Based on
our ICL study, (3) we push ICL further and propose new multimodal ICL
approaches such as Multitask-ICL, Chain-of-Hindsight-ICL, and
Self-Correcting-ICL. Our findings are as follows; (1) Despite their success,
LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL
on LMMs' flaws is nuanced; despite its effectiveness for improved
explainability, abstention, and instruction following, ICL does not improve
compositional abilities, and actually even amplifies hallucinations. (3) The
proposed ICL variants are promising as post-hoc approaches to efficiently
tackle some of those flaws. The code is available here:
https://evalign-icl.github.io/
"
Understanding In-Context Learning in Transformers and LLMs by Learning  to Learn Discrete Functions,Satwik Bhattamishra,http://arxiv.org/pdf/2310.03016v1.pdf,2023-10-04,"['cs.lg', 'cs.cl']",2310.03016v1.pdf,"  In order to understand the in-context learning phenomenon, recent works have
adopted a stylized experimental framework and demonstrated that Transformers
can learn gradient-based learning algorithms for various classes of real-valued
functions. However, the limitations of Transformers in implementing learning
algorithms, and their ability to learn other forms of algorithms are not well
understood. Additionally, the degree to which these capabilities are confined
to attention-based models is unclear. Furthermore, it remains to be seen
whether the insights derived from these stylized settings can be extrapolated
to pretrained Large Language Models (LLMs). In this work, we take a step
towards answering these questions by demonstrating the following: (a) On a
test-bed with a variety of Boolean function classes, we find that Transformers
can nearly match the optimal learning algorithm for 'simpler' tasks, while
their performance deteriorates on more 'complex' tasks. Additionally, we find
that certain attention-free models perform (almost) identically to Transformers
on a range of tasks. (b) When provided a teaching sequence, i.e. a set of
examples that uniquely identifies a function in a class, we show that
Transformers learn more sample-efficiently. Interestingly, our results show
that Transformers can learn to implement two distinct algorithms to solve a
single task, and can adaptively select the more sample-efficient algorithm
depending on the sequence of in-context examples. (c) Lastly, we show that
extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines
on prediction tasks that are guaranteed to not be in their training set.
"
SEER : A Knapsack approach to Exemplar Selection for In-Context HybridQA,Jonathan Tonglet,http://arxiv.org/pdf/2310.06675v2.pdf,2023-10-10,['cs.cl'],2310.06675v2.pdf,"  Question answering over hybrid contexts is a complex task, which requires the
combination of information extracted from unstructured texts and structured
tables in various ways. Recently, In-Context Learning demonstrated significant
performance advances for reasoning tasks. In this paradigm, a large language
model performs predictions based on a small set of supporting exemplars. The
performance of In-Context Learning depends heavily on the selection procedure
of the supporting exemplars, particularly in the case of HybridQA, where
considering the diversity of reasoning chains and the large size of the hybrid
contexts becomes crucial. In this work, we present Selection of ExEmplars for
hybrid Reasoning (SEER), a novel method for selecting a set of exemplars that
is both representative and diverse. The key novelty of SEER is that it
formulates exemplar selection as a Knapsack Integer Linear Program. The
Knapsack framework provides the flexibility to incorporate diversity
constraints that prioritize exemplars with desirable attributes, and capacity
constraints that ensure that the prompt size respects the provided capacity
budgets. The effectiveness of SEER is demonstrated on FinQA and TAT-QA, two
real-world benchmarks for HybridQA, where it outperforms previous exemplar
selection methods.
"
How Do Transformers Learn In-Context Beyond Simple Functions? A Case  Study on Learning with Representations,Tianyu Guo,http://arxiv.org/pdf/2310.10616v1.pdf,2023-10-16,['cs.lg'],2310.10616v1.pdf,"  While large language models based on the transformer architecture have
demonstrated remarkable in-context learning (ICL) capabilities, understandings
of such capabilities are still in an early stage, where existing theory and
mechanistic understanding focus mostly on simple scenarios such as learning
simple function classes. This paper takes initial steps on understanding ICL in
more complex scenarios, by studying learning with representations. Concretely,
we construct synthetic in-context learning problems with a compositional
structure, where the label depends on the input through a possibly complex but
fixed representation function, composed with a linear function that differs in
each instance. By construction, the optimal ICL algorithm first transforms the
inputs by the representation function, and then performs linear ICL on top of
the transformed dataset. We show theoretically the existence of transformers
that approximately implement such algorithms with mild depth and size.
Empirically, we find trained transformers consistently achieve near-optimal ICL
performance in this setting, and exhibit the desired dissection where lower
layers transform the dataset and upper layers perform linear ICL. Through
extensive probing and a new pasting experiment, we further reveal several
mechanisms within the trained transformers, such as concrete copying behaviors
on both the inputs and the representations, linear ICL capability of the upper
layers alone, and a post-ICL representation selection mechanism in a harder
mixture setting. These observed mechanisms align well with our theory and may
shed light on how transformers perform ICL in more realistic scenarios.
"
Demonstrations Are All You Need: Advancing Offensive Content  Paraphrasing using In-Context Learning,Anirudh Som,http://arxiv.org/pdf/2310.10707v1.pdf,2023-10-16,"['cs.cl', 'cs.ai']",2310.10707v1.pdf,"  Paraphrasing of offensive content is a better alternative to content removal
and helps improve civility in a communication environment. Supervised
paraphrasers, however, rely heavily on large quantities of labelled data to
help preserve meaning and intent. They also retain a large portion of the
offensiveness of the original content, which raises questions on their overall
usability. In this paper we aim to assist practitioners in developing usable
paraphrasers by exploring In-Context Learning (ICL) with large language models
(LLMs), i.e., using a limited number of input-label demonstration pairs to
guide the model in generating desired outputs for specific queries. Our study
focuses on key factors such as -- number and order of demonstrations, exclusion
of prompt instruction, and reduction in measured toxicity. We perform
principled evaluation on three datasets, including our proposed Context-Aware
Polite Paraphrase dataset, comprising dialogue-style rude utterances, polite
paraphrases, and additional dialogue context. We evaluate our approach using
two closed source and one open source LLM. Our results reveal that ICL is
comparable to supervised methods in generation quality, while being
qualitatively better by 25% on human evaluation and attaining lower toxicity by
76%. Also, ICL-based paraphrasers only show a slight reduction in performance
even with just 10% training data.
"
O3D: Offline Data-driven Discovery and Distillation for Sequential  Decision-Making with Large Language Models,Yuchen Xiao,http://arxiv.org/pdf/2310.14403v1.pdf,2023-10-22,"['cs.ai', 'cs.cl']",2310.14403v1.pdf,"  Recent advancements in large language models (LLMs) have exhibited promising
performance in solving sequential decision-making problems. By imitating
few-shot examples provided in the prompts (i.e., in-context learning), an LLM
agent can interact with an external environment and complete given tasks
without additional training. However, such few-shot examples are often
insufficient to generate high-quality solutions for complex and long-horizon
tasks, while the limited context length cannot consume larger-scale
demonstrations. To this end, we propose an offline learning framework that
utilizes offline data at scale (e.g., logs of human interactions) to facilitate
the in-context learning performance of LLM agents. We formally define
LLM-powered policies with both text-based approaches and code-based approaches.
We then introduce an Offline Data-driven Discovery and Distillation (O3D)
framework to improve LLM-powered policies without finetuning. O3D automatically
discovers reusable skills and distills generalizable knowledge across multiple
tasks based on offline interaction data, advancing the capability of solving
downstream tasks. Empirical results under two interactive decision-making
benchmarks (ALFWorld and WebShop) demonstrate that O3D can notably enhance the
decision-making capabilities of LLMs through the offline discovery and
distillation process, and consistently outperform baselines across various LLMs
with both text-based-policy and code-based-policy.
"
Transformers Learn Higher-Order Optimization Methods for In-Context  Learning: A Study with Linear Models,Deqing Fu,http://arxiv.org/pdf/2310.17086v1.pdf,2023-10-26,"['cs.lg', 'cs.ai', 'cs.cl']",2310.17086v1.pdf,"  Transformers are remarkably good at in-context learning (ICL) -- learning
from demonstrations without parameter updates -- but how they perform ICL
remains a mystery. Recent work suggests that Transformers may learn in-context
by internally running Gradient Descent, a first-order optimization method. In
this paper, we instead demonstrate that Transformers learn to implement
higher-order optimization methods to perform ICL. Focusing on in-context linear
regression, we show that Transformers learn to implement an algorithm very
similar to Iterative Newton's Method, a higher-order optimization method,
rather than Gradient Descent. Empirically, we show that predictions from
successive Transformer layers closely match different iterations of Newton's
Method linearly, with each middle layer roughly computing 3 iterations. In
contrast, exponentially more Gradient Descent steps are needed to match an
additional Transformer layer; this suggests that Transformers have a rate of
convergence comparable to higher-order methods such as Iterative Newton, which
are exponentially faster than Gradient Descent. We also show that
Transformers can learn in-context on ill-conditioned data, a setting where
Gradient Descent struggles but Iterative Newton succeeds. Finally, we show
theoretical results which support our empirical findings and have a close
correspondence with them: we prove that Transformers can implement $k$
iterations of Newton's method with $\mathcal{O}(k)$ layers.
"
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time,Zichang Liu,http://arxiv.org/pdf/2310.17157v1.pdf,2023-10-26,['cs.lg'],2310.17157v1.pdf,"  Large language models (LLMs) with hundreds of billions of parameters have
sparked a new wave of exciting AI applications. However, they are
computationally expensive at inference time. Sparsity is a natural approach to
reduce this cost, but existing methods either require costly retraining, have
to forgo LLM's in-context learning ability, or do not yield wall-clock time
speedup on modern hardware. We hypothesize that contextual sparsity, which are
small, input-dependent sets of attention heads and MLP parameters that yield
approximately the same output as the dense model for a given input, can address
these issues. We show that contextual sparsity exists, that it can be
accurately predicted, and that we can exploit it to speed up LLM inference in
wall-clock time without compromising LLM's quality or in-context learning
ability. Based on these insights, we propose DejaVu, a system that uses a
low-cost algorithm to predict contextual sparsity on the fly given inputs to
each layer, along with an asynchronous and hardware-aware implementation that
speeds up LLM inference. We validate that DejaVu can reduce the inference
latency of OPT-175B by over 2X compared to the state-of-the-art
FasterTransformer, and over 6X compared to the widely used Hugging Face
implementation, without compromising model quality. The code is available at
https://github.com/FMInference/DejaVu.
"
Improving Input-label Mapping with Demonstration Replay for In-context  Learning,Zhuocheng Gong,http://arxiv.org/pdf/2310.19572v1.pdf,2023-10-30,['cs.cl'],2310.19572v1.pdf,"  In-context learning (ICL) is an emerging capability of large autoregressive
language models where a few input-label demonstrations are appended to the
input to enhance the model's understanding of downstream NLP tasks, without
directly adjusting the model parameters. The effectiveness of ICL can be
attributed to the strong language modeling capabilities of large language
models (LLMs), which enable them to learn the mapping between input and labels
based on in-context demonstrations. Despite achieving promising results, the
causal nature of language modeling in ICL restricts the attention to be
backward only, i.e., a token only attends to its previous tokens, failing to
capture the full input-label information and limiting the model's performance.
In this paper, we propose a novel ICL method called Repeated Demonstration with
Sliding Causal Attention (RdSca). Specifically, we duplicate later
demonstrations and concatenate them to the front, allowing the model to
`observe' the later information even under the causal restriction. Besides, we
introduce sliding causal attention, which customizes causal attention to avoid
information leakage. Experimental results show that our method significantly
improves the input-label mapping in ICL demonstrations. We also conduct an
in-depth analysis of how to customize the causal attention without training,
which has been an unexplored area in previous research.
"
Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in  Transformer Models,Steve Yadlowsky,http://arxiv.org/pdf/2311.00871v1.pdf,2023-11-01,"['cs.lg', 'cs.cl', 'stat.ml']",2311.00871v1.pdf,"  Transformer models, notably large language models (LLMs), have the remarkable
ability to perform in-context learning (ICL) -- to perform new tasks when
prompted with unseen input-output examples without any explicit model training.
In this work, we study how effectively transformers can bridge between their
pretraining data mixture, comprised of multiple distinct task families, to
identify and learn new tasks in-context which are both inside and outside the
pretraining distribution. Building on previous work, we investigate this
question in a controlled setting, where we study transformer models trained on
sequences of $(x, f(x))$ pairs rather than natural language. Our empirical
results show transformers demonstrate near-optimal unsupervised model selection
capabilities, in their ability to first in-context identify different task
families and in-context learn within them when the task families are
well-represented in their pretraining data. However when presented with tasks
or functions which are out-of-domain of their pretraining data, we demonstrate
various failure modes of transformers and degradation of their generalization
for even simple extrapolation tasks. Together our results highlight that the
impressive ICL abilities of high-capacity sequence models may be more closely
tied to the coverage of their pretraining data mixtures than inductive biases
that create fundamental generalization capabilities.
"
Large Language Models are Few-Shot Summarizers: Multi-Intent Comment  Generation via In-Context Learning,Mingyang Geng,http://arxiv.org/pdf/2304.11384v3.pdf,2023-04-22,['cs.se'],2304.11384v3.pdf,"  Code comment generation aims at generating natural language descriptions for
a code snippet to facilitate developers' program comprehension activities.
Despite being studied for a long time, a bottleneck of existing approaches is
that, given a code snippet, they can only generate one comment, while
developers usually need information from diverse perspectives, such as what the
functionality of the code snippet is and how to use it. To tackle this
limitation, this study empirically investigates the feasibility of utilizing
large language models (LLMs) to generate comments that can fulfill developers'
diverse intents. Our intuition is based on two facts: (1) the code and its
diverse intents. Our intuition is based on the facts that (1) the code and its
paired comment are used during the pre-training process of LLMs to build the
semantic connection between the natural language and programming language, and
(2) comments in the real-world projects, which are collected for the
pre-training, usually contain different developers' intents. We thus postulate
that the LLMs can already understand the code from different perspectives after
the pre-training. Indeed, experiments on two large-scale datasets confirm our
intuition: by adopting the in-context learning paradigm and
giving adequate prompts to the LLM (e.g., providing it with ten or more
examples), the LLM can significantly outperform a state-of-the-art supervised
learning approach on generating comments with multiple intents. Results also
show that customized strategies for constructing the prompts and
post-processing strategies for reranking the results can both boost the LLM's
performances, which shed light on future research directions for using LLMs to
achieve comment generation.
"
Principle-Driven Self-Alignment of Language Models from Scratch with  Minimal Human Supervision,Zhiqing Sun,http://arxiv.org/pdf/2305.03047v1.pdf,2023-05-04,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cy']",2305.03047v1.pdf,"  Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised
fine-tuning (SFT) with human annotations and reinforcement learning from human
feedback (RLHF) to align the output of large language models (LLMs) with human
intentions, ensuring they are helpful, ethical, and reliable. However, this
dependence can significantly constrain the true potential of AI-assistant
agents due to the high cost of obtaining human supervision and the related
issues on quality, reliability, diversity, self-consistency, and undesirable
biases. To address these challenges, we propose a novel approach called
SELF-ALIGN, which combines principle-driven reasoning and the generative power
of LLMs for the self-alignment of AI agents with minimal human supervision. Our
approach encompasses four stages: first, we use an LLM to generate synthetic
prompts, and a topic-guided method to augment the prompt diversity; second, we
use a small set of human-written principles for AI models to follow, and guide
the LLM through in-context learning from demonstrations (of principles
application) to produce helpful, ethical, and reliable responses to user's
queries; third, we fine-tune the original LLM with the high-quality
self-aligned responses so that the resulting model can generate desirable
responses for each query directly without the principle set and the
demonstrations anymore; and finally, we offer a refinement step to address the
issues of overly-brief or indirect responses. Applying SELF-ALIGN to the
LLaMA-65b base language model, we develop an AI assistant named Dromedary. With
fewer than 300 lines of human annotations (including < 200 seed prompts, 16
generic principles, and 5 exemplars for in-context learning), Dromedary
significantly surpasses the performance of several state-of-the-art AI systems,
including Text-Davinci-003 and Alpaca, on benchmark datasets with various
settings.
"
One for All: Towards Training One Graph Model for All Classification  Tasks,Hao Liu,http://arxiv.org/pdf/2310.00149v1.pdf,2023-09-29,['cs.lg'],2310.00149v1.pdf,"  Designing a single model that addresses multiple tasks has been a
long-standing objective in artificial intelligence. Recently, large language
models have demonstrated exceptional capability in integrating and solving
different tasks within the language domain. However, a unified model for
various tasks on graphs remains underexplored, primarily due to the challenges
unique to the graph learning domain. First, graph data from different areas
carry distinct attributes and follow different distributions. Such discrepancy
makes it hard to represent graphs in a single representation space. Second,
tasks on graphs diversify into node, link, and graph tasks, requiring distinct
embedding strategies. Finally, an appropriate graph prompting paradigm for
in-context learning is unclear. Striving to handle all the aforementioned
challenges, we propose One for All (OFA), the first general framework that can
use a single graph model to address the above challenges. Specifically, OFA
proposes text-attributed graphs to unify different graph data by describing
nodes and edges with natural language and uses language models to encode the
diverse and possibly cross-domain text attributes to feature vectors in the
same embedding space. Furthermore, OFA introduces the concept of
nodes-of-interest to standardize different tasks with a single task
representation. For in-context learning on graphs, OFA introduces a novel graph
prompting paradigm that appends prompting substructures to the input graph,
which enables it to address varied tasks without fine-tuning. We train the OFA
model using graph data from multiple domains (including citation networks,
molecular graphs, knowledge graphs, etc.) simultaneously and evaluate its
ability in supervised, few-shot, and zero-shot learning scenarios. OFA performs
well across different tasks, making it the first general-purpose graph
classification model across domains.
"
The Inductive Bias of In-Context Learning: Rethinking Pretraining  Example Design,Yoav Levine,http://arxiv.org/pdf/2110.04541v3.pdf,2021-10-09,"['cs.cl', 'cs.lg']",2110.04541v3.pdf,"  Pretraining Neural Language Models (NLMs) over a large corpus involves
chunking the text into training examples, which are contiguous text segments of
sizes processable by the neural architecture. We highlight a bias introduced by
this common practice: we prove that the pretrained NLM can model much stronger
dependencies between text segments that appeared in the same training example,
than it can between text segments that appeared in different training examples.
This intuitive result has a twofold role. First, it formalizes the motivation
behind a broad line of recent successful NLM training heuristics, proposed for
the pretraining and fine-tuning stages, which do not necessarily appear related
at first glance. Second, our result clearly indicates further improvements to
be made in NLM pretraining for the benefit of Natural Language Understanding
tasks. As an example, we propose ""kNN-Pretraining"": we show that including
semantically related non-neighboring sentences in the same pretraining example
yields improved sentence representations and open domain question answering
abilities. This theoretically motivated degree of freedom for pretraining
example design indicates new training schemes for self-improving
representations.
"
MAGMA -- Multimodal Augmentation of Generative Models through  Adapter-based Finetuning,Constantin Eichenberg,http://arxiv.org/pdf/2112.05253v2.pdf,2021-12-09,"['cs.cv', 'cs.cl', 'i.2.7; i.4.8; i.5.1']",2112.05253v2.pdf,"  Large-scale pretraining is fast becoming the norm in Vision-Language (VL)
modeling. However, prevailing VL approaches are limited by the requirement for
labeled data and the use of complex multi-step pretraining objectives. We
present MAGMA - a simple method for augmenting generative language models with
additional modalities using adapter-based finetuning. Building on Frozen, we
train a series of VL models that autoregressively generate text from arbitrary
combinations of visual and textual input. The pretraining is entirely
end-to-end using a single language modeling objective, simplifying optimization
compared to previous approaches. Importantly, the language model weights remain
unchanged during training, allowing for transfer of encyclopedic knowledge and
in-context learning abilities from language pretraining. MAGMA outperforms
Frozen on open-ended generative tasks, achieving state of the art results on
the OKVQA benchmark and competitive results on a range of other popular VL
benchmarks, while pretraining on 0.2% of the number of samples used to train
SimVLM.
"
Black-Box Tuning for Language-Model-as-a-Service,Tianxiang Sun,http://arxiv.org/pdf/2201.03514v4.pdf,2022-01-10,"['cs.cl', 'cs.ai']",2201.03514v4.pdf,"  Extremely large pre-trained language models (PTMs) such as GPT-3 are usually
released as a service. It allows users to design task-specific prompts to query
the PTMs through some black-box APIs. In such a scenario, which we call
Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually
unavailable. Can we optimize the task prompts by only accessing the model
inference APIs? This paper proposes the black-box tuning framework to optimize
the continuous prompt prepended to the input text via derivative-free
optimization. Instead of optimizing in the original high-dimensional prompt
space, which is intractable for traditional derivative-free optimization, we
perform optimization in a randomly generated subspace due to the low intrinsic
dimensionality of large PTMs. The experimental results show that the black-box
tuning with RoBERTa on a few labeled samples not only significantly outperforms
manual prompts and GPT-3's in-context learning, but also surpasses the
gradient-based counterparts, i.e., prompt tuning and full model tuning.
"
Contrastive Learning for Prompt-Based Few-Shot Language Learners,Yiren Jian,http://arxiv.org/pdf/2205.01308v1.pdf,2022-05-03,"['cs.cl', 'cs.ai']",2205.01308v1.pdf,"  The impressive performance of GPT-3 using natural language prompts and
in-context learning has inspired work on better fine-tuning of moderately-sized
models under this paradigm. Following this line of work, we present a
contrastive learning framework that clusters inputs from the same class for
better generality of models trained with only limited examples. Specifically,
we propose a supervised contrastive framework that clusters inputs from the
same class under different augmented ""views"" and repels the ones from different
classes. We create different ""views"" of an example by appending it with
different language prompts and contextual demonstrations. Combining a
contrastive loss with the standard masked language modeling (MLM) loss in
prompt-based few-shot learners, the experimental results show that our method
can improve over the state-of-the-art methods in a diverse set of 15 language
tasks. Our framework makes minimal assumptions on the task or the base model,
and can be applied to many recent methods with little modification. The code
will be made available at: https://github.com/yiren-jian/LM-SupCon.
"
Instruction Induction: From Few Examples to Natural Language Task  Descriptions,Or Honovich,http://arxiv.org/pdf/2205.10782v1.pdf,2022-05-22,['cs.cl'],2205.10782v1.pdf,"  Large language models are able to perform a task by conditioning on a few
input-output demonstrations - a paradigm known as in-context learning. We show
that language models can explicitly infer an underlying task from a few
demonstrations by prompting them to generate a natural language instruction
that fits the examples. To explore this ability, we introduce the instruction
induction challenge, compile a dataset consisting of 24 tasks, and define a
novel evaluation metric based on executing the generated instruction. We
discover that, to a large extent, the ability to generate instructions does
indeed emerge when using a model that is both large enough and aligned to
follow instructions; InstructGPT achieves 65.7% of human performance in our
execution-based metric, while the original GPT-3 model reaches only 9.8% of
human performance. This surprising result suggests that instruction induction
might be a viable learning paradigm in and of itself, where instead of fitting
a set of latent continuous parameters to the data, one searches for the best
description in the natural language hypothesis space.
"
Exploring Length Generalization in Large Language Models,Cem Anil,http://arxiv.org/pdf/2207.04901v2.pdf,2022-07-11,"['cs.cl', 'cs.lg']",2207.04901v2.pdf,"  The ability to extrapolate from short problem instances to longer ones is an
important form of out-of-distribution generalization in reasoning tasks, and is
crucial when learning from datasets where longer problem instances are rare.
These include theorem proving, solving quantitative mathematics problems, and
reading/summarizing novels. In this paper, we run careful empirical studies
exploring the length generalization capabilities of transformer-based language
models. We first establish that naively finetuning transformers on length
generalization tasks shows significant generalization deficiencies independent
of model scale. We then show that combining pretrained large language models'
in-context learning abilities with scratchpad prompting (asking the model to
output solution steps before producing an answer) results in a dramatic
improvement in length generalization. We run careful failure analyses on each
of the learning modalities and identify common sources of mistakes that
highlight opportunities in equipping language models with the ability to
generalize to longer problems.
"
Large Language Models are few(1)-shot Table Reasoners,Wenhu Chen,http://arxiv.org/pdf/2210.06710v2.pdf,2022-10-13,['cs.cl'],2210.06710v2.pdf,"  Recent literature has shown that large language models (LLMs) are generally
excellent few-shot reasoners to solve text reasoning tasks. However, the
capability of LLMs on table reasoning tasks is yet to be explored. In this
paper, we aim at understanding how well LLMs can perform table-related tasks
with few-shot in-context learning. Specifically, we evaluated LLMs on popular
table QA and fact verification datasets like WikiTableQuestions, FetaQA,
TabFact, and FEVEROUS and found that LLMs are competent at complex reasoning
over table structures, though these models are not pre-trained on any table
corpus. When combined with 'chain of thoughts' prompting, LLMs can achieve very
strong performance with only a 1-shot demonstration, even on par with some SoTA
models. We show that LLMs are even more competent at generating comprehensive
long-form answers on FetaQA than tuned T5-large. We further manually studied
the reasoning chains elicited from LLMs and found that these reasoning chains
are highly consistent with the underlying semantic form. We believe that LLMs
can serve as a simple yet generic baseline for future research. The code and
data are released in https://github.com/wenhuchen/TableCoT.
"
Explanations from Large Language Models Make Small Reasoners Better,Shiyang Li,http://arxiv.org/pdf/2210.06726v1.pdf,2022-10-13,['cs.cl'],2210.06726v1.pdf,"  Integrating free-text explanations to in-context learning of large language
models (LLM) is shown to elicit strong reasoning capabilities along with
reasonable explanations. In this paper, we consider the problem of leveraging
the explanations generated by LLM to improve the training of small reasoners,
which are more favorable in real-production deployment due to their low cost.
We systematically explore three explanation generation approaches from LLM and
utilize a multi-task learning framework to facilitate small models to acquire
strong reasoning power together with explanation generation capabilities.
Experiments on multiple reasoning tasks show that our method can consistently
and significantly outperform finetuning baselines across different settings,
and even perform better than finetuning/prompting a 60x larger GPT-3 (175B)
model by up to 9.5% in accuracy. As a side benefit, human evaluation further
shows that our method can generate high-quality explanations to justify its
predictions, moving towards the goal of explainable AI.
"
Prompting Language Models for Linguistic Structure,Terra Blevins,http://arxiv.org/pdf/2211.07830v2.pdf,2022-11-15,['cs.cl'],2211.07830v2.pdf,"  Although pretrained language models (PLMs) can be prompted to perform a wide
range of language tasks, it remains an open question how much this ability
comes from generalizable linguistic understanding versus surface-level lexical
patterns. To test this, we present a structured prompting approach for
linguistic structured prediction tasks, allowing us to perform zero- and
few-shot sequence tagging with autoregressive PLMs. We evaluate this approach
on part-of-speech tagging, named entity recognition, and sentence chunking,
demonstrating strong few-shot performance in all cases. We also find that while
PLMs contain significant prior knowledge of task labels due to task leakage
into the pretraining corpus, structured prompting can also retrieve linguistic
structure with arbitrary labels. These findings indicate that the in-context
learning ability and linguistic knowledge of PLMs generalize beyond
memorization of their training data.
"
Visual Programming: Compositional visual reasoning without training,Tanmay Gupta,http://arxiv.org/pdf/2211.11559v1.pdf,2022-11-18,"['cs.cv', 'cs.ai', 'cs.cl']",2211.11559v1.pdf,"  We present VISPROG, a neuro-symbolic approach to solving complex and
compositional visual tasks given natural language instructions. VISPROG avoids
the need for any task-specific training. Instead, it uses the in-context
learning ability of large language models to generate python-like modular
programs, which are then executed to get both the solution and a comprehensive
and interpretable rationale. Each line of the generated program may invoke one
of several off-the-shelf computer vision models, image processing routines, or
python functions to produce intermediate outputs that may be consumed by
subsequent parts of the program. We demonstrate the flexibility of VISPROG on 4
diverse tasks - compositional visual question answering, zero-shot reasoning on
image pairs, factual knowledge object tagging, and language-guided image
editing. We believe neuro-symbolic approaches like VISPROG are an exciting
avenue to easily and effectively expand the scope of AI systems to serve the
long tail of complex tasks that people may wish to perform.
"
Self-Prompting Large Language Models for Zero-Shot Open-Domain QA,Junlong Li,http://arxiv.org/pdf/2212.08635v2.pdf,2022-12-16,"['cs.cl', 'cs.ai']",2212.08635v2.pdf,"  Open-Domain Question Answering (ODQA) aims at answering factoid questions
without explicitly providing specific background documents. In a zero-shot
setting, this task is more challenging since no data is available to train
customized models like Retriever-Readers. Recently, Large Language Models
(LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct
prompting methods, but these methods still fall short of exploiting the full
power of LLMs, which they invoke only implicitly. In this paper, we
propose a Self-Prompting framework to explicitly utilize the massive knowledge
stored in the parameters of LLMs and their strong instruction understanding
abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo
QA pairs with background passages and explanations from scratch and then use
those generated elements for in-context learning. Experimental results show our
method surpasses previous SOTA methods significantly on three widely-used ODQA
datasets, and even achieves comparable performance with some Retriever-Reader
models fine-tuned on full training data.
"
"Don't Generate, Discriminate: A Proposal for Grounding Language Models  to Real-World Environments",Yu Gu,http://arxiv.org/pdf/2212.09736v2.pdf,2022-12-19,"['cs.cl', 'cs.ai', 'i.2.7']",2212.09736v2.pdf,"  A key missing capacity of current language models (LMs) is grounding to
real-world environments. Most existing work for grounded language understanding
uses LMs to directly generate plans that can be executed in the environment to
achieve the desired effects. It thereby casts the burden of ensuring
grammaticality, faithfulness, and controllability all on the LMs. We propose
Pangu, a generic framework for grounded language understanding that capitalizes
on the discriminative ability of LMs instead of their generative ability. Pangu
consists of a symbolic agent and a neural LM working in a concerted fashion:
The agent explores the environment to incrementally construct valid plans, and
the LM evaluates the plausibility of the candidate plans to guide the search
process. A case study on the challenging problem of knowledge base question
answering (KBQA), which features a massive environment, demonstrates the
remarkable effectiveness and flexibility of Pangu: A BERT-base LM is sufficient
for setting a new record on standard KBQA datasets, and larger LMs further
bring substantial gains. Pangu also enables, for the first time, effective
few-shot in-context learning for KBQA with large LMs such as Codex.
"
Ontologically Faithful Generation of Non-Player Character Dialogues,Nathaniel Weir,http://arxiv.org/pdf/2212.10618v2.pdf,2022-12-20,['cs.cl'],2212.10618v2.pdf,"  We introduce a language generation task grounded in a popular video game
environment. KNUDGE (KNowledge Constrained User-NPC Dialogue GEneration)
requires models to produce trees of dialogue between video game characters that
accurately reflect quest and entity specifications stated in natural language.
KNUDGE is constructed from side quest dialogues drawn directly from game data
of Obsidian Entertainment's The Outer Worlds, leading to real-world
complexities in generation: (1) dialogues are branching trees as opposed to
linear chains of utterances; (2) utterances must remain faithful to the game
lore -- character personas, backstories, and entity relationships; and (3) a
dialogue must accurately reveal new quest details to the human player. We
report results for a set of neural generation models using supervised and
in-context learning techniques; we find competent performance but room for
future work addressing the challenges of creating realistic, game-quality
dialogues.
"
Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers,Chengyi Wang,http://arxiv.org/pdf/2301.02111v1.pdf,2023-01-05,"['cs.cl', 'cs.sd', 'eess.as']",2301.02111v1.pdf,"  We introduce a language modeling approach for text to speech synthesis (TTS).
Specifically, we train a neural codec language model (called Vall-E) using
discrete codes derived from an off-the-shelf neural audio codec model, and
regard TTS as a conditional language modeling task rather than continuous
signal regression as in previous work. During the pre-training stage, we scale
up the TTS training data to 60K hours of English speech which is hundreds of
times larger than existing systems. Vall-E exhibits emergent in-context learning
capabilities and can be used to synthesize high-quality personalized speech
with only a 3-second enrolled recording of an unseen speaker as an acoustic
prompt. Experiment results show that Vall-E significantly outperforms the
state-of-the-art zero-shot TTS system in terms of speech naturalness and
speaker similarity. In addition, we find Vall-E could preserve the speaker's
emotion and acoustic environment of the acoustic prompt in synthesis. See
https://aka.ms/valle for demos of our work.
"
Batch Prompting: Efficient Inference with Large Language Model APIs,Zhoujun Cheng,http://arxiv.org/pdf/2301.08721v2.pdf,2023-01-19,"['cs.cl', 'cs.ai']",2301.08721v2.pdf,"  Performing inference on large volumes of samples with large language models
(LLMs) can be computationally and financially costly in industry and real-world
use. We propose batch prompting, a simple yet effective prompting approach that
enables the LLM to run inference in batches, instead of one sample at a time.
Our method reduces both token and time costs while retaining downstream
performance. We theoretically demonstrate that under a few-shot in-context
learning setting, the inference costs scale almost inversely with the
number of samples in each batch. We extensively validate the effectiveness of
batch prompting on ten datasets across commonsense QA, arithmetic reasoning,
and NLI/NLU: batch prompting significantly (up to 5x with six samples in a batch)
reduces the LLM (Codex) inference token and time costs while achieving better
or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5
and GPT-4, we show the benefits of batch prompting also hold. Further analysis
shows that the number of samples in each batch and the complexity of tasks
affect its performance. Moreover, batch prompting can be applied across
different reasoning methods using LLMs. Our code can be found at the site
https://github.com/xlang-ai/batch-prompting.
"
Looped Transformers as Programmable Computers,Angeliki Giannou,http://arxiv.org/pdf/2301.13196v1.pdf,2023-01-30,"['cs.lg', 'cs.ai']",2301.13196v1.pdf,"  We present a framework for using transformer networks as universal computers
by programming them with specific weights and placing them in a loop. Our input
sequence acts as a punchcard, consisting of instructions and memory for data
read/writes. We demonstrate that a constant number of encoder layers can
emulate basic computing blocks, including embedding edit operations, non-linear
functions, function calls, program counters, and conditional branches. Using
these building blocks, we emulate a small instruction-set computer. This allows
us to map iterative algorithms to programs that can be executed by a looped,
13-layer transformer. We show how this transformer, instructed by its input,
can emulate a basic calculator, a basic linear algebra library, and in-context
learning algorithms that employ backpropagation. Our work highlights the
versatility of the attention mechanism, and demonstrates that even shallow
transformers can execute full-fledged, general-purpose programs.
"
Grounding Language Models to Images for Multimodal Inputs and Outputs,Jing Yu Koh,http://arxiv.org/pdf/2301.13823v4.pdf,2023-01-31,"['cs.cl', 'cs.ai', 'cs.cv', 'cs.lg']",2301.13823v4.pdf,"  We propose an efficient method to ground pretrained text-only language models
to the visual domain, enabling them to process arbitrarily interleaved
image-and-text data, and generate text interleaved with retrieved images. Our
method leverages the abilities of language models learnt from large scale
text-only pretraining, such as in-context learning and free-form text
generation. We keep the language model frozen, and finetune input and output
linear layers to enable cross-modality interactions. This allows our model to
process arbitrarily interleaved image-and-text inputs, and generate free-form
text interleaved with retrieved images. We achieve strong zero-shot performance
on grounded tasks such as contextual image retrieval and multimodal dialogue,
and showcase compelling interactive abilities. Our approach works with any
off-the-shelf language model and paves the way towards an effective, general
solution for leveraging pretrained language models in visually grounded
settings.
"
ProofNet: Autoformalizing and Formally Proving Undergraduate-Level  Mathematics,Zhangir Azerbayev,http://arxiv.org/pdf/2302.12433v1.pdf,2023-02-24,"['cs.cl', 'cs.ai', 'cs.lo']",2302.12433v1.pdf,"  We introduce ProofNet, a benchmark for autoformalization and formal proving
of undergraduate-level mathematics. The ProofNet benchmark consists of 371
examples, each consisting of a formal theorem statement in Lean 3, a natural
language theorem statement, and a natural language proof. The problems are
primarily drawn from popular undergraduate pure mathematics textbooks and cover
topics such as real and complex analysis, linear algebra, abstract algebra, and
topology. We intend for ProofNet to be a challenging benchmark that will drive
progress in autoformalization and automatic theorem proving. We report baseline
results on statement autoformalization via in-context learning. Moreover, we
introduce two novel statement autoformalization methods: prompt retrieval and
distilled backtranslation.
"
Finding Support Examples for In-Context Learning,Xiaonan Li,http://arxiv.org/pdf/2302.13539v3.pdf,2023-02-27,['cs.cl'],2302.13539v3.pdf,"  Additionally, the strong dependency among in-context examples makes it an
NP-hard combinatorial optimization problem and enumerating all permutations is
infeasible. Hence we propose LENS, a fiLter-thEN-Search method to tackle this
challenge in two stages: First we filter the dataset to obtain informative
in-context examples individually. Specifically, we propose a novel metric,
InfoScore, to evaluate the example's in-context informativeness based on the
language model's feedback, and further propose a progressive filtering process
to filter out uninformative examples. Then we propose diversity-guided example
search which iteratively refines and evaluates the selected example
permutations, to find examples that fully depict the task. The experimental
results show that LENS significantly outperforms a wide range of baselines.
"
In-Context Instruction Learning,Seonghyeon Ye,http://arxiv.org/pdf/2302.14691v1.pdf,2023-02-28,"['cs.cl', 'cs.ai']",2302.14691v1.pdf,"  Instruction learning of Large Language Models (LLMs) has enabled zero-shot
task generalization. However, instruction learning has been predominantly
approached as a fine-tuning problem, including instruction tuning and
reinforcement learning from human feedback, where LLMs are multi-task
fine-tuned on various tasks with instructions. In this paper, we present a
surprising finding that applying in-context learning to instruction learning,
referred to as In-Context Instruction Learning (ICIL), significantly improves
the zero-shot task generalization performance for both pretrained and
instruction-fine-tuned models. One of the core advantages of ICIL is that it
uses a single fixed prompt to evaluate all tasks, which is a concatenation of
cross-task demonstrations. In particular, we demonstrate that the most powerful
instruction-fine-tuned baseline (text-davinci-003) also benefits from ICIL by
9.3%, indicating that the effect of ICIL is complementary to instruction-based
fine-tuning.
"
Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec  Language Modeling,Ziqiang Zhang,http://arxiv.org/pdf/2303.03926v1.pdf,2023-03-07,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",2303.03926v1.pdf,"  We propose a cross-lingual neural codec language model, VALL-E X, for
cross-lingual speech synthesis. Specifically, we extend VALL-E and train a
multi-lingual conditional codec language model to predict the acoustic token
sequences of the target language speech by using both the source language
speech and the target language text as prompts. VALL-E X inherits strong
in-context learning capabilities and can be applied for zero-shot cross-lingual
text-to-speech synthesis and zero-shot speech-to-speech translation tasks.
Experimental results show that it can generate high-quality speech in the
target language via just one speech utterance in the source language as a
prompt while preserving the unseen speaker's voice, emotion, and acoustic
environment. Moreover, VALL-E X effectively alleviates the foreign accent
problem, which can be controlled by a language ID. Audio samples are available
at \url{https://aka.ms/vallex}.
"
Self-planning Code Generation with Large Language Models,Xue Jiang,http://arxiv.org/pdf/2303.06689v2.pdf,2023-03-12,['cs.se'],2303.06689v2.pdf,"  Although large language models have demonstrated impressive ability in code
generation, they are still struggling to address the complicated intent
provided by humans. It is widely acknowledged that humans typically employ
planning to decompose complex problems and schedule the solution steps prior to
implementation. Thus we introduce planning into code generation to help the
model understand complex intent and reduce the difficulty of problem solving.
This paper proposes a self-planning code generation method with large language
models, which consists of two phases, namely a planning phase and an implementation
phase. Specifically, in the planning phase, the language model plans out the
solution steps from the intent combined with in-context learning. Then it
enters the implementation phase, where the model generates code step by step,
guided by the solution steps. The effectiveness of self-planning code
generation has been rigorously evaluated on multiple code generation datasets
and the results demonstrate a marked superiority over naive direct
generation approaches with language models. The improvement in performance is
substantial, highlighting the significance of self-planning in code generation
tasks.
"
GPT is becoming a Turing machine: Here are some ways to program it,Ana Jojic,http://arxiv.org/pdf/2303.14310v1.pdf,2023-03-25,['cs.cl'],2303.14310v1.pdf,"  We demonstrate that, through appropriate prompting, GPT-3 family of models
can be triggered to perform iterative behaviours necessary to execute (rather
than just write or recall) programs that involve loops, including several
popular algorithms found in computer science curricula or software developer
interviews. We trigger execution and description of Iterations by Regimenting
Self-Attention (IRSA) in one (or a combination) of three ways: 1) Using strong
repetitive structure in an example of an execution path of a target program for
one particular input, 2) Prompting with fragments of execution paths, and 3)
Explicitly forbidding (skipping) self-attention to parts of the generated text.
On a dynamic program execution, IRSA leads to larger accuracy gains than
replacing the model with the much more powerful GPT-4. IRSA has promising
applications in education, as the prompts and responses resemble student
assignments in data structures and algorithms classes. Our findings hold
implications for evaluating LLMs, which typically targets in-context
learning: we show that prompts that may not even cover one full task example
can trigger algorithmic behaviour, allowing solving problems previously thought
of as hard for LLMs, such as logical puzzles. Consequently, prompt design plays
an even more critical role in LLM performance than previously recognized.
"
When Brain-inspired AI Meets AGI,Lin Zhao,http://arxiv.org/pdf/2303.15935v1.pdf,2023-03-28,['cs.ai'],2303.15935v1.pdf,"  Artificial General Intelligence (AGI) has been a long-standing goal of
humanity, with the aim of creating machines capable of performing any
intellectual task that humans can do. To achieve this, AGI researchers draw
inspiration from the human brain and seek to replicate its principles in
intelligent machines. Brain-inspired artificial intelligence is a field that
has emerged from this endeavor, combining insights from neuroscience,
psychology, and computer science to develop more efficient and powerful AI
systems. In this article, we provide a comprehensive overview of brain-inspired
AI from the perspective of AGI. We begin with the current progress in
brain-inspired AI and its extensive connection with AGI. We then cover the
important characteristics for both human intelligence and AGI (e.g., scaling,
multimodality, and reasoning). We discuss important technologies toward
achieving AGI in current AI systems, such as in-context learning and prompt
tuning. We also investigate the evolution of AGI systems from both algorithmic
and infrastructural perspectives. Finally, we explore the limitations and
future of AGI.
"
Larger Probes Tell a Different Story: Extending Psycholinguistic  Datasets Via In-Context Learning,Namrata Shivagunde,http://arxiv.org/pdf/2303.16445v1.pdf,2023-03-29,['cs.cl'],2303.16445v1.pdf,"  Language model probing is often used to test specific capabilities of these
models. However, conclusions from such studies may be limited when the probing
benchmarks are small and lack statistical power. In this work, we introduce
new, larger datasets for negation (NEG-1500-SIMP) and role reversal (ROLE-1500)
inspired by psycholinguistic studies. We dramatically extend existing NEG-136
and ROLE-88 benchmarks using GPT3, increasing their size from 18 and 44
sentence pairs to 750 each. We also create another version of extended negation
dataset (NEG-1500-SIMP-TEMP), created using template-based generation. It
consists of 770 sentence pairs. We evaluate 22 models on the extended datasets,
seeing model performance dip 20-57% compared to the original smaller
benchmarks. We observe high levels of negation sensitivity in models like BERT
and ALBERT, demonstrating that previous findings might have been skewed due to
smaller test sets. Finally, we observe that although GPT3 generated all the
examples in ROLE-1500, it is only able to solve 24.6% of them during probing.
"
Is ChatGPT a Highly Fluent Grammatical Error Correction System? A  Comprehensive Evaluation,Tao Fang,http://arxiv.org/pdf/2304.01746v1.pdf,2023-04-04,['cs.cl'],2304.01746v1.pdf,"  ChatGPT, a large-scale language model based on the advanced GPT-3.5
architecture, has shown remarkable potential in various Natural Language
Processing (NLP) tasks. However, there is currently a dearth of comprehensive
studies exploring its potential in the area of Grammatical Error Correction
(GEC). To showcase its capabilities in GEC, we design zero-shot
chain-of-thought (CoT) and few-shot CoT settings using in-context learning for
ChatGPT. Our evaluation involves assessing ChatGPT's performance on five
official test sets in three different languages, along with three
document-level GEC test sets in English. Our experimental results and human
evaluations demonstrate that ChatGPT has excellent error detection capabilities
and can freely correct errors to make the corrected sentences very fluent,
possibly due to its over-correction tendencies and not adhering to the
principle of minimal edits. Additionally, its performance in non-English and
low-resource settings highlights its potential in multilingual GEC tasks.
However, further analysis of various types of errors at the document-level has
shown that ChatGPT cannot effectively correct agreement, coreference, tense
errors across sentences, and cross-sentence boundary errors.
"
SegGPT: Segmenting Everything In Context,Xinlong Wang,http://arxiv.org/pdf/2304.03284v1.pdf,2023-04-06,['cs.cv'],2304.03284v1.pdf,"  We present SegGPT, a generalist model for segmenting everything in context.
We unify various segmentation tasks into a generalist in-context learning
framework that accommodates different kinds of segmentation data by
transforming them into the same format of images. The training of SegGPT is
formulated as an in-context coloring problem with random color mapping for each
data sample. The objective is to accomplish diverse tasks according to the
context, rather than relying on specific colors. After training, SegGPT can
perform arbitrary segmentation tasks in images or videos via in-context
inference, such as object instance, stuff, part, contour, and text. SegGPT is
evaluated on a broad range of tasks, including few-shot semantic segmentation,
video object segmentation, semantic segmentation, and panoptic segmentation.
Our results show strong capabilities in segmenting in-domain and out-of-domain
targets, either qualitatively or quantitatively.
"
Extractive Summarization via ChatGPT for Faithful Summary Generation,Haopeng Zhang,http://arxiv.org/pdf/2304.04193v2.pdf,2023-04-09,['cs.cl'],2304.04193v2.pdf,"  Extractive summarization is a crucial task in natural language processing
that aims to condense long documents into shorter versions by directly
extracting sentences. The recent introduction of large language models has
attracted significant interest in the NLP community due to their remarkable
performance on a wide range of downstream tasks. This paper first presents a
thorough evaluation of ChatGPT's performance on extractive summarization and
compares it with traditional fine-tuning methods on various benchmark datasets.
Our experimental analysis reveals that ChatGPT exhibits inferior extractive
summarization performance in terms of ROUGE scores compared to existing
supervised systems, while achieving higher performance based on LLM-based
evaluation metrics. In addition, we explore the effectiveness of in-context
learning and chain-of-thought reasoning for enhancing its performance.
Furthermore, we find that applying an extract-then-generate pipeline with
ChatGPT yields significant performance improvements over abstractive baselines
in terms of summary faithfulness. These observations highlight potential
directions for enhancing ChatGPT's capabilities in faithful summarization using
two-stage approaches.
"
Towards Robust Prompts on Vision-Language Models,Jindong Gu,http://arxiv.org/pdf/2304.08479v1.pdf,2023-04-17,['cs.cv'],2304.08479v1.pdf,"  With the advent of vision-language models (VLMs) that can perform in-context
and prompt-based learning, how can we design prompting approaches that robustly
generalize to distribution shift and can be used on novel classes outside the
support set of the prompts? In this work, we first define two types of
robustness to distribution shift on VLMs, namely, robustness on base classes
(the classes included in the support set of prompts) and robustness on novel
classes. Then, we study the robustness of existing in-context learning and
prompt learning approaches, where we find that prompt learning performs
robustly on test images from base classes, while it does not generalize well on
images from novel classes. We propose robust prompt learning by integrating
multiple-scale image features into the prompt, which improves both types of
robustness. Comprehensive experiments are conducted to study the defined
robustness on six benchmarks and show the effectiveness of our proposal.
"
A Latent Space Theory for Emergent Abilities in Large Language Models,Hui Jiang,http://arxiv.org/pdf/2304.09960v3.pdf,2023-04-19,"['cs.cl', 'cs.ai', 'cs.lg']",2304.09960v3.pdf,"  Languages are not created randomly but rather to communicate information.
There is a strong association between languages and their underlying meanings,
resulting in a sparse joint distribution that is heavily peaked according to
their correlations. Moreover, these peak values happen to match with the
marginal distribution of languages due to the sparsity. With the advent of LLMs
trained on big data and large models, we can now precisely assess the marginal
distribution of languages, providing a convenient means of exploring the sparse
structures in the joint distribution for effective inferences. In this paper,
we categorize languages as either unambiguous or $\epsilon$-ambiguous and
present quantitative results to demonstrate that the emergent abilities of
LLMs, such as language understanding, in-context learning, chain-of-thought
prompting, and effective instruction fine-tuning, can all be attributed to
Bayesian inference on the sparse joint distribution of languages.
"
Understanding and Predicting Human Label Variation in Natural Language  Inference through Explanation,Nan-Jiang Jiang,http://arxiv.org/pdf/2304.12443v1.pdf,2023-04-24,['cs.cl'],2304.12443v1.pdf,"  Human label variation (Plank 2022), or annotation disagreement, exists in
many natural language processing (NLP) tasks. To be robust and trusted, NLP
models need to identify such variation and be able to explain it. To this end,
we created the first ecologically valid explanation dataset with diverse
reasoning, LiveNLI. LiveNLI contains annotators' highlights and free-text
explanations for the label(s) of their choice for 122 English Natural Language
Inference items, each with at least 10 annotations. We used its explanations
for chain-of-thought prompting, and found there is still room for improvement
in GPT-3's ability to predict label distribution with in-context learning.
"
"Stance Detection With Supervised, Zero-Shot, and Few-Shot Applications",Michael Burnham,http://arxiv.org/pdf/2305.01723v1.pdf,2023-05-02,['cs.cl'],2305.01723v1.pdf,"  Stance detection is the identification of an author's beliefs about a subject
from a document. Researchers widely rely on sentiment analysis to accomplish
this. However, recent research has shown that sentiment analysis is only loosely
correlated with stance, if at all. This paper advances methods in text analysis
by precisely defining the task of stance detection, providing a generalized
framework for the task, and then presenting three distinct approaches for
performing stance detection: supervised classification, zero-shot
classification with NLI classifiers, and in-context learning. In doing so, I
demonstrate how zero-shot and few-shot language classifiers can replace human
labelers for a variety of tasks and discuss how their application and
limitations differ from supervised classifiers. Finally, I demonstrate an
application of zero-shot stance detection by replicating Block Jr et al.
(2022).
"
WangLab at MEDIQA-Chat 2023: Clinical Note Generation from  Doctor-Patient Conversations using Large Language Models,John Giorgi,http://arxiv.org/pdf/2305.02220v2.pdf,2023-05-03,"['cs.cl', 'cs.ai', 'cs.lg']",2305.02220v2.pdf,"  This paper describes our submission to the MEDIQA-Chat 2023 shared task for
automatic clinical note generation from doctor-patient conversations. We report
results for two approaches: the first fine-tunes a pre-trained language model
(PLM) on the shared task data, and the second uses few-shot in-context learning
(ICL) with a large language model (LLM). Both achieve high performance as
measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and
first, respectively, of all submissions to the shared task. Expert human
scrutiny indicates that notes generated via the ICL-based approach with GPT-4
are preferred about as often as human-written notes, making it a promising path
toward automated note generation from doctor-patient conversations.
"
Otter: A Multi-Modal Model with In-Context Instruction Tuning,Bo Li,http://arxiv.org/pdf/2305.03726v1.pdf,2023-05-05,"['cs.cv', 'cs.cl']",2305.03726v1.pdf,"  Large language models (LLMs) have demonstrated significant universal
capabilities as few/zero-shot learners in various tasks due to their
pre-training on vast amounts of text data, as exemplified by GPT-3, which
evolved into InstructGPT and ChatGPT, models that effectively follow natural language
instructions to accomplish real-world tasks. In this paper, we propose to
introduce instruction tuning into multi-modal models, motivated by the Flamingo
model's upstream interleaved format pretraining dataset. We adopt a similar
approach to construct our MultI-Modal In-Context Instruction Tuning (MIMIC-IT)
dataset. We then introduce Otter, a multi-modal model based on OpenFlamingo
(open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and
showcasing improved instruction-following ability and in-context learning. We
also optimize OpenFlamingo's implementation for researchers, democratizing the
required training resources from 1$\times$ A100 GPU to 4$\times$ RTX-3090 GPUs,
and integrate both OpenFlamingo and Otter into Huggingface Transformers for
more researchers to incorporate the models into their customized training and
inference pipelines.
"
How Good are Commercial Large Language Models on African Languages?,Jessica Ojo,http://arxiv.org/pdf/2305.06530v1.pdf,2023-05-11,"['cs.cl', 'cs.ai', 'cs.lg']",2305.06530v1.pdf,"  Recent advancements in Natural Language Processing (NLP) have led to the
proliferation of large pretrained language models. These models have been shown
to yield good performance, using in-context learning, even on unseen tasks and
languages. They have also been exposed as commercial APIs as a form of
language-model-as-a-service, with great adoption. However, their performance on
African languages is largely unknown. We present a preliminary analysis of
commercial large language models on two tasks (machine translation and text
classification) across eight African languages, spanning different language
families and geographical areas. Our results suggest that commercial language
models produce below-par performance on African languages. We also find that
they perform better on text classification than machine translation. In
general, our findings present a call-to-action to ensure African languages are
well represented in commercial large language models, given their growing
popularity.
"
Chain-of-Dictionary Prompting Elicits Translation in Large Language  Models,Hongyuan Lu,http://arxiv.org/pdf/2305.06575v3.pdf,2023-05-11,['cs.cl'],2305.06575v3.pdf,"  Large language models (LLMs) have shown surprisingly good performance in
multilingual neural machine translation (MNMT) even when trained without
parallel data. Yet, despite the fact that the amount of training data is
gigantic, they still struggle with translating rare words, particularly for
low-resource languages. Even worse, it is usually unrealistic to retrieve
relevant demonstrations for in-context learning with low-resource languages on
LLMs, which restricts the practical use of LLMs for translation -- how should
we mitigate this problem? To this end, we present a novel method, CoD, which
augments LLMs with prior knowledge with the chains of multilingual dictionaries
for a subset of input words to elicit translation abilities for LLMs. Extensive
experiments indicate that augmenting ChatGPT with CoD elicits large gains by up
to 13x chrF++ points for MNMT (3.08 to 42.63 for English to Serbian written in
Cyrillic script) on FLORES-200 full devtest set. We further demonstrate the
importance of chaining the multilingual dictionaries, as well as the
superiority of CoD to few-shot demonstration for low-resource languages.
"
Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation,Jinglong Gao,http://arxiv.org/pdf/2305.07375v4.pdf,2023-05-12,"['cs.cl', 'cs.ai']",2305.07375v4.pdf,"  Causal reasoning ability is crucial for numerous NLP applications. Despite
the impressive emerging ability of ChatGPT in various NLP tasks, it is unclear
how well ChatGPT performs in causal reasoning. In this paper, we conduct the
first comprehensive evaluation of the ChatGPT's causal reasoning capabilities.
Experiments show that ChatGPT is not a good causal reasoner, but a good causal
explainer. Besides, ChatGPT exhibits serious hallucination in causal reasoning,
possibly due to the reporting biases between causal and non-causal
relationships in natural language, as well as ChatGPT's upgrading processes,
such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT)
techniques can further exacerbate such causal hallucination. Additionally, the
causal reasoning ability of ChatGPT is sensitive to the words used to express
the causal concept in prompts, and close-ended prompts perform better than
open-ended prompts. For events in sentences, ChatGPT excels at capturing
explicit causality rather than implicit causality, and performs better in
sentences with lower event density and smaller lexical distance between events.
The code is available on https://github.com/ArrogantL/ChatGPT4CausalReasoning .
"
AutoTrial: Prompting Language Models for Clinical Trial Design,Zifeng Wang,http://arxiv.org/pdf/2305.11366v2.pdf,2023-05-19,['cs.cl'],2305.11366v2.pdf,"  Clinical trials are critical for drug development. Constructing the
appropriate eligibility criteria (i.e., the inclusion/exclusion criteria for
patient recruitment) is essential for the trial's success. Proper design of
clinical trial protocols should consider similar precedent trials and their
eligibility criteria to ensure sufficient patient coverage. In this paper, we
present a method named AutoTrial to aid the design of clinical eligibility
criteria using language models. It allows (1) controllable generation under
instructions via a hybrid of discrete and neural prompting, (2) scalable
knowledge incorporation via in-context learning, and (3) explicit reasoning
chains to provide rationales for understanding the outputs. Experiments on over
70K clinical trials verify that AutoTrial generates high-quality criteria texts
that are fluent and coherent and with high accuracy in capturing the relevant
clinical concepts to the target trial. It is noteworthy that our method, with a
much smaller parameter size, gains around 60% winning rate against the GPT-3.5
baselines via human evaluations.
"
Cross-Lingual Supervision improves Large Language Models Pre-training,Andrea Schioppa,http://arxiv.org/pdf/2305.11778v1.pdf,2023-05-19,"['cs.cl', 'cs.lg']",2305.11778v1.pdf,"  The recent rapid progress in pre-training Large Language Models has relied on
using self-supervised language modeling objectives like next token prediction
or span corruption. On the other hand, Machine Translation Systems are mostly
trained using cross-lingual supervision that requires aligned data between
source and target languages. We demonstrate that pre-training Large Language
Models on a mixture of a self-supervised Language Modeling objective and the
supervised Machine Translation objective, therefore including cross-lingual
parallel data during pre-training, yields models with better in-context
learning abilities. As pre-training is a very resource-intensive process and a
grid search on the best mixing ratio between the two objectives is
prohibitively expensive, we propose a simple yet effective strategy to learn it
during pre-training.
"
"How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain,  and Cross-domain Settings",Shuaichen Chang,http://arxiv.org/pdf/2305.11853v2.pdf,2023-05-19,['cs.cl'],2305.11853v2.pdf,"  Large language models (LLMs) with in-context learning have demonstrated
remarkable capability in the text-to-SQL task. Previous research has prompted
LLMs with various demonstration-retrieval strategies and intermediate reasoning
steps to enhance the performance of LLMs. However, those works often employ
varied strategies when constructing the prompt text for text-to-SQL inputs,
such as databases and demonstration examples. This leads to a lack of
comparability in both the prompt constructions and their primary contributions.
Furthermore, selecting an effective prompt construction has emerged as a
persistent problem for future research. To address this limitation, we
comprehensively investigate the impact of prompt constructions across various
settings and provide insights for future work.
"
Fact-Checking Complex Claims with Program-Guided Reasoning,Liangming Pan,http://arxiv.org/pdf/2305.12744v1.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.12744v1.pdf,"  Fact-checking real-world claims often requires collecting multiple pieces of
evidence and applying complex multi-step reasoning. In this paper, we present
Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that
decomposes complex claims into simpler sub-tasks that can be solved using a
shared library of specialized functions. We first leverage the in-context
learning ability of large language models to generate reasoning programs to
guide the verification process. Afterward, we execute the program by delegating
each sub-task to the corresponding sub-task handler. This process makes our
model both explanatory and data-efficient, providing clear explanations of its
reasoning process and requiring minimal training data. We evaluate ProgramFC on
two challenging fact-checking datasets and show that it outperforms seven
fact-checking baselines across different settings of evidence availability,
with explicit output programs that benefit human debugging. Our codes and data
are publicly available at https://github.com/mbzuai-nlp/ProgramFC.
"
ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist  Examination,Dongfang Li,http://arxiv.org/pdf/2305.12945v2.pdf,2023-05-22,['cs.cl'],2305.12945v2.pdf,"  As ChatGPT and GPT-4 spearhead the development of Large Language Models
(LLMs), more researchers are investigating their performance across various
tasks. But more research needs to be done on the interpretability capabilities
of LLMs, that is, the ability to generate reasons after an answer has been
given. Existing explanation datasets are mostly English-language general
knowledge questions, which leads to insufficient thematic and linguistic
diversity. To address the language bias and the lack of medical resources in
existing rationale-generation QA datasets, we present ExplainCPE (over 7k instances), a
challenging medical benchmark in Simplified Chinese. We analyzed the errors of
ChatGPT and GPT-4, pointing out the limitations of current LLMs in
understanding text and computational reasoning. During the experiment, we also
found that different LLMs have different preferences for in-context learning.
ExplainCPE presents a significant challenge, but its potential for further
investigation is promising, and it can be used to evaluate the ability of a
model to generate explanations. AI safety and trustworthiness need more
attention, and this work makes the first step to explore the medical
interpretability of LLMs. The dataset is available at
https://github.com/HITsz-TMG/ExplainCPE.
"
MAILEX: Email Event and Argument Extraction,Saurabh Srivastava,http://arxiv.org/pdf/2305.13469v2.pdf,2023-05-22,"['cs.cl', 'cs.ai']",2305.13469v2.pdf,"  In this work, we present the first dataset, MailEx, for performing event
extraction from conversational email threads. To this end, we first proposed a
new taxonomy covering 10 event types and 76 arguments in the email domain. Our
final dataset includes 1.5K email threads and ~4K emails, which are annotated
with a total of ~8K event instances. To understand the task challenges, we
conducted a series of experiments comparing three types of approaches, i.e.,
fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot
in-context learning. Our results showed that the task of email event extraction
is far from being addressed, due to challenges lying in, e.g., extracting
non-continuous, shared trigger spans, extracting non-named entity arguments,
and modeling the email conversational history. Our work thus suggests more
future investigations in this domain-specific event extraction task.
"
Can ChatGPT Detect Intent? Evaluating Large Language Models for Spoken  Language Understanding,Mutian He,http://arxiv.org/pdf/2305.13512v2.pdf,2023-05-22,"['cs.cl', 'cs.ai', 'cs.sd', 'eess.as']",2305.13512v2.pdf,"  Recently, large pretrained language models have demonstrated strong language
understanding capabilities. This is particularly reflected in their zero-shot
and in-context learning abilities on downstream tasks through prompting. To
assess their impact on spoken language understanding (SLU), we evaluate several
such models like ChatGPT and OPT of different sizes on multiple benchmarks. We
verify the emergent ability unique to the largest models as they can reach
intent classification accuracy close to that of supervised models with zero or
few shots on various languages given oracle transcripts. By contrast, the
results for smaller models fitting a single GPU fall far behind. We note that
the error cases often arise from the annotation scheme of the dataset;
responses from ChatGPT are still reasonable. We show, however, that the model
is worse at slot filling, and its performance is sensitive to ASR errors,
suggesting serious challenges for the application of those textual models on
SLU.
"
LogicLLM: Exploring Self-supervised Logic-enhanced Training for Large  Language Models,Fangkai Jiao,http://arxiv.org/pdf/2305.13718v2.pdf,2023-05-23,['cs.cl'],2305.13718v2.pdf,"  Existing efforts to improve logical reasoning ability of language models have
predominantly relied on supervised fine-tuning, hindering generalization to new
domains and/or tasks. The development of Large Language Models (LLMs) has
demonstrated the capacity of compressing abundant knowledge into a single
proxy, enabling them to tackle multiple tasks effectively. Our preliminary
experiments, nevertheless, indicate that LLMs lack capability in logical
reasoning. The performance of LLMs on logical reasoning benchmarks is far
behind the existing state-of-the-art baselines. In this paper, we make the
first attempt to investigate the feasibility of incorporating logical knowledge
through self-supervised post-training, and activating it via in-context
learning, which we term LogicLLM. Specifically, we devise an
auto-regressive objective variant of MERIt and integrate it with two LLM
series, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to
13 billion. The results on two challenging logical reasoning benchmarks
demonstrate the effectiveness of LogicLLM. Besides, we conduct extensive
ablation studies to analyze the key factors in designing logic-oriented proxy
tasks.
"
Make a Choice! Knowledge Base Question Answering with In-Context  Learning,Chuanyuan Tan,http://arxiv.org/pdf/2305.13972v1.pdf,2023-05-23,['cs.cl'],2305.13972v1.pdf,"  Question answering over knowledge bases (KBQA) aims to answer factoid
questions with a given knowledge base (KB). Due to the large scale of the KB,
annotated data cannot cover all fact schemas in the KB, which poses a
challenge to the generalization ability of methods that require a sufficient
amount of annotated data. Recently, LLMs have shown strong few-shot performance
in many NLP tasks. We expect LLMs to help existing methods improve their
generalization ability, especially in low-resource situations. In this paper,
we present McL-KBQA, a framework that incorporates the few-shot ability of LLM
into the KBQA method via ICL-based multiple choice and then improves the
effectiveness of the QA tasks. Experimental results on two KBQA datasets
demonstrate the competitive performance of McL-KBQA with strong improvements in
generalization. We hope to explore a new approach to QA tasks, starting from
KBQA in conjunction with LLMs, that generates answers normatively and correctly
with strong generalization.
"
CTQScorer: Combining Multiple Features for In-context Example Selection  for Machine Translation,Aswanth Kumar,http://arxiv.org/pdf/2305.14105v2.pdf,2023-05-23,"['cs.cl', 'cs.ai']",2305.14105v2.pdf,"  Large language models have demonstrated the capability to perform machine
translation when the input is prompted with a few examples (in-context
learning). Translation quality depends on various features of the selected
examples, such as their quality and relevance, but previous work has
predominantly focused on individual features in isolation. In this paper, we
propose a general framework for combining different features influencing
example selection. We learn a regression model, CTQ Scorer (Contextual
Translation Quality), that selects examples based on multiple features in order
to maximize the translation quality. On multiple language pairs and language
models, we show that CTQ Scorer helps significantly outperform random selection
as well as strong single-factor baselines reported in the literature. We also
see an improvement of over 2.5 COMET points on average with respect to a strong
BM25 retrieval-based baseline.
"
Empowering LLM-based Machine Translation with Cultural Awareness,Binwei Yao,http://arxiv.org/pdf/2305.14328v1.pdf,2023-05-23,['cs.cl'],2305.14328v1.pdf,"  Traditional neural machine translation (NMT) systems often fail to translate
sentences that contain culturally specific information. Most previous NMT
methods have incorporated external cultural knowledge during training, which
requires fine-tuning on low-frequency items specific to the culture. Recent
in-context learning utilizes lightweight prompts to guide large language models
(LLMs) to perform machine translation; however, whether such an approach works
for injecting cultural awareness into machine translation remains
unclear. To this end, we introduce a new data curation pipeline to construct a
culturally relevant parallel corpus, enriched with annotations of
culture-specific entities. Additionally, we design simple but effective
prompting strategies to assist this LLM-based translation. Extensive
experiments show that our approaches can largely help incorporate cultural
knowledge into LLM-based machine translation, outperforming traditional NMT
systems in translating culture-specific sentences.
"
Self-Checker: Plug-and-Play Modules for Fact-Checking with Large  Language Models,Miaoran Li,http://arxiv.org/pdf/2305.14623v1.pdf,2023-05-24,['cs.cl'],2305.14623v1.pdf,"  Fact-checking is an essential task in NLP that is commonly utilized for
validating the factual accuracy of claims. Prior work has mainly focused on
fine-tuning pre-trained language models on specific datasets, which can be
computationally intensive and time-consuming. With the rapid development of
large language models (LLMs), such as ChatGPT and GPT-3, researchers are now
exploring their in-context learning capabilities for a wide range of tasks. In
this paper, we aim to assess the capacity of LLMs for fact-checking by
introducing Self-Checker, a framework comprising a set of plug-and-play modules
that facilitate fact-checking by purely prompting LLMs in an almost zero-shot
setting. This framework provides a fast and efficient way to construct
fact-checking systems in low-resource environments. Empirical results
demonstrate the potential of Self-Checker in utilizing LLMs for fact-checking.
However, there is still significant room for improvement compared to SOTA
fine-tuned models, which suggests that LLM adoption could be a promising
approach for future fact-checking research.
"
ExpertPrompting: Instructing Large Language Models to be Distinguished  Experts,Benfeng Xu,http://arxiv.org/pdf/2305.14688v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14688v1.pdf,"  The answering quality of an aligned large language model (LLM) can be
drastically improved if treated with proper crafting of prompts. In this paper,
we propose ExpertPrompting to elicit the potential of LLMs to answer as
distinguished experts. We first utilize In-Context Learning to automatically
synthesize detailed and customized descriptions of the expert identity for each
specific instruction, and then ask LLMs to provide answers conditioned on such
agent background. Based on this augmented prompting strategy, we produce a new
set of instruction-following data using GPT-3.5, and train a competitive
open-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation
to show that 1) the expert data is of significantly higher quality than vanilla
answers, and 2) ExpertLLaMA outperforms existing open-source counterparts and
achieves 96\% of the original ChatGPT's capability. All data and the
ExpertLLaMA model will be made publicly available at
\url{https://github.com/OFA-Sys/ExpertLLaMA}.
"
Adapting Language Models to Compress Contexts,Alexis Chevalier,http://arxiv.org/pdf/2305.14788v2.pdf,2023-05-24,['cs.cl'],2305.14788v2.pdf,"  Transformer-based language models (LMs) are powerful and widely-applicable
tools, but their usefulness is constrained by a finite context window and the
expensive computational cost of processing long text documents. We propose to
adapt pre-trained LMs into AutoCompressors. These language models are capable
of compressing long contexts into compact summary vectors, which are then
accessible to the model as soft prompts. Summary vectors are trained with an
unsupervised objective, whereby long documents are processed in segments, and
summary vectors from all previous segments are used in language modeling. We
fine-tune OPT and Llama-2 models on sequences of up to 30,720 tokens and show
that AutoCompressors can utilize long contexts to improve perplexity. We
evaluate AutoCompressors on in-context learning by compressing task
demonstrations and find that summary vectors are good substitutes for
plain-text demonstrations, increasing accuracy while reducing inference costs.
Finally, we explore the benefits of pre-computing summary vectors for large
corpora by applying summary vectors to retrieval-augmented language modeling and
a passage re-ranking task. Overall, AutoCompressors emerge as a simple and
inexpensive solution to extend the context window of LMs while speeding up
inference over long contexts.
"
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific  World Models Expressed as Text Games,Ruoyao Wang,http://arxiv.org/pdf/2305.14879v2.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14879v2.pdf,"  In this work, we investigate the capacity of language models to generate
explicit, interpretable, and interactive world models of scientific and
common-sense reasoning tasks. We operationalize this as a task of generating
text games, expressed as hundreds of lines of Python code. To facilitate this
task, we introduce ByteSized32 (Code: github.com/cognitiveailab/BYTESIZED32), a
corpus of 32 reasoning-focused text games totaling 20k lines of Python code. We
empirically demonstrate that GPT-4 can use these games as templates for
single-shot in-context learning, successfully producing runnable games on
unseen topics in 28% of cases. When allowed to self-reflect on program errors,
game runnability substantially increases to 57%. While evaluating simulation
fidelity is labor-intensive, we introduce a suite of automated metrics to
assess game fidelity, technical validity, adherence to task specifications, and
winnability, showing a high degree of agreement with expert human ratings. We
pose this as a challenge task to spur further development at the juncture of
world modeling and code generation.
"
Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge  Conflicts in Event Temporal Reasoning,Tianqing Fang,http://arxiv.org/pdf/2305.14970v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.14970v1.pdf,"  Event temporal reasoning aims at identifying the temporal relations between
two or more events. However, knowledge conflicts arise when there is a mismatch
between the actual temporal relations of events in the context and the prior
knowledge or biases learned by the model. We first systematically define
distinct kinds of bias in event temporal reasoning, which include event
relation prior bias, tense bias, narrative bias, and dependency bias, as
indicators to study knowledge conflicts. To mitigate such event-related
knowledge conflict, we introduce a Counterfactual Data Augmentation based
method that can be applied to both Pre-trained Language Models (PLMs) and Large
Language Models (LLMs) either as additional training data or demonstrations for
In-Context Learning. Experiments suggest the importance of mitigating knowledge
conflicts in event temporal reasoning tasks for reducing hallucination and
highlight the potential of counterfactual data augmentation for improving model
performance.
"
Boosting Cross-lingual Transferability in Multilingual Models via  In-Context Learning,Sunkyoung Kim,http://arxiv.org/pdf/2305.15233v1.pdf,2023-05-24,"['cs.cl', 'cs.ai']",2305.15233v1.pdf,"  Existing cross-lingual transfer (CLT) prompting methods are only concerned
with monolingual demonstration examples in the source language. In this paper,
we propose In-CLT, a novel cross-lingual transfer prompting method that
leverages both source and target languages to construct the demonstration
examples. We conduct comprehensive evaluations on multilingual benchmarks,
focusing on question answering tasks. Experimental results show that In-CLT
prompting not only improves multilingual models' cross-lingual transferability,
but also demonstrates remarkable unseen language generalization ability. In-CLT
prompting, in particular, improves model performance by 10 to 20\% points on
average when compared to prior cross-lingual transfer approaches. We also
observe the surprising performance gain on the other multilingual benchmarks,
especially in reasoning tasks. Furthermore, we investigate the relationship
between lexical similarity and pre-training corpora in terms of the
cross-lingual transfer gap.
"
A Mechanism for Solving Relational Tasks in Transformer Language Models,Jack Merullo,http://arxiv.org/pdf/2305.16130v2.pdf,2023-05-25,"['cs.cl', 'cs.lg']",2305.16130v2.pdf,"  A primary criticism towards language models (LMs) is their inscrutability.
This paper presents evidence that, despite their size and complexity, LMs
sometimes exploit a simple computational mechanism to solve one-to-one
relational tasks (e.g., capital_of(Poland)=Warsaw). We investigate a range of
language model sizes (from 124M parameters to 176B parameters) in an in-context
learning setting, and find that for a variety of tasks (involving capital
cities, upper-casing, and past-tensing) a key part of the mechanism reduces to
a simple linear update typically applied by the feedforward (FFN) networks.
These updates also tend to promote the output of the relation in a
content-independent way (e.g., encoding Poland:Warsaw::China:Beijing),
revealing a predictable pattern that these models take in solving these tasks.
We further show that this mechanism is specific to tasks that require retrieval
from pretraining memory, rather than retrieval from local context. Our results
contribute to a growing body of work on the mechanistic interpretability of
LLMs, and offer reason to be optimistic that, despite the massive and
non-linear nature of the models, the strategies they ultimately use to solve
tasks can sometimes reduce to familiar and even intuitive algorithms.
"
Large Language Models Are Partially Primed in Pronoun Interpretation,Suet-Ying Lam,http://arxiv.org/pdf/2305.16917v1.pdf,2023-05-26,['cs.cl'],2305.16917v1.pdf,"  While a large body of literature suggests that large language models (LLMs)
acquire rich linguistic representations, little is known about whether they
adapt to linguistic biases in a human-like way. The present study probes this
question by asking whether LLMs display human-like referential biases using
stimuli and procedures from real psycholinguistic experiments. Recent
psycholinguistic studies suggest that humans adapt their referential biases
with recent exposure to referential patterns; closely replicating three
relevant psycholinguistic experiments from Johnson & Arnold (2022) in an
in-context learning (ICL) framework, we found that InstructGPT adapts its
pronominal interpretations in response to the frequency of referential patterns
in the local discourse, though in a limited fashion: adaptation was only
observed relative to syntactic but not semantic biases. By contrast, FLAN-UL2
fails to generate meaningful patterns. Our results provide further evidence
that contemporary LLMs' discourse representations are sensitive to syntactic
patterns in the local context but less so to semantic patterns. Our data and
code are available at \url{https://github.com/zkx06111/llm_priming}.
"
A Mechanism for Sample-Efficient In-Context Learning for Sparse  Retrieval Tasks,Jacob Abernethy,http://arxiv.org/pdf/2305.17040v1.pdf,2023-05-26,"['cs.lg', 'cs.cl']",2305.17040v1.pdf,"  We study the phenomenon of \textit{in-context learning} (ICL) exhibited by
large language models, where they can adapt to a new learning task, given a
handful of labeled examples, without any explicit parameter optimization. Our
goal is to explain how a pre-trained transformer model is able to perform ICL
under reasonable assumptions on the pre-training process and the downstream
tasks. We posit a mechanism whereby a transformer can achieve the following:
(a) receive an i.i.d. sequence of examples which have been converted into a
prompt using potentially-ambiguous delimiters, (b) correctly segment the prompt
into examples and labels, (c) infer from the data a \textit{sparse linear
regressor} hypothesis, and finally (d) apply this hypothesis on the given test
example and return a predicted label. We establish that this entire procedure
is implementable using the transformer mechanism, and we give sample complexity
guarantees for this learning framework. Our empirical findings validate the
challenge of segmentation, and we show a correspondence between our posited
mechanisms and observed attention maps for step (c).
"
Augmenting Large Language Model Translators via Translation Memories,Yongyu Mu,http://arxiv.org/pdf/2305.17367v1.pdf,2023-05-27,['cs.cl'],2305.17367v1.pdf,"  Using translation memories (TMs) as prompts is a promising approach to
in-context learning of machine translation models. In this work, we take a step
towards prompting large language models (LLMs) with TMs and making them better
translators. We find that the ability of LLMs to ``understand'' prompts is
indeed helpful for making better use of TMs. Experiments show that the results
of a pre-trained LLM translator can be greatly improved by using high-quality
TM-based prompts. These results are even comparable to those of the
state-of-the-art NMT systems which have access to large-scale in-domain
bilingual data and are well tuned on the downstream tasks.
"
In-Context Analogical Reasoning with Pre-Trained Language Models,Xiaoyang Hu,http://arxiv.org/pdf/2305.17626v2.pdf,2023-05-28,"['cs.ai', 'cs.cl', 'cs.lg']",2305.17626v2.pdf,"  Analogical reasoning is a fundamental capacity of human cognition that allows
us to reason abstractly about novel situations by relating them to past
experiences. While it is thought to be essential for robust reasoning in AI
systems, conventional approaches require significant training and/or
hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by
cognitive science research that has found connections between human language
and analogy-making, we explore the use of intuitive language-based abstractions
to support analogy in AI systems. Specifically, we apply large pre-trained
language models (PLMs) to visual Raven's Progressive Matrices (RPM), a common
relational reasoning test. By simply encoding the perceptual features of the
problem into language form, we find that PLMs exhibit a striking capacity for
zero-shot relational reasoning, exceeding human performance and nearing
supervised vision-based methods. We explore different encodings that vary the
level of abstraction over task features, finding that higher-level abstractions
further strengthen PLMs' analogical reasoning. Our detailed analysis reveals
insights on the role of model complexity, in-context learning, and prior
knowledge in solving RPM tasks.
"
Towards Explainable Conversational Recommender Systems,Shuyu Guo,http://arxiv.org/pdf/2305.18363v1.pdf,2023-05-27,"['cs.ir', 'cs.ai']",2305.18363v1.pdf,"  Explanations in conventional recommender systems have demonstrated benefits
in helping the user understand the rationality of the recommendations and
improving the system's efficiency, transparency, and trustworthiness. In the
conversational environment, multiple contextualized explanations need to be
generated, which poses further challenges for explanations. To better measure
explainability in conversational recommender systems (CRS), we propose ten
evaluation perspectives based on concepts from conventional recommender systems
together with the characteristics of CRS. We assess five existing CRS benchmark
datasets using these metrics and observe the necessity of improving the
explanation quality of CRS. To achieve this, we conduct manual and automatic
approaches to extend these dialogues and construct a new CRS dataset, namely
Explainable Recommendation Dialogues (E-ReDial). It includes 756 dialogues with
over 2,000 high-quality rewritten explanations. We compare two baseline
approaches to perform explanation generation based on E-ReDial. Experimental
results suggest that models trained on E-ReDial can significantly improve
explainability while introducing knowledge into the models can further improve
the performance. GPT-3 in the in-context learning setting can generate more
realistic and diverse movie descriptions. In contrast, T5 trained on E-ReDial
can better generate clear reasons for recommendations based on user
preferences. E-ReDial is available at https://github.com/Superbooming/E-ReDial.
"
Grammar Prompting for Domain-Specific Language Generation with Large  Language Models,Bailin Wang,http://arxiv.org/pdf/2305.19234v3.pdf,2023-05-30,"['cs.cl', 'cs.ai']",2305.19234v3.pdf,"  Large language models (LLMs) can learn to perform a wide range of natural
language tasks from just a handful of in-context examples. However, for
generating strings from highly structured languages (e.g., semantic parsing to
complex domain-specific languages), it is challenging for the LLM to generalize
from just a few exemplars. We propose \emph{grammar prompting}, a simple
approach to enable LLMs to use external knowledge and domain-specific
constraints, expressed through a grammar in Backus--Naur Form (BNF), during
in-context learning. Grammar prompting augments each demonstration example with
a specialized grammar that is minimally sufficient for generating the
particular output example, where the specialized grammar is a subset of the
full DSL grammar. For inference, the LLM first predicts a BNF grammar given a
test input, and then generates the output according to the rules of the
grammar. Experiments demonstrate that grammar prompting can enable LLMs to
perform competitively on a diverse set of DSL generation tasks, including
semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and
SMILES-based molecule generation.
"
Contextual Vision Transformers for Robust Representation Learning,Yujia Bao,http://arxiv.org/pdf/2305.19402v2.pdf,2023-05-30,"['cs.cv', 'cs.ai', 'cs.cl']",2305.19402v2.pdf,"  We introduce Contextual Vision Transformers (ContextViT), a method designed
to generate robust image representations for datasets experiencing shifts in
latent factors across various groups. Derived from the concept of in-context
learning, ContextViT incorporates an additional context token to encapsulate
group-specific information. This integration allows the model to adjust the
image representation in accordance with the group-specific context.
Specifically, for a given input image, ContextViT maps images with identical
group membership into this context token, which is appended to the input image
tokens. Additionally, we introduce a context inference network to predict such
tokens on-the-fly, given a batch of samples from the group. This enables
ContextViT to adapt to new testing distributions during inference time. We
demonstrate the efficacy of ContextViT across a wide range of applications. In
supervised fine-tuning, we show that augmenting pre-trained ViTs with our
proposed context conditioning mechanism results in consistent improvements in
out-of-distribution generalization on iWildCam and FMoW. We also investigate
self-supervised representation learning with ContextViT. Our experiments on the
Camelyon17 pathology imaging benchmark and the JUMP-CP microscopy imaging
benchmark demonstrate that ContextViT excels in learning stable image
featurizations amidst distribution shift, consistently outperforming its ViT
counterpart.
"
Self-Verification Improves Few-Shot Clinical Information Extraction,Zelalem Gero,http://arxiv.org/pdf/2306.00024v1.pdf,2023-05-30,"['cs.cl', 'cs.lg']",2306.00024v1.pdf,"  Extracting patient information from unstructured text is a critical task in
health decision-support and clinical research. Large language models (LLMs)
have shown the potential to accelerate clinical curation via few-shot
in-context learning, in contrast to supervised learning which requires much
more costly human annotations. However, despite drastic advances in modern LLMs
such as GPT-4, they still struggle with issues regarding accuracy and
interpretability, especially in mission-critical domains such as health. Here,
we explore a general mitigation framework using self-verification, which
leverages the LLM to provide provenance for its own extraction and check its
own outputs. This is made possible by the asymmetry between verification and
generation, where the former is often much easier than the latter. Experimental
results show that our method consistently improves accuracy for various LLMs in
standard clinical information extraction tasks. Additionally, self-verification
yields interpretations in the form of a short text span corresponding to each
output, which makes it very efficient for human experts to audit the results,
paving the way towards trustworthy extraction of clinical information in
resource-constrained scenarios. To facilitate future research in this
direction, we release our code and prompts.
"
ChatGPT for Zero-shot Dialogue State Tracking: A Solution or an  Opportunity?,Michael Heck,http://arxiv.org/pdf/2306.01386v1.pdf,2023-06-02,"['cs.cl', 'cs.ai']",2306.01386v1.pdf,"  Recent research on dialogue state tracking (DST) focuses on methods that
allow few- and zero-shot transfer to new domains or schemas. However,
performance gains heavily depend on aggressive data augmentation and
fine-tuning of ever larger language model based architectures. In contrast,
general purpose language models, trained on large amounts of diverse data, hold
the promise of solving any kind of task without task-specific training. We
present preliminary experimental results on the ChatGPT research preview,
showing that ChatGPT achieves state-of-the-art performance in zero-shot DST.
Despite our findings, we argue that properties inherent to general purpose
models limit their ability to replace specialized systems. We further theorize
that the in-context learning capabilities of such models will likely become
powerful tools to support the development of dedicated and dynamic dialogue
state trackers.
"
Prompt to be Consistent is Better than Self-Consistent? Few-Shot and  Zero-Shot Fact Verification with Pre-trained Language Models,Fengzhu Zeng,http://arxiv.org/pdf/2306.02569v1.pdf,2023-06-05,['cs.cl'],2306.02569v1.pdf,"  Few-shot or zero-shot fact verification only relies on a few or no labeled
training examples. In this paper, we propose a novel method called ProToCo, to
\underline{Pro}mpt pre-trained language models (PLMs) \underline{To} be
\underline{Co}nsistent, for improving the factuality assessment capability of
PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair,
ProToCo generates multiple variants of the claim with different relations and
frames a simple consistency mechanism as constraints for making compatible
predictions across these variants. We update PLMs by using parameter-efficient
fine-tuning (PEFT), leading to more accurate predictions in few-shot and
zero-shot fact verification tasks. Our experiments on three public verification
datasets show that ProToCo significantly outperforms state-of-the-art few-shot
fact verification baselines. With a small number of unlabeled instances,
ProToCo also outperforms the strong zero-shot learner T0 on zero-shot
verification. Compared to large PLMs using the in-context learning (ICL) method,
ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in
both few- and zero-shot settings.
"
STEPS: A Benchmark for Order Reasoning in Sequential Tasks,Weizhi Wang,http://arxiv.org/pdf/2306.04441v1.pdf,2023-06-07,['cs.cl'],2306.04441v1.pdf,"  Various human activities can be abstracted into a sequence of actions in
natural text, e.g., cooking, repairing, manufacturing, etc. Such action
sequences heavily depend on the executing order, while disorder in action
sequences leads to failure of further task execution by robots or AI agents.
Therefore, to verify the order reasoning capability of current neural models in
sequential tasks, we propose a challenging benchmark named STEPS. STEPS
involves two subtask settings, focusing on determining the rationality of a given
next step in a recipe and selecting the reasonable step from a multi-choice
question, respectively. We describe the data construction and task
formulations, and benchmark most of the significant Large Language Models (LLMs).
The experimental results demonstrate that 1) the commonsense reasoning of action
orders in sequential tasks is challenging to resolve via zero-shot prompting
or few-shot in-context learning for LLMs; 2) prompting methods still
significantly lag behind tuning-based methods on STEPS.
"
Modular Visual Question Answering via Code Generation,Sanjay Subramanian,http://arxiv.org/pdf/2306.05392v1.pdf,2023-06-08,['cs.cl'],2306.05392v1.pdf,"  We present a framework that formulates visual question answering as modular
code generation. In contrast to prior work on modular approaches to VQA, our
approach requires no additional training and relies on pre-trained language
models (LMs), visual models pre-trained on image-caption pairs, and fifty VQA
examples used for in-context learning. The generated Python programs invoke and
compose the outputs of the visual models using arithmetic and conditional
logic. Our approach improves accuracy on the COVR dataset by at least 3% and on
the GQA dataset by roughly 2% compared to the few-shot baseline that does not
employ code generation.
"
Measuring and Modifying Factual Knowledge in Large Language Models,Pouya Pezeshkpour,http://arxiv.org/pdf/2306.06264v1.pdf,2023-06-09,"['cs.cl', 'cs.lg']",2306.06264v1.pdf,"  Large Language Models (LLMs) store an extensive amount of factual knowledge
obtained from vast collections of text. To effectively utilize these models for
downstream tasks, it is crucial to have reliable methods for measuring their
knowledge. However, existing approaches for knowledge measurement have certain
limitations, and despite recent efforts, they fail to provide accurate
measurements and the necessary insights for modifying the knowledge within
LLMs. In this work, we employ information theory-based measurements to provide
a framework for estimating the factual knowledge contained within large language
models. More specifically, we measure knowledge by analyzing the LLM's
prediction probability distribution before and after instilling the target
knowledge, employing metrics such as entropy and KL-divergence. Introducing our
metrics, we first assess their accuracy in comparison to previous ranking-based
methods, surpassing them by over $35\%$ in a synthetic experiment. Then, we
explore two prominent methods of knowledge instillation, discovering that LLMs
exhibit limitations in capturing new knowledge under specific circumstances for
one of these methods. Lastly, we demonstrate the applicability of our methods
in extracting unlearned and mislearned facts in LLMs through their application
to in-context learning. We make code and data for all methods and experiments
in this paper publicly available.
"
A Survey on Multimodal Large Language Models,Shukang Yin,http://arxiv.org/pdf/2306.13549v1.pdf,2023-06-23,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2306.13549v1.pdf,"  The Multimodal Large Language Model (MLLM) has recently become a rising
research hotspot; it uses powerful Large Language Models (LLMs) as a brain
to perform multimodal tasks. The surprising emergent capabilities of MLLM, such
as writing stories based on images and OCR-free math reasoning, are rare in
traditional methods, suggesting a potential path to artificial general
intelligence. In this paper, we aim to trace and summarize the recent progress
of MLLM. First of all, we present the formulation of MLLM and delineate its
related concepts. Then, we discuss the key techniques and applications,
including Multimodal Instruction Tuning (M-IT), Multimodal In-Context Learning
(M-ICL), Multimodal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning
(LAVR). Finally, we discuss existing challenges and point out promising
research directions. In light of the fact that the era of MLLM has only just
begun, we will keep updating this survey and hope it can inspire more research.
An associated GitHub link collecting the latest papers is available at
https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models.
"
Potential Benefits of Employing Large Language Models in Research in  Moral Education and Development,Hyemin Han,http://arxiv.org/pdf/2306.13805v2.pdf,2023-06-23,"['cs.cy', 'cs.ai']",2306.13805v2.pdf,"  Recently, computer scientists have developed large language models (LLMs) by
training prediction models with large-scale language corpora and human
reinforcements. LLMs have become a promising way to implement artificial
intelligence with accuracy in various fields. Interestingly, recent LLMs
possess emergent functional features that emulate sophisticated human
cognition, especially in-context learning and the chain of thought, which were
unavailable in previous prediction models. In this paper, I will examine how
LLMs might contribute to moral education and development research. To achieve
this goal, I will review the most recently published conference papers and
ArXiv preprints to overview the novel functional features implemented in LLMs.
I also intend to conduct brief experiments with ChatGPT to investigate how LLMs
behave while addressing ethical dilemmas and external feedback. The results
suggest that LLMs might be capable of solving dilemmas based on reasoning and
revising their reasoning process with external input. Furthermore, a
preliminary experimental result from the moral exemplar test may demonstrate
that exemplary stories can elicit moral elevation in LLMs, as they do among
human participants. I will discuss the potential implications of LLMs on
research on moral education and development with the results.
"
DisasterResponseGPT: Large Language Models for Accelerated Plan of  Action Development in Disaster Response Scenarios,Vinicius G. Goecks,http://arxiv.org/pdf/2306.17271v1.pdf,2023-06-29,"['cs.lg', 'i.2.7; j.7; k.4.0']",2306.17271v1.pdf,"  The development of plans of action in disaster response scenarios is a
time-consuming process. Large Language Models (LLMs) offer a powerful solution
to expedite this process through in-context learning. This study presents
DisasterResponseGPT, an algorithm that leverages LLMs to generate valid plans
of action quickly by incorporating disaster response and planning guidelines in
the initial prompt. In DisasterResponseGPT, users input the scenario
description and receive a plan of action as output. The proposed method
generates multiple plans within seconds, which can be further refined following
the user's feedback. Preliminary results indicate that the plans of action
developed by DisasterResponseGPT are comparable to human-generated ones while
offering greater ease of modification in real-time. This approach has the
potential to revolutionize disaster response operations by enabling rapid
updates and adjustments during the plan's execution.
"
Meta-Reasoning: Semantics-Symbol Deconstruction For Large Language  Models,Yiming Wang,http://arxiv.org/pdf/2306.17820v2.pdf,2023-06-30,['cs.cl'],2306.17820v2.pdf,"  Neural-symbolic methods have shown their effectiveness in enhancing the
reasoning abilities of large language models (LLMs). However, existing methods
primarily rely on mapping natural languages to more syntactically complete
formal languages (e.g., Python and SQL). Those approaches necessitate that
reasoning tasks be convertible into programs, which cater more to the computer
execution mindset and deviate from human reasoning habits. To expand the
real-world applicability and flexibility of symbolic methods, we propose
Meta-Reasoning from the scope of linguistics itself. This method empowers LLMs
to deconstruct questions and effectively capture more generalized knowledge
autonomously. We find that Meta-Reasoning achieves improved in-context learning
efficiency, reasoning accuracy, and output stability in six arithmetic and
symbolic reasoning tasks. In particular, when applied to symbolic reasoning
tasks such as Tracking Shuffled Objects, GPT-3 (text-davinci-002) surpasses the
few-shot Chain-of-Thought prompting approach (+37.7%), with 99% accuracy after
a single demonstration of Meta-Reasoning.
"
Assessing the efficacy of large language models in generating accurate  teacher responses,Yann Hicke,http://arxiv.org/pdf/2307.04274v1.pdf,2023-07-09,"['cs.cl', 'cs.lg']",2307.04274v1.pdf,"  (Tack et al., 2023) organized the shared task hosted by the 18th Workshop on
Innovative Use of NLP for Building Educational Applications on generation of
teacher language in educational dialogues. Following the structure of the
shared task, in this study, we attempt to assess the generative abilities of
large language models in providing informative and helpful insights to
students, thereby simulating the role of a knowledgeable teacher. To this end,
we present an extensive evaluation of several benchmarking generative models,
including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and
fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we
fine-tuned the Flan-T5 model using reinforcement learning. Our experimental
findings on the Teacher-Student Chatroom Corpus subset indicate the efficacy of
GPT-4 over other fine-tuned models, measured using BERTScore and DialogRPT.
  We hypothesize that several dataset characteristics, including sampling,
representativeness, and dialog completeness, pose significant challenges to
fine-tuning, thus contributing to the poor generalizability of the fine-tuned
models. Finally, we note the need for these generative models to be evaluated
with a metric that relies not only on dialog coherence and matched language
modeling distribution but also on the model's ability to showcase pedagogical
skills.
"
Unsupervised Calibration through Prior Adaptation for Text  Classification using Large Language Models,Lautaro Estienne,http://arxiv.org/pdf/2307.06713v3.pdf,2023-07-13,"['cs.cl', 'cs.lg']",2307.06713v3.pdf,"  A wide variety of natural language tasks are currently being addressed with
large-scale language models (LLMs). These models are usually trained with a
very large amount of unsupervised text data and adapted to perform a downstream
natural language task using methods like fine-tuning, calibration or in-context
learning. In this work, we propose an approach to adapt the prior class
distribution to perform text classification tasks without the need for labelled
samples and only few in-domain sample queries. The proposed approach treats the
LLM as a black box, adding a stage where the model posteriors are calibrated to
the task. Results show that these methods outperform the un-adapted model for
different numbers of training shots in the prompt, as well as a previous approach
where calibration is performed without using any adaptation data.
"
Reasoning before Responding: Integrating Commonsense-based Causality  Explanation for Empathetic Response Generation,Yahui Fu,http://arxiv.org/pdf/2308.00085v2.pdf,2023-07-28,"['cs.cl', 'cs.ai']",2308.00085v2.pdf,"  Recent approaches to empathetic response generation try to incorporate
commonsense knowledge or reasoning about the causes of emotions to better
understand the user's experiences and feelings. However, these approaches
mainly focus on understanding the causalities of context from the user's
perspective, ignoring the system's perspective. In this paper, we propose a
commonsense-based causality explanation approach for diverse empathetic
response generation that considers both the user's perspective (user's desires
and reactions) and the system's perspective (system's intentions and
reactions). We enhance ChatGPT's ability to reason for the system's perspective
by integrating in-context learning with commonsense knowledge. Then, we
integrate the commonsense-based causality explanation with both ChatGPT and a
T5-based model. Experimental evaluations demonstrate that our method
outperforms other comparable methods on both automatic and human evaluations.
"
Baby's CoThought: Leveraging Large Language Models for Enhanced  Reasoning in Compact Models,Zheyu Zhang,http://arxiv.org/pdf/2308.01684v2.pdf,2023-08-03,['cs.cl'],2308.01684v2.pdf,"  Large Language Models (LLMs) demonstrate remarkable performance on a variety
of natural language understanding (NLU) tasks, primarily due to their
in-context learning ability. This ability could be applied to building babylike
models, i.e. models at small scales, improving training efficiency. In this
paper, we propose a ""CoThought"" pipeline, which efficiently trains smaller
""baby"" language models (BabyLMs) by leveraging the Chain of Thought prompting
of LLMs. Our pipeline restructures a dataset of less than 100M in size using
GPT-3.5-turbo, transforming it into task-oriented, human-readable texts that
are comparable to the school texts for language learners. The BabyLM is then
pretrained on this restructured dataset in a RoBERTa fashion. In evaluations
across 4 benchmarks, our BabyLM outperforms the vanilla RoBERTa in 10
linguistic, NLU, and question-answering tasks by more than 3 points, showing a
superior ability to extract contextual information. These results suggest that
compact LMs pretrained on small, LLM-restructured data can better understand
tasks and achieve improved performance.
"
FLIRT: Feedback Loop In-context Red Teaming,Ninareh Mehrabi,http://arxiv.org/pdf/2308.04265v1.pdf,2023-08-08,['cs.ai'],2308.04265v1.pdf,"  Warning: this paper contains content that may be inappropriate or offensive.
  As generative models become available for public use in various applications,
testing and analyzing vulnerabilities of these models has become a priority.
Here we propose an automatic red teaming framework that evaluates a given model
and exposes its vulnerabilities against unsafe and inappropriate content
generation. Our framework uses in-context learning in a feedback loop to red
team models and trigger them into unsafe content generation. We propose
different in-context attack strategies to automatically learn effective and
diverse adversarial prompts for text-to-image models. Our experiments
demonstrate that compared to baseline approaches, our proposed strategy is
significantly more effective in exposing vulnerabilities in the Stable Diffusion
(SD) model, even when the latter is enhanced with safety features. Furthermore,
we demonstrate that the proposed framework is effective for red teaming
text-to-text models, resulting in a significantly higher toxic response
generation rate compared to previously reported numbers.
"
JEN-1: Text-Guided Universal Music Generation with Omnidirectional  Diffusion Models,Peike Li,http://arxiv.org/pdf/2308.04729v1.pdf,2023-08-09,"['cs.sd', 'cs.ai', 'cs.lg', 'cs.mm', 'eess.as']",2308.04729v1.pdf,"  Music generation has attracted growing interest with the advancement of deep
generative models. However, generating music conditioned on textual
descriptions, known as text-to-music, remains challenging due to the complexity
of musical structures and high sampling rate requirements. Despite the task's
significance, prevailing generative models exhibit limitations in music
quality, computational efficiency, and generalization. This paper introduces
JEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is a
diffusion model incorporating both autoregressive and non-autoregressive
training. Through in-context learning, JEN-1 performs various generation tasks
including text-guided music generation, music inpainting, and continuation.
Evaluations demonstrate JEN-1's superior performance over state-of-the-art
methods in text-music alignment and music quality while maintaining
computational efficiency. Our demos are available at
http://futureverse.com/research/jen/demos/jen1
"
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language  Models,Bilgehan Sel,http://arxiv.org/pdf/2308.10379v2.pdf,2023-08-20,"['cs.cl', 'cs.ai']",2308.10379v2.pdf,"  Current literature, aiming to surpass the ""Chain-of-Thought"" approach, often
resorts to an external modus operandi involving halting, modifying, and then
resuming the generation process to boost Large Language Models' (LLMs)
reasoning capacities. This mode escalates the number of query requests, leading
to increased costs, memory, and computational overheads. Addressing this, we
propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through
algorithmic reasoning pathways, pioneering a new mode of in-context learning.
By employing algorithmic examples, we exploit the innate recurrence dynamics of
LLMs, expanding their idea exploration with merely one or a few queries. Our
technique outperforms earlier single-query methods and stands on par with a
recent multi-query strategy that employs an extensive tree search algorithm.
Intriguingly, our results suggest that instructing an LLM using an algorithm
can lead to performance surpassing that of the algorithm itself, hinting at
LLM's inherent ability to weave its intuition into optimized searches. We probe
into the underpinnings of our method's efficacy and its nuances in application.
"
Building Emotional Support Chatbots in the Era of LLMs,Zhonghua Zheng,http://arxiv.org/pdf/2308.11584v1.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.11584v1.pdf,"  The integration of emotional support into various conversational scenarios
presents profound societal benefits, such as social interactions, mental health
counseling, and customer service. However, there are unsolved challenges that
hinder real-world applications in this field, including limited data
availability and the absence of well-accepted model training paradigms. This
work endeavors to navigate these challenges by harnessing the capabilities of
Large Language Models (LLMs). We introduce an innovative methodology that
synthesizes human insights with the computational prowess of LLMs to curate an
extensive emotional support dialogue dataset. Our approach is initiated with a
meticulously designed set of dialogues spanning diverse scenarios as generative
seeds. By utilizing the in-context learning potential of ChatGPT, we
recursively generate an ExTensible Emotional Support dialogue dataset, named
ExTES. Following this, we deploy advanced tuning techniques on the LLaMA model,
examining the impact of diverse training strategies, ultimately yielding an LLM
meticulously optimized for emotional support interactions. An exhaustive
assessment of the resultant model showcases its proficiency in offering
emotional support, marking a pivotal step in the realm of emotional support
bots and paving the way for subsequent research and implementations.
"
Diffusion Language Models Can Perform Many Tasks with Scaling and  Instruction-Finetuning,Jiasheng Ye,http://arxiv.org/pdf/2308.12219v2.pdf,2023-08-23,"['cs.cl', 'cs.ai', 'cs.lg']",2308.12219v2.pdf,"  The recent surge of generative AI has been fueled by the generative power of
diffusion probabilistic models and the scalable capabilities of large language
models. Despite their potential, it remains elusive whether diffusion language
models can solve general language tasks comparable to their autoregressive
counterparts. This paper demonstrates that scaling diffusion models w.r.t.
data, sizes, and tasks can effectively make them strong language learners. We
build competent diffusion language models at scale by first acquiring knowledge
from massive data via masked language modeling pretraining thanks to their
intrinsic connections. We then reprogram pretrained masked language models into
diffusion language models via diffusive adaptation, wherein task-specific
finetuning and instruction finetuning are explored to unlock their versatility
in solving general language tasks. Experiments show that scaling diffusion
language models consistently improves performance across downstream language
tasks. We further discover that instruction finetuning can elicit zero-shot and
few-shot in-context learning abilities that help tackle many unseen tasks by
following natural language instructions, and show promise in advanced and
challenging abilities such as reasoning.
"
Large Language Model as Autonomous Decision Maker,Yining Ye,http://arxiv.org/pdf/2308.12519v1.pdf,2023-08-24,['cs.cl'],2308.12519v1.pdf,"  While large language models (LLMs) exhibit impressive language understanding
and in-context learning abilities, their decision-making ability still heavily
relies on the guidance of task-specific expert knowledge when solving
real-world tasks. To unleash the potential of LLMs as autonomous decision
makers, this paper presents an approach JuDec to endow LLMs with the
self-judgment ability, enabling LLMs to achieve autonomous judgment and
exploration for decision making. Specifically, in JuDec, an Elo-based
Self-Judgment Mechanism is designed to assign Elo scores to decision steps to
judge their values and utilities via pairwise comparisons between two solutions
and then guide the decision-searching process toward the optimal solution
accordingly. Experimental results on the ToolBench dataset demonstrate JuDec's
superiority over baselines, achieving over 10% improvement in Pass Rate on
diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT
API calls), highlighting its effectiveness and efficiency.
"
Breaking the Bank with ChatGPT: Few-Shot Text Classification for Finance,Lefteris Loukas,http://arxiv.org/pdf/2308.14634v1.pdf,2023-08-28,"['cs.cl', 'cs.ai', 'cs.lg', 'q-fin.cp']",2308.14634v1.pdf,"  We propose the use of conversational GPT models for easy and quick few-shot
text classification in the financial domain using the Banking77 dataset. Our
approach involves in-context learning with GPT-3.5 and GPT-4, which minimizes
the technical expertise required and eliminates the need for expensive GPU
computing while yielding quick and accurate results. Additionally, we fine-tune
other pre-trained, masked language models with SetFit, a recent contrastive
learning technique, to achieve state-of-the-art results both in full-data and
few-shot settings. Our findings show that querying GPT-3.5 and GPT-4 can
outperform fine-tuned, non-generative models even with fewer examples. However,
subscription fees associated with these solutions may be considered costly for
small organizations. Lastly, we find that generative models perform better on
the given task when shown representative samples selected by a human expert
rather than when shown random ones. We conclude that a) our proposed methods
offer a practical solution for few-shot tasks in datasets with limited label
availability, and b) our state-of-the-art results can inspire future work in
the area.
"
Gender-specific Machine Translation with Large Language Models,Eduardo Sánchez,http://arxiv.org/pdf/2309.03175v1.pdf,2023-09-06,['cs.cl'],2309.03175v1.pdf,"  Decoder-only Large Language Models (LLMs) have demonstrated potential in
machine translation (MT), albeit with performance slightly lagging behind
traditional encoder-decoder Neural Machine Translation (NMT) systems. However,
LLMs offer a unique advantage: the ability to control the properties of the
output through prompts. In this study, we harness this flexibility to explore
LLaMa's capability to produce gender-specific translations for languages with
grammatical gender. Our results indicate that LLaMa can generate
gender-specific translations with competitive accuracy and gender bias
mitigation when compared to NLLB, a state-of-the-art multilingual NMT system.
Furthermore, our experiments reveal that LLaMa's translations are robust,
showing significant performance drops when evaluated against opposite-gender
references in gender-ambiguous datasets but maintaining consistency in less
ambiguous contexts. This research provides insights into the potential and
challenges of using LLMs for gender-specific translations and highlights the
importance of in-context learning to elicit new tasks in LLMs.
"
Improving Open Information Extraction with Large Language Models: A  Study on Demonstration Uncertainty,Chen Ling,http://arxiv.org/pdf/2309.03433v1.pdf,2023-09-07,['cs.cl'],2309.03433v1.pdf,"  Open Information Extraction (OIE) task aims at extracting structured facts
from unstructured text, typically in the form of (subject, relation, object)
triples. Despite the potential of large language models (LLMs) like ChatGPT as
a general task solver, they lag behind state-of-the-art (supervised) methods in
OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant
context from relevant relations and generate structured output due to the
restrictions on fine-tuning the model. Second, LLMs generate responses
autoregressively based on probability, which makes the predicted relations lack
confidence. In this paper, we assess the capabilities of LLMs in improving the
OIE task. Particularly, we propose various in-context learning strategies to
enhance LLM's instruction-following ability and a demonstration uncertainty
quantification module to enhance the confidence of the generated relations. Our
experiments on three OIE benchmark datasets show that our approach holds its
own against established supervised methods, both quantitatively and
qualitatively.
"
EPA: Easy Prompt Augmentation on Large Language Models via Multiple  Sources and Multiple Targets,Hongyuan Lu,http://arxiv.org/pdf/2309.04725v1.pdf,2023-09-09,['cs.cl'],2309.04725v1.pdf,"  Large language models (LLMs) have shown promising performance on various NLP
tasks via task prompting, and their performance can be further improved by
appending task demonstrations to the head of the prompt. Usually, better
performance can be achieved with more demonstrations. However, asking the users
to write the demonstrations can be cumbersome. As a simple yet cost-effective
workaround, this paper proposes a novel method called EPA (\textbf{E}asy
\textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considers
augmenting prompts via demonstrations, we name it EPA as the name EDA is
already taken by a well-known NLP method \citep{wei-zou-2019-eda}.} that
effectively minimizes user efforts in writing demonstrations while improving
the model performance at the same time. EPA achieves these goals by
automatically augmenting the demonstrations with multiple sources/targets,
where each of them paraphrases each other. This is well motivated as augmenting
data via paraphrasing effectively improves neural language models. EPA thus
employs paraphrasing as an augmentation method for in-context learning.
Extensive experiments indicate that EPA effectively improves both NLU and NLG
tasks, ranging from natural language inference to machine translation across
tens of languages.\footnote{Code and data will be released upon
publication.}
"
CONVERSER: Few-Shot Conversational Dense Retrieval with Synthetic Data  Generation,Chao-Wei Huang,http://arxiv.org/pdf/2309.06748v1.pdf,2023-09-13,"['cs.cl', 'cs.ir']",2309.06748v1.pdf,"  Conversational search provides a natural interface for information retrieval
(IR). Recent approaches have demonstrated promising results in applying dense
retrieval to conversational IR. However, training dense retrievers requires
large amounts of in-domain paired data. This hinders the development of
conversational dense retrievers, as abundant in-domain conversations are
expensive to collect. In this paper, we propose CONVERSER, a framework for
training conversational dense retrievers with at most 6 examples of in-domain
dialogues. Specifically, we utilize the in-context learning capability of large
language models to generate conversational queries given a passage in the
retrieval corpus. Experimental results on conversational retrieval benchmarks
OR-QuAC and TREC CAsT 19 show that the proposed CONVERSER achieves comparable
performance to fully-supervised models, demonstrating the effectiveness of our
proposed framework in few-shot conversational dense retrieval. All source code
and generated datasets are available at https://github.com/MiuLab/CONVERSER
"
Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer,Yongqi Wang,http://arxiv.org/pdf/2309.07566v1.pdf,2023-09-14,"['cs.sd', 'cs.ai', 'eess.as']",2309.07566v1.pdf,"  Direct speech-to-speech translation (S2ST) with discrete self-supervised
representations has achieved remarkable accuracy, but is unable to preserve the
speaker timbre of the source speech during translation. Meanwhile, the scarcity
of high-quality speaker-parallel data poses a challenge for learning style
transfer between source and target speech. We propose an S2ST framework with an
acoustic language model based on discrete units from a self-supervised model
and a neural codec for style transfer. The acoustic language model leverages
self-supervised in-context learning, acquiring the ability for style transfer
without relying on any speaker-parallel data, thereby overcoming the issue of
data scarcity. By using extensive training data, our model achieves zero-shot
cross-lingual style transfer on previously unseen source languages. Experiments
show that our model generates translated speech with high fidelity and style
similarity. Audio samples are available at http://stylelm.github.io/ .
"
"Bridging Topic, Domain, and Language Shifts: An Evaluation of  Comprehensive Out-of-Distribution Scenarios",Andreas Waldis,http://arxiv.org/pdf/2309.08316v1.pdf,2023-09-15,['cs.cl'],2309.08316v1.pdf,"  Language models (LMs) excel in in-distribution (ID) scenarios where train and
test data are independent and identically distributed. However, their
performance often degrades in real-world applications like argument mining.
Such degradation happens when new topics emerge, or other text domains and
languages become relevant. To assess LMs' generalization abilities in such
out-of-distribution (OOD) scenarios, we simulate such distribution shifts by
deliberately withholding specific instances for testing, such as those from the social
media domain or the topic Solar Energy.
  Unlike prior studies focusing on specific shifts and metrics in isolation, we
comprehensively analyze OOD generalization. We define three metrics to pinpoint
generalization flaws and propose eleven classification tasks covering topic,
domain, and language shifts. Overall, we find superior performance of
prompt-based fine-tuning, notably when train and test splits primarily differ
semantically. Simultaneously, in-context learning is more effective than
prompt-based or vanilla fine-tuning for tasks whose training data embodies heavy
discrepancies in label distribution compared to testing data. This reveals a
crucial drawback of gradient-based learning: it biases LMs regarding such
structural obstacles.
"
Neural Machine Translation Models Can Learn to be Few-shot Learners,Raphael Reinauer,http://arxiv.org/pdf/2309.08590v1.pdf,2023-09-15,['cs.cl'],2309.08590v1.pdf,"  The emergent ability of Large Language Models to use a small number of
examples to learn to perform in novel domains and tasks is also called
in-context learning (ICL). In this work, we show that a much smaller model can
be trained
to perform ICL by fine-tuning towards a specialized training objective,
exemplified on the task of domain adaptation for neural machine translation.
With this capacity for ICL, the model can take advantage of relevant few-shot
examples to adapt its output towards the domain. We compare the quality of this
domain adaptation to traditional supervised techniques and ICL with a
40B-parameter Large Language Model. Our approach allows efficient batch
inference on a mix of domains and outperforms state-of-the-art baselines in
terms of both translation quality and immediate adaptation rate, i.e. the
ability to reproduce a specific term after being shown a single example.
"
Few-Shot Adaptation for Parsing Contextual Utterances with LLMs,Kevin Lin,http://arxiv.org/pdf/2309.10168v1.pdf,2023-09-18,['cs.cl'],2309.10168v1.pdf,"  We evaluate the ability of semantic parsers based on large language models
(LLMs) to handle contextual utterances. In real-world settings, there typically
exists only a limited number of annotated contextual utterances due to
annotation cost, resulting in an imbalance compared to non-contextual
utterances. Therefore, parsers must adapt to contextual utterances with a few
training examples. We examine four major paradigms for doing so in
conversational semantic parsing, i.e., Parse-with-Utterance-History,
Parse-with-Reference-Program, Parse-then-Resolve, and Rewrite-then-Parse. To
facilitate such cross-paradigm comparisons, we construct
SMCalFlow-EventQueries, a subset of contextual examples from SMCalFlow with
additional annotations. Experiments with in-context learning and fine-tuning
suggest that Rewrite-then-Parse is the most promising paradigm when
holistically considering parsing accuracy, annotation cost, and error types.
"
Toward Unified Controllable Text Generation via Regular Expression  Instruction,Xin Zheng,http://arxiv.org/pdf/2309.10447v2.pdf,2023-09-19,"['cs.cl', 'cs.ai']",2309.10447v2.pdf,"  Controllable text generation is a fundamental aspect of natural language
generation, with numerous methods proposed for different constraint types.
However, these approaches often require significant architectural or decoding
modifications, making them challenging to apply to additional constraints or
resolve different constraint combinations. To address this, our paper
introduces Regular Expression Instruction (REI), which utilizes an
instruction-based mechanism to fully exploit regular expressions' advantages to
uniformly model diverse constraints. Specifically, our REI supports all popular
fine-grained controllable generation constraints, i.e., lexical, positional,
and length, as well as their complex combinations, via regular expression-style
instructions. Our method only requires fine-tuning on medium-scale language
models or few-shot, in-context learning on large language models, and requires
no further adjustment when applied to various constraint combinations.
Experiments demonstrate that our straightforward approach yields high success
rates and adaptability to various constraints while maintaining competitiveness
in automatic metrics and outperforming most previous baselines.
"
Language Modeling Is Compression,Grégoire Delétang,http://arxiv.org/pdf/2309.10668v1.pdf,2023-09-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.it', 'math.it']",2309.10668v1.pdf,"  It has long been established that predictive models can be transformed into
lossless compressors and vice versa. Incidentally, in recent years, the machine
learning community has focused on training increasingly large and powerful
self-supervised (language) models. Since these large language models exhibit
impressive predictive capabilities, they are well-positioned to be strong
compressors. In this work, we advocate for viewing the prediction problem
through the lens of compression and evaluate the compression capabilities of
large (foundation) models. We show that large language models are powerful
general-purpose predictors and that the compression viewpoint provides novel
insights into scaling laws, tokenization, and in-context learning. For example,
Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to
43.4% and LibriSpeech samples to 16.4% of their raw size, beating
domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively.
Finally, we show that the prediction-compression equivalence allows us to use
any compressor (like gzip) to build a conditional generative model.
"
Language-Oriented Communication with Semantic Coding and Knowledge  Distillation for Text-to-Image Generation,Hyelin Nam,http://arxiv.org/pdf/2309.11127v1.pdf,2023-09-20,"['eess.sp', 'cs.ai', 'cs.cl']",2309.11127v1.pdf,"  By integrating recent advances in large language models (LLMs) and generative
models into the emerging semantic communication (SC) paradigm, in this article
we put forward a novel framework of language-oriented semantic communication
(LSC). In LSC, machines communicate using human language messages that can be
interpreted and manipulated via natural language processing (NLP) techniques
for SC efficiency. To demonstrate LSC's potential, we introduce three
innovative algorithms: 1) semantic source coding (SSC) which compresses a text
prompt into its key head words capturing the prompt's syntactic essence while
maintaining their appearance order to keep the prompt's context; 2) semantic
channel coding (SCC) that improves robustness against errors by substituting
head words with their lengthier synonyms; and 3) semantic knowledge
distillation (SKD) that produces listener-customized prompts via in-context
learning of the listener's language style. In a communication task for progressive
text-to-image generation, the proposed methods achieve higher perceptual
similarities with fewer transmissions while enhancing robustness in noisy
communication channels.
"
Towards Effective Disambiguation for Machine Translation with Large  Language Models,Vivek Iyer,http://arxiv.org/pdf/2309.11668v2.pdf,2023-09-20,['cs.cl'],2309.11668v2.pdf,"  Resolving semantic ambiguity has long been recognised as a central challenge
in the field of Machine Translation. Recent work on benchmarking translation
performance on ambiguous sentences has exposed the limitations of conventional
Neural Machine Translation (NMT) systems, which fail to handle many such cases.
Large language models (LLMs) have emerged as a promising alternative,
demonstrating comparable performance to traditional NMT models while
introducing new paradigms for controlling the target outputs. In this paper, we
study the capabilities of LLMs to translate ""ambiguous sentences"" - i.e. those
containing highly polysemous words and/or rare word senses. We also propose two
ways to improve their disambiguation capabilities, through a) in-context
learning and b) fine-tuning on carefully curated ambiguous datasets.
Experiments show that our methods can match or outperform state-of-the-art
systems such as DeepL and NLLB in four out of five language directions. Our
research provides valuable insights into effectively adapting LLMs to become
better disambiguators during Machine Translation. We release our curated
disambiguation corpora and resources at
https://data.statmt.org/ambiguous-europarl.
"
In-context Interference in Chat-based Large Language Models,Eric Nuertey Coleman,http://arxiv.org/pdf/2309.12727v1.pdf,2023-09-22,"['cs.ai', 'cs.cl']",2309.12727v1.pdf,"  Large language models (LLMs) have had a huge impact on society due to their
impressive capabilities and vast knowledge of the world. Various applications
and tools have been created that allow users to interact with these models in a
black-box scenario. However, one limitation of this scenario is that users
cannot modify the internal knowledge of the model, and the only way to add or
modify internal knowledge is by explicitly mentioning it to the model during
the current interaction. This learning process is called in-context training,
and it refers to training that is confined to the user's current session or
context. In-context learning has significant applications, but also has
limitations that are seldom studied. In this paper, we present a study that
shows how the model can suffer from interference between information that
continually flows in the context, causing it to forget previously learned
knowledge, which can reduce the model's performance. Along with showing the
problem, we propose an evaluation benchmark based on the bAbI dataset.
"
Affect Recognition in Conversations Using Large Language Models,Shutong Feng,http://arxiv.org/pdf/2309.12881v1.pdf,2023-09-22,['cs.cl'],2309.12881v1.pdf,"  Affect recognition, encompassing emotions, moods, and feelings, plays a
pivotal role in human communication. In the realm of conversational artificial
intelligence (AI), the ability to discern and respond to human affective cues
is a critical factor for creating engaging and empathetic interactions. This
study delves into the capacity of large language models (LLMs) to recognise
human affect in conversations, with a focus on both open-domain chit-chat
dialogues and task-oriented dialogues. Leveraging three diverse datasets,
namely IEMOCAP, EmoWOZ, and DAIC-WOZ, covering a spectrum of dialogues from
casual conversations to clinical interviews, we evaluated and compared LLMs'
performance in affect recognition. Our investigation explores the zero-shot and
few-shot capabilities of LLMs through in-context learning (ICL) as well as
their model capacities through task-specific fine-tuning. Additionally, this
study takes into account the potential impact of automatic speech recognition
(ASR) errors on LLM predictions. With this work, we aim to shed light on the
extent to which LLMs can replicate human-like affect recognition capabilities
in conversations.
"
Calibrating LLM-Based Evaluator,Yuxuan Liu,http://arxiv.org/pdf/2309.13308v1.pdf,2023-09-23,['cs.cl'],2309.13308v1.pdf,"  Recent advancements in large language models (LLMs) on language modeling and
emergent capabilities make them a promising reference-free evaluator of natural
language generation quality, and a competent alternative to human evaluation.
However, hindered by closed-source availability or the high computational
demand of hosting and tuning, there is little established practice for further
calibrating an off-the-shelf
LLM-based evaluator towards better human alignment. In this work, we propose
AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate
and align an LLM-based evaluator toward human preference. Instead of explicitly
modeling human preferences, we first implicitly encompass them within a set of
human labels. Then, an initial set of scoring criteria is drafted by the
language model itself, leveraging in-context learning on different few-shot
examples. To further calibrate this set of criteria, we select the best
performers and re-draft them with self-refinement. Our experiments on multiple
text quality evaluation datasets illustrate a significant improvement in
correlation with expert evaluation through calibration. Our comprehensive
qualitative analysis conveys insightful intuitions and observations on the
essence of effective scoring criteria.
"
MedEdit: Model Editing for Medical Question Answering with External  Knowledge Bases,Yucheng Shi,http://arxiv.org/pdf/2309.16035v1.pdf,2023-09-27,"['cs.cl', 'cs.ai']",2309.16035v1.pdf,"  Large Language Models (LLMs), although powerful in general domains, often
perform poorly on domain-specific tasks like medical question answering (QA).
Moreover, they tend to function as ""black-boxes,"" making it challenging to
modify their behavior. Addressing this, our study delves into model editing
utilizing in-context learning, aiming to improve LLM responses without the need
for fine-tuning or retraining. Specifically, we propose a comprehensive
retrieval strategy to extract medical facts from an external knowledge base,
and then we incorporate them into the query prompt for the LLM. Focusing on
medical QA using the MedQA-SMILE dataset, we evaluate the impact of different
retrieval models and the number of facts provided to the LLM. Notably, our
edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%.
This work underscores the potential of model editing to enhance LLM
performance, offering a practical approach to mitigate the challenges of
black-box LLMs.
"
A Prefrontal Cortex-inspired Architecture for Planning in Large Language  Models,Taylor Webb,http://arxiv.org/pdf/2310.00194v1.pdf,2023-09-30,"['cs.ai', 'cs.ne']",2310.00194v1.pdf,"  Large language models (LLMs) demonstrate impressive performance on a wide
variety of tasks, but they often struggle with tasks that require multi-step
reasoning or goal-directed planning. To address this, we take inspiration from
the human brain, in which planning is accomplished via the recurrent
interaction of specialized modules in the prefrontal cortex (PFC). These
modules perform functions such as conflict monitoring, state prediction, state
evaluation, task decomposition, and task coordination. We find that LLMs are
sometimes capable of carrying out these functions in isolation, but struggle to
autonomously coordinate them in the service of a goal. Therefore, we propose a
black box architecture with multiple LLM-based (GPT-4) modules. The
architecture improves planning through the interaction of specialized
PFC-inspired modules that break down a larger problem into multiple brief
automated calls to the LLM. We evaluate the combined architecture on two
challenging planning tasks -- graph traversal and Tower of Hanoi -- finding
that it yields significant improvements over standard LLM methods (e.g.,
zero-shot prompting or in-context learning). These results demonstrate the
benefit of utilizing knowledge from cognitive neuroscience to improve planning
in LLMs.
"
Towards LLM-based Fact Verification on News Claims with a Hierarchical  Step-by-Step Prompting Method,Xuan Zhang,http://arxiv.org/pdf/2310.00305v1.pdf,2023-09-30,['cs.cl'],2310.00305v1.pdf,"  While large pre-trained language models (LLMs) have shown their impressive
capabilities in various NLP tasks, they are still under-explored in the
misinformation domain. In this paper, we examine LLMs with in-context learning
(ICL) for news claim verification, and find that only with 4-shot demonstration
examples, the performance of several prompting methods can be comparable with
previous supervised models. To further boost performance, we introduce a
Hierarchical Step-by-Step (HiSS) prompting method which directs LLMs to
separate a claim into several subclaims and then verify each of them via
multiple question-answering steps progressively. Experimental results on two
public misinformation datasets show that HiSS prompting outperforms the
state-of-the-art fully-supervised approach and strong few-shot ICL-enabled
baselines.
"
Text Data Augmentation in Low-Resource Settings via Fine-Tuning of Large  Language Models,Jean Kaddour,http://arxiv.org/pdf/2310.01119v1.pdf,2023-10-02,"['cs.cl', 'cs.lg']",2310.01119v1.pdf,"  The in-context learning ability of large language models (LLMs) enables them
to generalize to novel downstream tasks with relatively few labeled examples.
However, they require enormous computational resources to be deployed.
Alternatively, smaller models can solve specific tasks if fine-tuned with
enough labeled examples. These examples, however, are expensive to obtain. In
pursuit of the best of both worlds, we study the annotation and generation of
fine-tuning training data via fine-tuned teacher LLMs to improve the downstream
performance of much smaller models. In four text classification and two text
generation tasks, we find that both data generation and annotation dramatically
improve the respective downstream model's performance, occasionally
necessitating only a minor fraction of the original training dataset.
"
Fool Your (Vision and) Language Model With Embarrassingly Simple  Permutations,Yongshuo Zong,http://arxiv.org/pdf/2310.01651v1.pdf,2023-10-02,['cs.lg'],2310.01651v1.pdf,"  Large language and vision-language models are rapidly being deployed in
practice thanks to their impressive capabilities in instruction following,
in-context learning, and so on. This raises an urgent need to carefully analyse
their robustness so that stakeholders can understand if and when such models
are trustworthy enough to be relied upon in any given application. In this
paper, we highlight a specific vulnerability in popular models, namely
permutation sensitivity in multiple-choice question answering (MCQA).
Specifically, we show empirically that popular models are vulnerable to
adversarial permutation in answer sets for multiple-choice prompting, which is
surprising as models should ideally be as invariant to prompt permutation as
humans are. These vulnerabilities persist across various model sizes, and exist
in very recent language and vision-language models. Code is available at
\url{https://github.com/ys-zong/FoolyourVLLMs}.
"
Improving Automatic VQA Evaluation Using Large Language Models,Oscar Mañas,http://arxiv.org/pdf/2310.02567v1.pdf,2023-10-04,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2310.02567v1.pdf,"  Eight years after the visual question answering (VQA) task was proposed, accuracy
remains the primary metric for automatic evaluation. VQA Accuracy has been
effective so far in the IID evaluation setting. However, our community is
undergoing a shift towards open-ended generative models and OOD evaluation. In
this new paradigm, the existing VQA Accuracy metric is overly stringent and
underestimates the performance of VQA systems. Thus, there is a need to develop
more robust automatic VQA metrics that serve as a proxy for human judgment. In
this work, we propose to leverage the in-context learning capabilities of
instruction-tuned large language models (LLMs) to build a better VQA metric. We
formulate VQA evaluation as an answer-rating task where the LLM is instructed
to score the accuracy of a candidate answer given a set of reference answers.
We demonstrate the proposed metric better correlates with human judgment
compared to existing metrics across several VQA models and benchmarks. We hope
wide adoption of our metric will contribute to better estimating the research
progress on the VQA task.
"
A Language-Agent Approach to Formal Theorem-Proving,Amitayush Thakur,http://arxiv.org/pdf/2310.04353v1.pdf,2023-10-06,"['cs.lg', 'cs.ai', 'cs.lo', 'cs.pl']",2310.04353v1.pdf,"  Language agents, which use a large language model (LLM) capable of in-context
learning to interact with an external environment, have recently emerged as a
promising approach to control tasks. We present the first language-agent
approach to formal theorem-proving. Our method, COPRA, uses a high-capacity,
black-box LLM (GPT-4) as part of a policy for a stateful backtracking search.
During the search, the policy can select proof tactics and retrieve lemmas and
definitions from an external database. Each selected tactic is executed in the
underlying proof framework, and the execution feedback is used to build the
prompt for the next policy invocation. The search also tracks selected
information from its history and uses it to reduce hallucinations and
unnecessary LLM queries.
  We evaluate COPRA on the miniF2F benchmark for Lean and a set of Coq tasks
from the Compcert project. On these benchmarks, COPRA is significantly better
than one-shot invocations of GPT-4, as well as state-of-the-art models
fine-tuned on proof data, at finding correct proofs quickly.
"
Guideline Learning for In-context Information Extraction,Chaoxu Pang,http://arxiv.org/pdf/2310.05066v2.pdf,2023-10-08,"['cs.cl', 'cs.lg']",2310.05066v2.pdf,"  Large language models (LLMs) can perform a new task by merely conditioning on
task instructions and a few input-output examples, without optimizing any
parameters. This is called In-Context Learning (ICL). In-context Information
Extraction (IE) has recently garnered attention in the research community.
However, the performance of In-context IE generally lags behind the
state-of-the-art supervised expert models. We highlight a key reason for this
shortfall: underspecified task description. The limited-length context
struggles to thoroughly express the intricate IE task instructions and various
edge cases, leading to misalignment in task comprehension with humans. In this
paper, we propose a Guideline Learning (GL) framework for In-context IE which
reflectively learns and follows guidelines. During the learning phase, GL
automatically synthesizes a set of guidelines based on a few error cases, and
during inference, GL retrieves helpful guidelines for better ICL. Moreover, we
propose a self-consistency-based active learning method to enhance the
efficiency of GL. Experiments on event extraction and relation extraction show
that GL can significantly improve the performance of in-context IE.
"
Harnessing the Power of Large Language Models for Empathetic Response  Generation: Empirical Investigations and Improvements,Yushan Qian,http://arxiv.org/pdf/2310.05140v1.pdf,2023-10-08,"['cs.cl', 'cs.ai']",2310.05140v1.pdf,"  Empathetic dialogue is an indispensable part of building harmonious social
relationships and contributes to the development of a helpful AI. Previous
approaches are mainly based on fine-tuning small-scale language models. With the
advent of ChatGPT, the application effect of large language models (LLMs) in
this field has attracted great attention. This work empirically investigates
the performance of LLMs in generating empathetic responses and proposes three
improvement methods of semantically similar in-context learning, two-stage
interactive generation, and combination with the knowledge base. Extensive
experiments show that LLMs can significantly benefit from our proposed methods
and are able to achieve state-of-the-art performance in both automatic and human
evaluations. Additionally, we explore the possibility of GPT-4 simulating human
evaluators.
"
LLMLingua: Compressing Prompts for Accelerated Inference of Large  Language Models,Huiqiang Jiang,http://arxiv.org/pdf/2310.05736v1.pdf,2023-10-09,"['cs.cl', 'cs.lg']",2310.05736v1.pdf,"  Large language models (LLMs) have been applied in various applications due to
their astonishing capabilities. With advancements in technologies such as
chain-of-thought (CoT) prompting and in-context learning (ICL), the prompts fed
to LLMs are becoming increasingly lengthy, even exceeding tens of thousands of
tokens. To accelerate model inference and reduce cost, this paper presents
LLMLingua, a coarse-to-fine prompt compression method that involves a budget
controller to maintain semantic integrity under high compression ratios, a
token-level iterative compression algorithm to better model the interdependence
between compressed contents, and an instruction tuning based method for
distribution alignment between language models. We conduct experiments and
analysis over four datasets from different scenarios, i.e., GSM8K, BBH,
ShareGPT, and Arxiv-March23; showing that the proposed approach yields
state-of-the-art performance and allows for up to 20x compression with little
performance loss. Our code is available at https://aka.ms/LLMLingua.
"
Selective Demonstrations for Cross-domain Text-to-SQL,Shuaichen Chang,http://arxiv.org/pdf/2310.06302v1.pdf,2023-10-10,['cs.cl'],2310.06302v1.pdf,"  Large language models (LLMs) with in-context learning have demonstrated
impressive generalization capabilities in the cross-domain text-to-SQL task,
without the use of in-domain annotations. However, incorporating in-domain
demonstration examples has been found to greatly enhance LLMs' performance. In
this paper, we delve into the key factors within in-domain examples that
contribute to the improvement and explore whether we can harness these benefits
without relying on in-domain annotations. Based on our findings, we propose a
demonstration selection framework ODIS which utilizes both out-of-domain
examples and synthetically generated in-domain examples to construct
demonstrations. By retrieving demonstrations from hybrid sources, ODIS
leverages the advantages of both, showcasing its effectiveness compared to
baseline methods that rely on a single data source. Furthermore, ODIS
outperforms state-of-the-art approaches on two cross-domain text-to-SQL
datasets, with improvements of 1.1 and 11.8 points in execution accuracy,
respectively.
"
Jailbreak and Guard Aligned Language Models with Only Few In-Context  Demonstrations,Zeming Wei,http://arxiv.org/pdf/2310.06387v1.pdf,2023-10-10,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.cr']",2310.06387v1.pdf,"  Large Language Models (LLMs) have shown remarkable success in various tasks,
but concerns about their safety and the potential for generating malicious
content have emerged. In this paper, we explore the power of In-Context
Learning (ICL) in manipulating the alignment ability of LLMs. We find that by
providing just a few in-context demonstrations without fine-tuning, LLMs can be
manipulated to increase or decrease the probability of jailbreaking, i.e.
answering malicious prompts. Based on these observations, we propose In-Context
Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding
aligned language models, respectively. ICA crafts malicious contexts to guide models
in generating harmful outputs, while ICD enhances model robustness by
demonstrations of refusing to answer harmful prompts. Our experiments show the
effectiveness of ICA and ICD in increasing or reducing the success rate of
adversarial jailbreaking attacks. Overall, we shed light on the potential of
ICL to influence LLM behavior and provide a new perspective for enhancing the
safety and alignment of LLMs.
"
Humans and language models diverge when predicting repeating text,Aditya R. Vaidya,http://arxiv.org/pdf/2310.06408v2.pdf,2023-10-10,['cs.cl'],2310.06408v2.pdf,"  Language models that are trained on the next-word prediction task have been
shown to accurately model human behavior in word prediction and reading speed.
In contrast with these findings, we present a scenario in which the performance
of humans and LMs diverges. We collected a dataset of human next-word
predictions for five stimuli that are formed by repeating spans of text. Human
and GPT-2 LM predictions are strongly aligned in the first presentation of a
text span, but their performance quickly diverges when memory (or in-context
learning) begins to play a role. We traced the cause of this divergence to
specific attention heads in a middle layer. Adding a power-law recency bias to
these attention heads yielded a model that performs much more similarly to
humans. We hope that this scenario will spur future work in bringing LMs closer
to human behavior.
"
The Limits of ChatGPT in Extracting Aspect-Category-Opinion-Sentiment  Quadruples: A Comparative Analysis,Xiancai Xu,http://arxiv.org/pdf/2310.06502v1.pdf,2023-10-10,['cs.cl'],2310.06502v1.pdf,"  Recently, ChatGPT has attracted great attention from both industry and
academia due to its surprising abilities in natural language understanding and
generation. We are particularly curious about whether it can achieve promising
performance on one of the most complex tasks in aspect-based sentiment
analysis, i.e., extracting aspect-category-opinion-sentiment quadruples from
texts. To this end, in this paper we develop a specialized prompt template that
enables ChatGPT to effectively tackle this complex quadruple extraction task.
Further, we propose a selection method on few-shot examples to fully exploit
the in-context learning ability of ChatGPT and uplift its effectiveness on this
complex task. Finally, we provide a comparative evaluation on ChatGPT against
existing state-of-the-art quadruple extraction models based on four public
datasets and highlight some important findings regarding the capability
boundaries of ChatGPT in quadruple extraction.
"
AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents,Jake Grigsby,http://arxiv.org/pdf/2310.09971v2.pdf,2023-10-15,['cs.lg'],2310.09971v2.pdf,"  We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses
sequence models to tackle the challenges of generalization, long-term memory,
and meta-learning. Recent works have shown that off-policy learning can make
in-context RL with recurrent policies viable. Nonetheless, these approaches
require extensive tuning and limit scalability by creating key bottlenecks in
agents' memory capacity, planning horizon, and model size. AMAGO revisits and
redesigns the off-policy in-context approach to successfully train
long-sequence Transformers over entire rollouts in parallel with end-to-end RL.
Our agent is uniquely scalable and applicable to a wide range of problems. We
demonstrate its strong performance empirically in meta-RL and long-term memory
domains. AMAGO's focus on sparse rewards and off-policy data also allows
in-context learning to extend to goal-conditioned problems with challenging
exploration. When combined with a novel hindsight relabeling scheme, AMAGO can
solve a previously difficult category of open-world domains, where agents
complete many possible instructions in procedurally generated environments. We
evaluate our agent on three goal-conditioned domains and study how its
individual improvements connect to create a generalist policy.
"
A Search for Prompts: Generating Structured Answers from Contracts,Adam Roegiest,http://arxiv.org/pdf/2310.10141v1.pdf,2023-10-16,['cs.cv'],2310.10141v1.pdf,"  In many legal processes being able to action on the concrete implication of a
legal question can be valuable for automating human review or signalling certain
conditions (e.g., alerts around automatic renewal). To support such tasks, we
present a form of legal question answering that seeks to return one (or more)
fixed answers for a question about a contract clause. After showing that
unstructured generative question answering can have questionable outcomes for
such a task, we discuss our exploration methodology for legal question
answering prompts using OpenAI's \textit{GPT-3.5-Turbo} and provide a summary
of insights.
  Using insights gleaned from our qualitative experiences, we compare our
proposed template prompts against a common semantic matching approach and find
that our prompt templates are far more accurate despite being less reliable in
returning the exact response. With some additional tweaks to prompts and the use
of in-context learning, we are able to further improve the performance of our
proposed strategy while maximizing the reliability of responses as best we can.
"
Large Language Models Meet Open-World Intent Discovery and Recognition:  An Evaluation of ChatGPT,Xiaoshuai Song,http://arxiv.org/pdf/2310.10176v1.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10176v1.pdf,"  The tasks of out-of-domain (OOD) intent discovery and generalized intent
discovery (GID) aim to extend a closed intent classifier to open-world intent
sets, which is crucial to task-oriented dialogue (TOD) systems. Previous
methods address them by fine-tuning discriminative models. Recently, although
some studies have been exploring the application of large language models
(LLMs) represented by ChatGPT to various downstream tasks, it is still unclear
whether ChatGPT can discover and incrementally extend OOD intents. In
this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and
GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT
exhibits consistent advantages under zero-shot settings, but is still at a
disadvantage compared to fine-tuned models. More deeply, through a series of
analytical experiments, we summarize and discuss the challenges faced by LLMs
including clustering, domain-specific understanding, and cross-domain
in-context learning scenarios. Finally, we provide empirical guidance for
future directions to address these challenges.
"
MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete  Representations,Heyuan Yao,http://arxiv.org/pdf/2310.10198v2.pdf,2023-10-16,"['cs.cv', 'cs.gr']",2310.10198v2.pdf,"  In this work, we present MoConVQ, a novel unified framework for physics-based
motion control leveraging scalable discrete representations. Building upon
vector quantized variational autoencoders (VQ-VAE) and model-based
reinforcement learning, our approach effectively learns motion embeddings from
a large, unstructured dataset spanning tens of hours of motion examples. The
resultant motion representation not only captures diverse motion skills but
also offers a robust and intuitive interface for various applications. We
demonstrate the versatility of MoConVQ through several applications: universal
tracking control from various motion sources, interactive character control
with latent motion representations using supervised learning, physics-based
motion generation from natural language descriptions using the GPT framework,
and, most interestingly, seamless integration with large language models (LLMs)
with in-context learning to tackle complex and abstract tasks.
"
Semantic Parsing by Large Language Models for Intricate Updating  Strategies of Zero-Shot Dialogue State Tracking,Yuxiang Wu,http://arxiv.org/pdf/2310.10520v2.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10520v2.pdf,"  Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring
and annotating task-oriented dialogues, which can be time-consuming and costly.
However, DST extends beyond simple slot-filling and requires effective updating
strategies for tracking dialogue state as conversations progress. In this
paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to
introduce additional intricate updating strategies in zero-shot DST. Our
approach reformulates the DST task by leveraging powerful Large Language Models
(LLMs) and translating the original dialogue text to JSON through semantic
parsing as an intermediate state. We also design a novel framework that
includes more modules to ensure the effectiveness of updating strategies in the
text-to-JSON process. Experimental results demonstrate that our approach
outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant
improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to
existing ICL methods.
"
Mastering the Task of Open Information Extraction with Large Language  Models and Consistent Reasoning Environment,Ji Qi,http://arxiv.org/pdf/2310.10590v1.pdf,2023-10-16,['cs.cl'],2310.10590v1.pdf,"  Open Information Extraction (OIE) aims to extract objective structured
knowledge from natural texts, which has attracted growing attention to build
dedicated models with human experience. As the large language models (LLMs)
have exhibited remarkable in-context learning capabilities, a question arises
as to whether the task of OIE can be effectively tackled with this paradigm. In
this paper, we explore solving the OIE problem by constructing an appropriate
reasoning environment for LLMs. Specifically, we first propose a method to
effectively estimate the discrepancy of syntactic distribution between an LLM
and test samples, which can serve as correlation evidence for preparing
positive demonstrations. Upon the evidence, we introduce a simple yet effective
mechanism to establish the reasoning environment for LLMs on specific tasks.
Without bells and whistles, experimental results on the standard CaRB benchmark
demonstrate that our $6$-shot approach outperforms the state-of-the-art
supervised method, achieving a $55.3$ $F_1$ score. Further experiments on TACRED and
ACE05 show that our method can naturally generalize to other information
extraction tasks, resulting in improvements of $5.7$ and $6.8$ $F_1$ scores,
respectively.
"
Exploring Automatic Evaluation Methods based on a Decoder-based LLM for  Text Generation,Tomohito Kasahara,http://arxiv.org/pdf/2310.11026v1.pdf,2023-10-17,['cs.cl'],2310.11026v1.pdf,"  Automatic evaluation of text generation is essential for improving the
accuracy of generation tasks. In light of the current trend towards
increasingly larger decoder-based language models, we investigate automatic
evaluation methods based on such models for text generation. This paper
compares various methods, including tuning with encoder-based models and large
language models under equal conditions, on two different tasks, machine
translation evaluation and semantic textual similarity, in two languages,
Japanese and English. Experimental results show that compared to the tuned
encoder-based models, the tuned decoder-based models perform poorly. The
analysis of the causes for this suggests that the decoder-based models focus on
surface word sequences and do not capture meaning. It is also revealed that
in-context learning of very large decoder-based models such as ChatGPT makes it
difficult to identify fine-grained semantic differences.
"
Learning from Red Teaming: Gender Bias Provocation and Mitigation in  Large Language Models,Hsuan Su,http://arxiv.org/pdf/2310.11079v1.pdf,2023-10-17,"['cs.cl', 'cs.ai']",2310.11079v1.pdf,"  Recently, researchers have made considerable improvements in dialogue systems
with the progress of large language models (LLMs) such as ChatGPT and GPT-4.
These LLM-based chatbots encode the potential biases while retaining
disparities that can harm humans during interactions. Traditional bias
investigation methods often rely on human-written test cases. However, these
test cases are usually expensive and limited. In this work, we propose a
first-of-its-kind method that automatically generates test cases to detect
LLMs' potential gender bias. We apply our method to three well-known LLMs and
find that the generated test cases effectively identify the presence of biases.
To address the biases identified, we propose a mitigation strategy that uses
the generated test cases as demonstrations for in-context learning to
circumvent the need for parameter fine-tuning. The experimental results show
that LLMs generate fairer responses with the proposed approach.
"
Evaluating LLMs for Privilege-Escalation Scenarios,Andreas Happe,http://arxiv.org/pdf/2310.11409v2.pdf,2023-10-17,"['cs.cr', 'cs.ai']",2310.11409v2.pdf,"  Penetration testing, an essential component of cybersecurity, allows
organizations to proactively identify and remediate vulnerabilities in their
systems, thus bolstering their defense mechanisms against potential
cyberattacks. One recent advancement in the realm of penetration testing is the
utilization of Large Language Models (LLMs). We explore the intersection of LLMs and
penetration testing to gain insight into their capabilities and challenges in
the context of privilege escalation. We create an automated Linux
privilege-escalation benchmark utilizing local virtual machines. We introduce
an LLM-guided privilege-escalation tool designed for evaluating different LLMs
and prompt strategies against our benchmark. We analyze the impact of different
prompt designs, the benefits of in-context learning, and the advantages of
offering high-level guidance to LLMs. We discuss challenging areas for LLMs,
including maintaining focus during testing, coping with errors, and finally
comparing them with both stochastic parrots and human hackers.
"
Measuring Pointwise $\mathcal{V}$-Usable Information In-Context-ly,Sheng Lu,http://arxiv.org/pdf/2310.12300v1.pdf,2023-10-18,['cs.cl'],2310.12300v1.pdf,"  In-context learning (ICL) is a new learning paradigm that has gained
popularity along with the development of large language models. In this work,
we adapt a recently proposed hardness metric, pointwise $\mathcal{V}$-usable
information (PVI), to an in-context version (in-context PVI). Compared to the
original PVI, in-context PVI is more efficient in that it requires only a few
exemplars and does not require fine-tuning. We conducted a comprehensive
empirical analysis to evaluate the reliability of in-context PVI. Our findings
indicate that in-context PVI estimates exhibit similar characteristics to the
original PVI. Specific to the in-context setting, we show that in-context PVI
estimates remain consistent across different exemplar selections and numbers of
shots. The variance of in-context PVI estimates across different exemplar
selections is insignificant, which suggests that in-context PVI estimates are stable.
Furthermore, we demonstrate how in-context PVI can be employed to identify
challenging instances. Our work highlights the potential of in-context PVI and
provides new insights into the capabilities of ICL.
"
Attack Prompt Generation for Red Teaming and Defending Large Language  Models,Boyi Deng,http://arxiv.org/pdf/2310.12505v1.pdf,2023-10-19,"['cs.cl', 'cs.cr', 'cs.lg']",2310.12505v1.pdf,"  Large language models (LLMs) are susceptible to red teaming attacks, which
can induce LLMs to generate harmful content. Previous research constructs
attack prompts via manual or automatic methods, which have their own
limitations on construction cost and quality. To address these issues, we
propose an integrated approach that combines manual and automatic methods to
economically generate high-quality attack prompts. Specifically, considering
the impressive capabilities of newly emerged LLMs, we propose an attack
framework to instruct LLMs to mimic human-generated prompts through in-context
learning. Furthermore, we propose a defense framework that fine-tunes victim
LLMs through iterative interactions with the attack framework to enhance their
safety against red teaming attacks. Extensive experiments on different LLMs
validate the effectiveness of our proposed attack and defense frameworks.
Additionally, we release a series of attack prompts datasets named SAP with
varying sizes, facilitating the safety evaluation and enhancement of more LLMs.
Our code and dataset are available at https://github.com/Aatrox103/SAP .
"
Are Structural Concepts Universal in Transformer Language Models?  Towards Interpretable Cross-Lingual Generalization,Ningyu Xu,http://arxiv.org/pdf/2310.12794v1.pdf,2023-10-19,['cs.cl'],2310.12794v1.pdf,"  Large language models (LLMs) have exhibited considerable cross-lingual
generalization abilities, whereby they implicitly transfer knowledge across
languages. However, the transfer is not equally successful for all languages,
especially for low-resource ones, which poses an ongoing challenge. It is
unclear whether we have reached the limits of implicit cross-lingual
generalization and if explicit knowledge transfer is viable. In this paper, we
investigate the potential for explicitly aligning conceptual correspondence
between languages to enhance cross-lingual generalization. Using the syntactic
aspect of language as a testbed, our analyses of 43 languages reveal a high
degree of alignability among the spaces of structural concepts within each
language for both encoder-only and decoder-only LLMs. We then propose a
meta-learning-based method to learn to align conceptual spaces of different
languages, which facilitates zero-shot and few-shot generalization in concept
classification and also offers insights into the cross-lingual in-context
learning phenomenon. Experiments on syntactic analysis tasks show that our
approach achieves competitive results with state-of-the-art methods and narrows
the performance gap between languages, particularly benefiting those with
limited resources.
"
Mind the instructions: a holistic evaluation of consistency and  interactions in prompt-based learning,Lucas Weber,http://arxiv.org/pdf/2310.13486v1.pdf,2023-10-20,"['cs.cl', 'cs.ai']",2310.13486v1.pdf,"  Finding the best way of adapting pre-trained language models to a task is a
big challenge in current NLP. Just like the previous generation of task-tuned
models (TT), models that are adapted to tasks via in-context-learning (ICL) are
robust in some setups but not in others. Here, we present a detailed analysis
of which design choices cause instabilities and inconsistencies in LLM
predictions. First, we show how spurious correlations between input
distributions and labels -- a known issue in TT models -- form only a minor
problem for prompted models. Then, we engage in a systematic, holistic
evaluation of different factors that have been found to influence predictions
in a prompting setup. We test all possible combinations of a range of factors
on both vanilla and instruction-tuned (IT) LLMs of different scale and
statistically analyse the results to show which factors are the most
influential, interactive or stable. Our results show which factors can be used
without precautions and which should be avoided or handled with care in most
settings.
"
A Simple Baseline for Knowledge-Based Visual Question Answering,Alexandros Xenos,http://arxiv.org/pdf/2310.13570v2.pdf,2023-10-20,['cs.cv'],2310.13570v2.pdf,"  This paper is on the problem of Knowledge-Based Visual Question Answering
(KB-VQA). Recent works have emphasized the significance of incorporating both
explicit (through external databases) and implicit (through LLMs) knowledge to
answer questions requiring external knowledge effectively. A common limitation
of such approaches is that they consist of relatively complicated pipelines and
often heavily rely on accessing GPT-3 API. Our main contribution in this paper
is to propose a much simpler and readily reproducible pipeline which, in a
nutshell, is based on efficient in-context learning by prompting LLaMA (1 and
2) using question-informative captions as contextual information. Contrary to
recent approaches, our method is training-free, does not require access to
external databases or APIs, and yet achieves state-of-the-art accuracy on the
OK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies to
understand important aspects of our method. Our code is publicly available at
https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA
"
An In-Context Schema Understanding Method for Knowledge Base Question  Answering,Yantao Liu,http://arxiv.org/pdf/2310.14174v1.pdf,2023-10-22,['cs.cl'],2310.14174v1.pdf,"  The Knowledge Base Question Answering (KBQA) task aims to answer natural
language questions based on a given knowledge base. As a common kind of method
for this task, semantic parsing-based approaches first convert natural language
questions to logical forms (e.g., SPARQL queries) and then execute them on
knowledge bases to get answers. Recently, Large Language Models (LLMs) have
shown strong abilities in language understanding and may be adopted as semantic
parsers in such methods. However, in doing so, a great challenge for
LLMs is to understand the schema of knowledge bases. Therefore, in this paper,
we propose an In-Context Schema Understanding (ICSU) method for facilitating
LLMs to be used as a semantic parser in KBQA. Specifically, ICSU adopts the
In-context Learning mechanism to instruct LLMs to generate SPARQL queries with
examples. In order to retrieve appropriate examples from annotated
question-query pairs, which contain comprehensive schema information related to
questions, ICSU explores four different retrieval strategies. Experimental
results on the largest KBQA benchmark, KQA Pro, show that ICSU with all these
strategies outperforms that with a random retrieval strategy significantly
(from 12\% to 78.76\% in accuracy).
"
From Chaos to Clarity: Claim Normalization to Empower Fact-Checking,Megha Sundriyal,http://arxiv.org/pdf/2310.14338v1.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.14338v1.pdf,"  With the proliferation of social media platforms, users are exposed to vast
information, including posts containing misleading claims. However, the
pervasive noise inherent in these posts presents a challenge in identifying
precise and prominent claims that require verification. Extracting the core
assertions from such posts is arduous and time-consuming. We introduce a novel
task called Claim Normalization (aka ClaimNorm) that aims to decompose complex
and noisy social media posts into more straightforward and understandable
forms, termed normalized claims. We propose CACN, a pioneering approach that
leverages chain-of-thought and claim check-worthiness estimation, mimicking
human reasoning processes, to comprehend intricate claims. Moreover, we
capitalize on large language models' powerful in-context learning abilities to
provide guidance and improve the claim normalization process. To evaluate the
effectiveness of our proposed model, we meticulously compile a comprehensive
real-world dataset, CLAN, comprising more than 6k instances of social media
posts alongside their respective normalized claims. Experimentation
demonstrates that CACN outperforms several baselines across various evaluation
measures. A rigorous error analysis validates CACN's capabilities and pitfalls.
"
Retrieval-Augmented Chain-of-Thought in Semi-structured Domains,Vaibhav Mavi,http://arxiv.org/pdf/2310.14435v1.pdf,2023-10-22,"['cs.cl', 'cs.ai']",2310.14435v1.pdf,"  Applying existing question answering (QA) systems to specialized domains like
law and finance presents challenges that necessitate domain expertise. Although
large language models (LLMs) have shown impressive language comprehension and
in-context learning capabilities, their inability to handle very long
inputs/contexts is well known. Tasks specific to these domains need significant
background knowledge, leading to contexts that can often exceed the maximum
length that existing LLMs can process. This study explores leveraging the
semi-structured nature of legal and financial data to efficiently retrieve
relevant context, enabling the use of LLMs for domain-specialized QA. The
resulting system outperforms contemporary models and also provides useful
explanations for the answers, encouraging the integration of LLMs into legal
and financial NLP systems for future research.
"
Statistical Depth for Ranking and Characterizing Transformer-Based Text  Embeddings,Parker Seegmiller,http://arxiv.org/pdf/2310.15010v1.pdf,2023-10-23,['cs.cl'],2310.15010v1.pdf,"  The popularity of transformer-based text embeddings calls for better
statistical tools for measuring distributions of such embeddings. One such tool
would be a method for ranking texts within a corpus by centrality, i.e.
assigning each text a number signifying how representative that text is of the
corpus as a whole. However, an intrinsic center-outward ordering of
high-dimensional text representations is not trivial. A statistical depth is a
function for ranking k-dimensional objects by measuring centrality with respect
to some observed k-dimensional distribution. We adopt a statistical depth to
measure distributions of transformer-based text embeddings, transformer-based
text embedding (TTE) depth, and introduce the practical use of this depth for
both modeling and distributional inference in NLP pipelines. We first define
TTE depth and an associated rank sum test for determining whether two corpora
differ significantly in embedding space. We then use TTE depth for the task of
in-context learning prompt selection, showing that this approach reliably
improves performance over statistical baseline approaches across six text
classification tasks. Finally, we use TTE depth and the associated rank sum
test to characterize the distributions of synthesized and human-generated
corpora, showing that five recent synthetic data augmentation processes cause a
measurable distributional shift away from associated human-generated text.
"
Meta- (out-of-context) learning in neural networks,Dmitrii Krasheninnikov,http://arxiv.org/pdf/2310.15047v2.pdf,2023-10-23,"['cs.lg', 'cs.ai']",2310.15047v2.pdf,"  Brown et al. (2020) famously introduced the phenomenon of in-context learning
in large language models (LLMs). We establish the existence of a phenomenon we
call meta-out-of-context learning (meta-OCL) via carefully designed synthetic
experiments with LLMs. Our results suggest that meta-OCL leads LLMs to more
readily ""internalize"" the semantic content of text that is, or appears to be,
broadly useful (such as true statements, or text from authoritative sources)
and use it in appropriate circumstances. We further demonstrate meta-OCL in a
synthetic computer vision setting, and propose two hypotheses for the emergence
of meta-OCL: one relying on the way models store knowledge in their parameters,
and another suggesting that the implicit gradient alignment bias of
gradient-descent-based optimizers may be responsible. Finally, we reflect on
what our results might imply about capabilities of future AI systems, and
discuss potential risks. Our code can be found at
https://github.com/krasheninnikov/internalization.
"
The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained  Multimodal Models,Xinyi Chen,http://arxiv.org/pdf/2310.15061v1.pdf,2023-10-23,"['cs.cl', 'cs.ai', 'cs.cv']",2310.15061v1.pdf,"  Despite the impressive performance achieved by pre-trained
language-and-vision models in downstream tasks, it remains an open question
whether this reflects a proper understanding of image-text interaction. In this
work, we explore to what extent they handle basic linguistic constructions --
active-passive voice, coordination, and relative clauses -- that even preschool
children can typically master. We present BLA, a novel, automatically
constructed benchmark to evaluate multimodal models on these Basic Language
Abilities. We show that different types of Transformer-based systems, such as
CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting,
in line with previous findings. Our experiments, in particular, show that most
of the tested models only marginally benefit when fine-tuned or prompted with
construction-specific samples. Yet, the generative BLIP2 shows promising
trends, especially in an in-context learning setting. This opens the door to
using BLA not only as an evaluation benchmark but also to improve models' basic
language abilities.
"
LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis,Shih-Chieh Dai,http://arxiv.org/pdf/2310.15100v1.pdf,2023-10-23,['cs.cl'],2310.15100v1.pdf,"  Thematic analysis (TA) has been widely used for analyzing qualitative data in
many disciplines and fields. To ensure reliable analysis, the same piece of
data is typically assigned to at least two human coders. Moreover, to produce
meaningful and useful analysis, human coders develop and deepen their data
interpretation and coding over multiple iterations, making TA labor-intensive
and time-consuming. Recently, the emerging field of large language model (LLM)
research has shown that LLMs have the potential to replicate human-like behavior
in various tasks: in particular, LLMs outperform crowd workers on
text-annotation tasks, suggesting an opportunity to leverage LLMs for TA. We
propose a human-LLM collaboration framework (i.e., LLM-in-the-loop) to conduct
TA with in-context learning (ICL). This framework provides the prompt to frame
discussions with a LLM (e.g., GPT-3.5) to generate the final codebook for TA.
We demonstrate the utility of this framework using survey datasets on the
aspects of the music listening experience and the usage of a password manager.
Results of the two case studies show that the proposed framework yields similar
coding quality to that of human coders but reduces TA's labor and time demands.
"
UI Layout Generation with LLMs Guided by UI Grammar,Yuwen Lu,http://arxiv.org/pdf/2310.15455v1.pdf,2023-10-24,"['cs.hc', 'cs.ai']",2310.15455v1.pdf,"  The recent advances in Large Language Models (LLMs) have stimulated interest
among researchers and industry professionals, particularly in their application
to tasks concerning mobile user interfaces (UIs). This position paper
investigates the use of LLMs for UI layout generation. Central to our
exploration is the introduction of UI grammar -- a novel approach we proposed
to represent the hierarchical structure inherent in UI screens. The aim of this
approach is to guide the generative capacities of LLMs more effectively and
improve the explainability and controllability of the process. Initial
experiments conducted with GPT-4 showed the promising capability of LLMs to
produce high-quality user interfaces via in-context learning. Furthermore, our
preliminary comparative study suggested the potential of the grammar-based
approach in improving the quality of generative results in specific aspects.
"
POE: Process of Elimination for Multiple Choice Reasoning,Chenkai Ma,http://arxiv.org/pdf/2310.15575v1.pdf,2023-10-24,['cs.cl'],2310.15575v1.pdf,"  Language models (LMs) are capable of conducting in-context learning for
multiple choice reasoning tasks, but the options in these tasks are treated
equally. As humans often first eliminate wrong options before picking the final
correct answer, we argue a similar two-step strategy can make LMs better at
these tasks. To this end, we present the Process of Elimination (POE), a
two-step scoring method. In the first step, POE scores each option, and
eliminates seemingly wrong options. In the second step, POE masks these wrong
options, and makes the final prediction from the remaining options. Zero-shot
experiments on 8 reasoning tasks illustrate the effectiveness of POE, and a
following analysis finds our method to be especially performant on logical
reasoning tasks. We further analyze the effect of masks, and show that POE
applies to few-shot settings and large language models (LLMs) like ChatGPT.
"
WebWISE: Web Interface Control and Sequential Exploration with Large  Language Models,Heyi Tao,http://arxiv.org/pdf/2310.16042v2.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.16042v2.pdf,"  The paper investigates using a Large Language Model (LLM) to automatically
perform web software tasks using click, scroll, and text input operations.
Previous approaches, such as reinforcement learning (RL) or imitation learning,
are inefficient to train and task-specific. Our method uses filtered Document
Object Model (DOM) elements as observations and performs tasks step-by-step,
sequentially generating small programs based on the current observations. We
use in-context learning, either benefiting from a single manually provided
example, or an automatically generated example based on a successful zero-shot
trial. We evaluate the proposed method on the MiniWob++ benchmark. With only
one in-context example, our WebWISE method achieves similar or better
performance than other methods that require many demonstrations or trials.
"
From Heuristic to Analytic: Cognitively Motivated Strategies for  Coherent Physical Commonsense Reasoning,Zheyuan Zhang,http://arxiv.org/pdf/2310.18364v1.pdf,2023-10-24,"['cs.cl', 'cs.ai']",2310.18364v1.pdf,"  Pre-trained language models (PLMs) have shown impressive performance in
various language tasks. However, they are prone to spurious correlations, and
often generate illusory information. In real-world applications, PLMs should
justify decisions with formalized, coherent reasoning chains, but this
challenge remains under-explored. Cognitive psychology theorizes that humans
are capable of utilizing fast and intuitive heuristic thinking to make
decisions based on past experience, then rationalizing the decisions through
slower and deliberative analytic reasoning. We incorporate these interlinked
dual processes in fine-tuning and in-context learning with PLMs, applying them
to two language understanding tasks that require coherent physical commonsense
reasoning. We show that our proposed Heuristic-Analytic Reasoning (HAR)
strategies drastically improve the coherence of rationalizations for model
decisions, yielding state-of-the-art results on Tiered Reasoning for Intuitive
Physics (TRIP). We also find that this improved coherence is a direct result of
more faithful attention to relevant language context in each step of reasoning.
Our findings suggest that human-like reasoning strategies can effectively
improve the coherence and reliability of PLM reasoning.
"
The Mystery and Fascination of LLMs: A Comprehensive Survey on the  Interpretation and Analysis of Emergent Abilities,Yuxiang Zhou,http://arxiv.org/pdf/2311.00237v1.pdf,2023-11-01,['cs.cl'],2311.00237v1.pdf,"  Understanding emergent abilities, such as in-context learning (ICL) and
chain-of-thought (CoT) prompting in large language models (LLMs), is of utmost
importance. This importance stems not only from the better utilization of these
capabilities across various tasks, but also from the proactive identification
and mitigation of potential risks, including concerns of truthfulness, bias,
and toxicity, that may arise alongside these capabilities. In this paper, we
present a thorough survey on the interpretation and analysis of emergent
abilities of LLMs. First, we provide a concise introduction to the background
and definition of emergent abilities. Then, we give an overview of advancements
from two perspectives: 1) a macro perspective, emphasizing studies on the
mechanistic interpretability and delving into the mathematical foundations
behind emergent abilities; and 2) a micro perspective, concerning studies that
focus on empirical interpretability by examining factors associated with these
abilities. We conclude by highlighting the challenges encountered and
suggesting potential avenues for future research. We believe that our work
establishes the basis for further exploration into the interpretation of
emergent abilities.
"
Narrowing the Gap between Zero- and Few-shot Machine Translation by  Matching Styles,Weiting Tan,http://arxiv.org/pdf/2311.02310v1.pdf,2023-11-04,['cs.cl'],2311.02310v1.pdf,"  Large language models trained primarily in a monolingual setting have
demonstrated their ability to generalize to machine translation using zero- and
few-shot examples with in-context learning. However, even though zero-shot
translations are relatively good, there remains a discernible gap comparing
their performance with the few-shot setting. In this paper, we investigate the
factors contributing to this gap and find that it can largely be closed
(by about 70%) by matching the writing styles of the target corpus.
Additionally, we explore potential approaches to enhance zero-shot baselines
without the need for parallel demonstration examples, providing valuable
insights into how these methods contribute to improving translation metrics.
"
Instructed Language Models with Retrievers Are Powerful Entity Linkers,Zilin Xiao,http://arxiv.org/pdf/2311.03250v1.pdf,2023-11-06,"['cs.cl', 'cs.ai']",2311.03250v1.pdf,"  Generative approaches powered by large language models (LLMs) have
demonstrated emergent abilities in tasks that require complex reasoning
abilities. Yet the generative nature still makes the generated content suffer
from hallucinations, thus unsuitable for entity-centric tasks like entity
linking (EL) requiring precise entity predictions over a large knowledge base.
We present Instructed Generative Entity Linker (INSGENEL), the first approach
that enables causal language models to perform entity linking over knowledge
bases. Several methods to equip language models with EL capability were
proposed in this work, including (i) a sequence-to-sequence training EL
objective with instruction-tuning, (ii) a novel generative EL framework based
on a light-weight potential mention retriever that frees the model from heavy
and non-parallelizable decoding, achieving 4$\times$ speedup without compromise
on linking metrics. INSGENEL outperforms previous generative alternatives with
+6.8 F1 points gain on average, also with a huge advantage in training data
efficiency and training compute consumption. In addition, our skillfully
engineered in-context learning (ICL) framework for EL still lags behind
INSGENEL significantly, reaffirming that the EL task remains a persistent
hurdle for general LLMs.
"
Meta-learning via Language Model In-context Tuning,Yanda Chen,http://arxiv.org/pdf/2110.07814v2.pdf,2021-10-15,"['cs.cl', 'cs.lg']",2110.07814v2.pdf,"  The goal of meta-learning is to learn to adapt to a new task with only a few
labeled examples. To tackle this problem in NLP, we propose $\textit{in-context
tuning}$, which recasts adaptation and prediction as a simple sequence
prediction problem: to form the input sequence, we concatenate the task
instruction, the labeled examples, and the target input to predict; to
meta-train the model to learn from in-context examples, we fine-tune a
pre-trained language model (LM) to predict the target label from the input
sequences on a collection of tasks.
  We benchmark our method on two collections of text classification tasks: LAMA
and BinaryClfs. Compared to first-order MAML which adapts the model with
gradient descent, our method better leverages the inductive bias of LMs to
perform pattern matching, and outperforms MAML by an absolute $6\%$ AUC ROC
score on BinaryClfs, with increasing advantage w.r.t. model size. Compared to
non-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuning
directly learns to learn from in-context examples. On BinaryClfs, in-context
tuning improves the average AUC-ROC score by an absolute $10\%$, and reduces
the variance with respect to example ordering by 6x and example choices by 2x.
"
Good Examples Make A Faster Learner: Simple Demonstration-based Learning  for Low-resource NER,Dong-Ho Lee,http://arxiv.org/pdf/2110.08454v3.pdf,2021-10-16,['cs.cl'],2110.08454v3.pdf,"  Recent advances in prompt-based learning have shown strong results on
few-shot text classification by using cloze-style templates. Similar attempts
have been made on named entity recognition (NER) which manually design
templates to predict entity types for every text span in a sentence. However,
such methods may suffer from error propagation induced by entity span
detection, high cost due to enumeration of all possible text spans, and
omission of inter-dependencies among token labels in a sentence. Here we
present a simple demonstration-based learning method for NER, which lets the
input be prefaced by task demonstrations for in-context learning. We perform a
systematic study on demonstration strategy regarding what to include (entity
examples, with or without surrounding context), how to select the examples, and
what templates to use. Results on in-domain learning and domain adaptation show
that the model's performance in low-resource settings can be largely improved
with a suitable demonstration strategy (e.g., a 4-17% improvement on 25 train
instances). We also find that good demonstration can save many labeled examples
and consistency in demonstration contributes to better performance.
"
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts,Nan Du,http://arxiv.org/pdf/2112.06905v2.pdf,2021-12-13,['cs.cl'],2112.06905v2.pdf,"  Scaling language models with more data, compute and parameters has driven
significant progress in natural language processing. For example, thanks to
scaling, GPT-3 was able to achieve strong results on in-context learning tasks.
However, training these large dense models requires significant amounts of
computing resources. In this paper, we propose and develop a family of language
models named GLaM (Generalist Language Model), which uses a sparsely activated
mixture-of-experts architecture to scale the model capacity while also
incurring substantially less training cost compared to dense variants. The
largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than
GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half
of the computation flops for inference, while still achieving better overall
zero-shot and one-shot performance across 29 NLP tasks.
"
Can language models learn from explanations in context?,Andrew K. Lampinen,http://arxiv.org/pdf/2204.02329v4.pdf,2022-04-05,"['cs.cl', 'cs.ai', 'cs.lg']",2204.02329v4.pdf,"  Language Models (LMs) can perform new tasks by adapting to a few in-context
examples. For humans, explanations that connect examples to task principles can
improve learning. We therefore investigate whether explanations of few-shot
examples can help LMs. We annotate questions from 40 challenging tasks with
answer explanations, and various matched control explanations. We evaluate how
different types of explanations, instructions, and controls affect zero- and
few-shot performance. We analyze these results using statistical multilevel
modeling techniques that account for the nested dependencies among conditions,
tasks, prompts, and models. We find that explanations can improve performance
-- even without tuning. Furthermore, explanations hand-tuned for performance on
a small validation set offer substantially larger benefits, and building a
prompt by selecting examples and explanations together substantially improves
performance over selecting examples alone. Finally, even untuned explanations
outperform carefully matched controls, suggesting that the benefits are due to
the link between an example and its explanation, rather than lower-level
features. However, only large models benefit. In summary, explanations can
support the in-context learning of large LMs on challenging tasks.
"
Automatic Short Math Answer Grading via In-context Meta-learning,Mengxue Zhang,http://arxiv.org/pdf/2205.15219v3.pdf,2022-05-30,"['cs.cl', 'cs.lg']",2205.15219v3.pdf,"  Automatic short answer grading is an important research direction in the
exploration of how to use artificial intelligence (AI)-based tools to improve
education. Current state-of-the-art approaches use neural language models to
create vectorized representations of students' responses, followed by
classifiers to predict the score. However, these approaches have several key
limitations, including i) they use pre-trained language models that are not
well-adapted to educational subject domains and/or student-generated text and
ii) they almost always train one model per question, ignoring the linkage
across questions and resulting in a significant model storage problem due to the
size of advanced language models. In this paper, we study the problem of
automatic short answer grading for students' responses to math questions and
propose a novel framework for this task. First, we use MathBERT, a variant of
the popular language model BERT adapted to mathematical content, as our base
model and fine-tune it for the downstream task of student response grading.
Second, we use an in-context learning approach that provides scoring examples
as input to the language model to provide additional context information and
promote generalization to previously unseen questions. We evaluate our
framework on a real-world dataset of student responses to open-ended math
questions and show that our framework (often significantly) outperforms
existing approaches, especially for new questions that are not seen during
training.
"
ThinkSum: Probabilistic reasoning over sets using large language models,Batu Ozturkler,http://arxiv.org/pdf/2210.01293v2.pdf,2022-10-04,['cs.cl'],2210.01293v2.pdf,"  Large language models (LLMs) have a substantial capacity for high-level
analogical reasoning: reproducing patterns in linear text that occur in their
training data (zero-shot evaluation) or in the provided context (few-shot
in-context learning). However, recent studies show that even the more advanced
LLMs fail in scenarios that require reasoning over multiple objects or facts
and making sequences of logical deductions. We propose a two-stage
probabilistic inference paradigm, ThinkSum, which reasons over sets of objects
or facts in a structured manner. In the first stage (Think - retrieval of
associations), a LLM is queried in parallel over a set of phrases extracted
from the prompt or an auxiliary model call. In the second stage (Sum -
probabilistic inference or reasoning), the results of these queries are
aggregated to make the final prediction. We demonstrate the possibilities and
advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks,
achieving improvements over the state of the art using GPT-family models on
thirteen difficult tasks, often with far smaller model variants. We also
compare and contrast ThinkSum with other proposed modifications to direct
prompting of LLMs, such as variants of chain-of-thought prompting. Our results
suggest that because the probabilistic inference in ThinkSum is performed
outside of calls to the LLM, ThinkSum is less sensitive to prompt design,
yields more interpretable predictions, and can be flexibly combined with latent
variable models to extract structured knowledge from LLMs. Overall, our
proposed paradigm represents a promising approach for enhancing the reasoning
capabilities of LLMs.
"
Honest Students from Untrusted Teachers: Learning an Interpretable  Question-Answering Pipeline from a Pretrained Language Model,Jacob Eisenstein,http://arxiv.org/pdf/2210.02498v2.pdf,2022-10-05,"['cs.cl', 'cs.lg']",2210.02498v2.pdf,"  Explainable question answering systems should produce not only accurate
answers but also rationales that justify their reasoning and allow humans to
check their work. But what sorts of rationales are useful and how can we train
systems to produce them? We propose a new style of rationale for open-book
question answering, called \emph{markup-and-mask}, which combines aspects of
extractive and free-text explanations. In the markup phase, the passage is
augmented with free-text markup that enables each sentence to stand on its own
outside the discourse context. In the masking phase, a sub-span of the
marked-up passage is selected. To train a system to produce markup-and-mask
rationales without annotations, we leverage in-context learning. Specifically,
we generate silver annotated data by sending a series of prompts to a frozen
pretrained language model, which acts as a teacher. We then fine-tune a smaller
student model by training on the subset of rationales that led to correct
answers. The student is ""honest"" in the sense that it is a pipeline: the
rationale acts as a bottleneck between the passage and the answer, while the
""untrusted"" teacher operates under no such constraints. Thus, we offer a new
way to build trustworthy pipeline systems from a combination of end-task
annotations and frozen pretrained language models.
"
Large Language Models can Implement Policy Iteration,Ethan Brooks,http://arxiv.org/pdf/2210.03821v2.pdf,2022-10-07,['cs.lg'],2210.03821v2.pdf,"  This work presents In-Context Policy Iteration, an algorithm for performing
Reinforcement Learning (RL), in-context, using foundation models. While the
application of foundation models to RL has received considerable attention,
most approaches rely on either (1) the curation of expert demonstrations
(either through manual design or task-specific pretraining) or (2) adaptation
to the task of interest using gradient methods (either fine-tuning or training
of adapter layers). Both of these techniques have drawbacks. Collecting
demonstrations is labor-intensive, and algorithms that rely on them do not
outperform the experts from which the demonstrations were derived. All gradient
techniques are inherently slow, sacrificing the ""few-shot"" quality that made
in-context learning attractive to begin with. In this work, we present an
algorithm, ICPI, that learns to perform RL tasks without expert demonstrations
or gradients. Instead we present a policy-iteration method in which the prompt
content is the entire locus of learning. ICPI iteratively updates the contents
of the prompt from which it derives its policy through trial-and-error
interaction with an RL environment. In order to eliminate the role of
in-weights learning (on which approaches like Decision Transformer rely
heavily), we demonstrate our algorithm using Codex, a language model with no
prior knowledge of the domains on which we evaluate it.
"
Transformers generalize differently from information stored in context  vs in weights,Stephanie C. Y. Chan,http://arxiv.org/pdf/2210.05675v2.pdf,2022-10-11,"['cs.cl', 'cs.ai', 'cs.lg']",2210.05675v2.pdf,"  Transformer models can use two fundamentally different kinds of information:
information stored in weights during training, and information provided
``in-context'' at inference time. In this work, we show that transformers
exhibit different inductive biases in how they represent and generalize from
the information in these two sources. In particular, we characterize whether
they generalize via parsimonious rules (rule-based generalization) or via
direct comparison with observed examples (exemplar-based generalization). This
is of important practical consequence, as it informs whether to encode
information in weights or in context, depending on how we want models to use
that information. In transformers trained on controlled stimuli, we find that
generalization from weights is more rule-based whereas generalization from
context is largely exemplar-based. In contrast, we find that in transformers
pre-trained on natural language, in-context learning is significantly
rule-based, with larger models showing more rule-basedness. We hypothesise that
rule-based generalization from in-context information might be an emergent
consequence of large-scale training on language, which has sparse rule-like
structure. Using controlled stimuli, we verify that transformers pretrained on
data containing sparse rule-like structure exhibit more rule-based
generalization.
"
Large Language Models Meet Harry Potter: A Bilingual Dataset for  Aligning Dialogue Agents with Characters,Nuo Chen,http://arxiv.org/pdf/2211.06869v4.pdf,2022-11-13,"['cs.cl', 'cs.ai']",2211.06869v4.pdf,"  In recent years, Dialogue-style Large Language Models (LLMs) such as ChatGPT
and GPT4 have demonstrated immense potential in constructing open-domain
dialogue agents. However, aligning these agents with specific characters or
individuals remains a considerable challenge due to the complexities of
character representation and the lack of comprehensive annotations. In this
paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to
advance the study of dialogue agents and character alignment. The dataset
encompasses all dialogue sessions (in both English and Chinese) from the Harry
Potter series and is annotated with vital background information, including
dialogue scenes, speakers, character relationships, and attributes. These
extensive annotations may empower LLMs to unlock character-driven dialogue
capabilities. Furthermore, it can serve as a universal benchmark for evaluating
how well an LLM can align with a specific character. We benchmark LLMs on HPD
using both fine-tuning and in-context learning settings. Evaluation results
reveal that although there is substantial room for improvement in generating
high-quality, character-aligned responses, the proposed dataset is valuable in
guiding models toward responses that better align with the character of Harry
Potter.
"
Retrieval-Augmented Multimodal Language Modeling,Michihiro Yasunaga,http://arxiv.org/pdf/2211.12561v2.pdf,2022-11-22,"['cs.cv', 'cs.cl', 'cs.lg']",2211.12561v2.pdf,"  Recent multimodal models such as DALL-E and CM3 have achieved remarkable
progress in text-to-image and image-to-text generation. However, these models
store all learned knowledge (e.g., the appearance of the Eiffel Tower) in the
model parameters, requiring increasingly larger models and training data to
capture more knowledge. To integrate knowledge in a more scalable and modular
way, we propose a retrieval-augmented multimodal model, which enables a base
multimodal model (generator) to refer to relevant text and images fetched by a
retriever from external memory (e.g., documents on the web). Specifically, for
the retriever, we use a pretrained CLIP, and for the generator, we train a CM3
Transformer on the LAION dataset. Our resulting model, named
Retrieval-Augmented CM3 (RA-CM3), is the first multimodal model that can
retrieve and generate both text and images. We show that RA-CM3 significantly
outperforms baseline multimodal models such as DALL-E and CM3 on both image and
caption generation tasks (12 FID and 17 CIDEr improvements on MS-COCO), while
requiring much less compute for training (<30% of DALL-E). Moreover, we show
that RA-CM3 exhibits novel capabilities, such as faithful image generation and
multimodal in-context learning (e.g., image generation from demonstrations).
"
"Operationalizing Specifications, In Addition to Test Sets for Evaluating  Constrained Generative Models",Vikas Raunak,http://arxiv.org/pdf/2212.00006v1.pdf,2022-11-19,"['cs.hc', 'cs.cl', 'cs.cv', 'cs.cy']",2212.00006v1.pdf,"  In this work, we present some recommendations on the evaluation of
state-of-the-art generative models for constrained generation tasks. The
progress on generative models has been rapid in recent years. These large-scale
models have had three impacts: firstly, the fluency of generation in both
language and vision modalities has rendered common average-case evaluation
metrics much less useful in diagnosing system errors. Secondly, the same
substrate models now form the basis of a number of applications, driven both by
the utility of their representations as well as phenomena such as in-context
learning, which raise the abstraction level of interacting with such models.
Thirdly, the user expectations around these models and their feted public
releases have made the technical challenge of out of domain generalization much
less excusable in practice. Subsequently, our evaluation methodologies haven't
adapted to these changes. More concretely, while the associated utility and
methods of interacting with generative models have expanded, a similar
expansion has not been observed in their evaluation practices. In this paper,
we argue that the scale of generative models could be exploited to raise the
abstraction level at which evaluation itself is conducted and provide
recommendations for the same. Our recommendations are based on leveraging
specifications as a powerful instrument to evaluate generation quality and are
readily applicable to a variety of tasks.
"
Language model acceptability judgements are not always robust to context,Koustuv Sinha,http://arxiv.org/pdf/2212.08979v1.pdf,2022-12-18,"['cs.cl', 'cs.lg']",2212.08979v1.pdf,"  Targeted syntactic evaluations of language models ask whether models show
stable preferences for syntactically acceptable content over minimal-pair
unacceptable inputs. Most targeted syntactic evaluation datasets ask models to
make these judgements with just a single context-free sentence as input. This
does not match language models' training regime, in which input sentences are
always highly contextualized by the surrounding corpus. This mismatch raises an
important question: how robust are models' syntactic judgements in different
contexts? In this paper, we investigate the stability of language models'
performance on targeted syntactic evaluations as we vary properties of the
input context: the length of the context, the types of syntactic phenomena it
contains, and whether or not there are violations of grammaticality. We find
that model judgements are generally robust when placed in randomly sampled
linguistic contexts. However, they are substantially unstable for contexts
containing syntactic structures matching those in the critical test content.
Among all tested models (GPT-2 and five variants of OPT), we significantly
improve models' judgements by providing contexts with matching syntactic
structures, and conversely significantly worsen them using unacceptable
contexts with matching but violated syntactic structures. This effect is
amplified by the length of the context, except for unrelated inputs. We show
that these changes in model performance are not explainable by simple features
matching the context and the test inputs, such as lexical overlap and
dependency overlap. This sensitivity to highly specific syntactic features of
the context can only be explained by the models' implicit in-context learning
abilities.
"
Low-Resource Authorship Style Transfer: Can Non-Famous Authors Be  Imitated?,Ajay Patel,http://arxiv.org/pdf/2212.08986v2.pdf,2022-12-18,['cs.cl'],2212.08986v2.pdf,"  Authorship style transfer involves altering text to match the style of a
target author whilst preserving the original meaning. Existing unsupervised
approaches like STRAP have largely focused on style transfer to target authors
with many examples of their writing style in books, speeches, or other
published works. This high-resource training data requirement (often greater
than 100,000 words) makes these approaches primarily useful for style transfer
to published authors, politicians, or other well-known figures and authorship
styles, while style transfer to non-famous authors has not been well-studied.
We introduce the \textit{low-resource authorship style transfer} task, a more
challenging class of authorship style transfer where only a limited amount of
text in the target author's style may exist. In our experiments, we
specifically choose source and target authors from Reddit and style transfer
their Reddit posts, limiting ourselves to just 16 posts (on average ~500 words)
of the target author's style. Style transfer accuracy is typically measured by
how often a classifier or human judge will classify an output as written by the
target author. Recent authorship representation models excel at authorship
identification even with just a few writing samples, making automatic
evaluation of this task possible for the first time through evaluation metrics
we propose. Our results establish an in-context learning technique we develop
as the strongest baseline, though we find current approaches do not yet achieve
mastery of this challenging task. We release our data and implementations to
encourage further investigation.
"
Training Trajectories of Language Models Across Scales,Mengzhou Xia,http://arxiv.org/pdf/2212.09803v3.pdf,2022-12-19,"['cs.cl', 'cs.ai', 'cs.lg']",2212.09803v3.pdf,"  Scaling up language models has led to unprecedented performance gains, but
little is understood about how the training dynamics change as models get
larger. How do language models of different sizes learn during pre-training?
Why do larger language models demonstrate more desirable behaviors? In this
paper, we analyze the intermediate training checkpoints of differently sized
OPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-token
prediction, sequence-level generation, and downstream tasks. We find that 1) at
a given perplexity and independent of model sizes, a similar subset of training
tokens see the most significant reduction in loss, with the rest stagnating or
showing double-descent behavior; 2) early in training, all models learn to
reduce the perplexity of grammatical sequences that contain hallucinations,
with small models halting at this suboptimal distribution and larger ones
eventually learning to assign these sequences lower probabilities; 3)
perplexity is a strong predictor of in-context learning performance on 74
multiple-choice tasks from BIG-Bench, and this holds independent of the model
size. Together, these results show that perplexity is more predictive of model
behaviors than model size or training computation.
"
Dialog2API: Task-Oriented Dialogue with API Description and Example  Programs,Raphael Shu,http://arxiv.org/pdf/2212.09946v1.pdf,2022-12-20,['cs.cl'],2212.09946v1.pdf,"  Functionality and dialogue experience are two important factors of
task-oriented dialogue systems. Conventional approaches with closed schema
(e.g., conversational semantic parsing) often fail as both the functionality
and dialogue experience are strongly constrained by the underlying schema. We
introduce a new paradigm for task-oriented dialogue - Dialog2API - to greatly
expand the functionality and provide seamless dialogue experience. The
conversational model interacts with the environment by generating and executing
programs that trigger a set of pre-defined APIs. The model also manages the
dialogue policy and interacts with the user by generating appropriate
natural language responses. By allowing the generation of free-form programs,
Dialog2API supports composite goals by combining different APIs, whereas
unrestricted program revision provides natural and robust dialogue experience.
To facilitate Dialog2API, the core model is provided with API documents, an
execution environment and optionally some example dialogues annotated with
programs. We propose an approach tailored for Dialog2API, where the
dialogue state is represented by a stack of programs, with the most recently
mentioned program on top of the stack. Dialog2API can work with many
application scenarios such as software automation and customer service. In this
paper, we construct a dataset for AWS S3 APIs and present evaluation results of
in-context learning baselines.
"
HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot  Generalisation,Hamish Ivison,http://arxiv.org/pdf/2212.10315v2.pdf,2022-12-20,['cs.cl'],2212.10315v2.pdf,"  Recent NLP models have shown the remarkable ability to effectively generalise
`zero-shot' to new tasks using only natural language instructions as guidance.
However, many of these approaches suffer from high computational costs due to
their reliance on concatenating lengthy instructions with every input example,
resulting in costly reprocessing of the instruction. To avoid this, we
introduce Hypernetworks for INstruction Tuning (HINT), which convert task
instructions and examples into parameter-efficient modules inserted into an
underlying model using a pretrained text encoder, eliminating the need to
include instructions in the model input. The hypernetwork in HINT also produces
an encoded instruction, which we concatenate with encoded inputs during
decoding to further improve performance. HINT models outperform strong
state-of-the-art baselines by over 10% when controlling for compute (measured
in FLOPs). By converting instructions into modules, HINT models can effectively
disregard the length of instructions and few-shot example inputs in terms of
compute usage. As a result, HINT can enhance its performance by up to 25% by
incorporating additional few-shot data, while utilizing only up to 5% more
compute. This combines the strengths of parameter-efficient fine-tuning and
in-context learning.
"
Prompt-Augmented Linear Probing: Scaling beyond the Limit of Few-shot  In-Context Learners,Hyunsoo Cho,http://arxiv.org/pdf/2212.10873v3.pdf,2022-12-21,"['cs.cl', 'cs.lg']",2212.10873v3.pdf,"  Through in-context learning (ICL), large-scale language models are effective
few-shot learners without additional model fine-tuning. However, the ICL
performance does not scale well with the number of available training samples
as it is limited by the inherent input length constraint of the underlying
language model. Meanwhile, many studies have revealed that language models are
also powerful feature extractors, allowing them to be utilized in a black-box
manner and enabling the linear probing paradigm, where lightweight
discriminators are trained on top of the pre-extracted input representations.
This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear
probing and ICL, which leverages the best of both worlds. PALP inherits the
scalability of linear probing and the capability of enforcing language models
to derive more meaningful representations via tailoring input into a more
conceivable form. Throughout in-depth investigations on various datasets, we
verified that PALP significantly enhances the input representations closing the
gap between ICL in the data-hungry scenario and fine-tuning in the
data-abundant scenario with little training overhead, potentially making PALP a
strong alternative in a black-box scenario.
"
Parallel Context Windows for Large Language Models,Nir Ratner,http://arxiv.org/pdf/2212.10947v3.pdf,2022-12-21,['cs.cl'],2212.10947v3.pdf,"  When applied to processing long text, Large Language Models (LLMs) are
limited by their context window. Existing efforts to address this limitation
involve training specialized architectures, and cannot be easily applied to
off-the-shelf LLMs. We present Parallel Context Windows (PCW), a method that
alleviates the context window restriction for any off-the-shelf LLM without
further training. The key to the approach is to carve a long context into
chunks (``windows''), restrict the attention mechanism to apply only within
each window, and re-use the positional embeddings across the windows. Our main
results test the PCW approach on in-context learning with models that range in
size between 750 million and 178 billion parameters, and show substantial
improvements for tasks with diverse input and output spaces. We show additional
benefits in other settings where long context windows may be beneficial:
multi-hop questions and retrieval-augmented question answering with multiple
retrieved documents. Our results highlight Parallel Context Windows as a
promising method for applying off-the-shelf LLMs in a range of settings that
require long text sequences. We make our code publicly available at
https://github.com/ai21labs/parallel-context-windows.
"
Collaborating with language models for embodied reasoning,Ishita Dasgupta,http://arxiv.org/pdf/2302.00763v1.pdf,2023-02-01,"['cs.lg', 'cs.ai', 'cs.cl']",2302.00763v1.pdf,"  Reasoning in a complex and ambiguous environment is a key goal for
Reinforcement Learning (RL) agents. While some sophisticated RL agents can
successfully solve difficult tasks, they require a large amount of training
data and often struggle to generalize to new unseen environments and new tasks.
On the other hand, Large Scale Language Models (LSLMs) have exhibited strong
reasoning ability and the ability to adapt to new tasks through in-context
learning. However, LSLMs do not inherently have the ability to interrogate or
intervene on the environment. In this work, we investigate how to combine these
complementary abilities in a single system consisting of three parts: a
Planner, an Actor, and a Reporter. The Planner is a pre-trained language model
that can issue commands to a simple embodied agent (the Actor), while the
Reporter communicates with the Planner to inform its next command. We present a
set of tasks that require reasoning, test this system's ability to generalize
zero-shot and investigate failure cases, and demonstrate how components of this
system can be trained with reinforcement-learning to improve performance.
"
Controlling Personality Style in Dialogue with Zero-Shot Prompt-Based  Learning,Angela Ramirez,http://arxiv.org/pdf/2302.03848v1.pdf,2023-02-08,['cs.cl'],2302.03848v1.pdf,"  Prompt-based or in-context learning has achieved high zero-shot performance
on many natural language generation (NLG) tasks. Here we explore the
performance of prompt-based learning for simultaneously controlling the
personality and the semantic accuracy of an NLG for task-oriented dialogue. We
experiment with prompt-based learning on the PERSONAGE restaurant
recommendation corpus to generate semantically and stylistically-controlled
text for 5 different Big-5 personality types: agreeable, disagreeable,
conscientious, unconscientious, and extravert. We test two different classes of
discrete prompts to generate utterances for a particular personality style: (1)
prompts that demonstrate generating directly from a meaning representation that
includes a personality specification; and (2) prompts that rely on first
converting the meaning representation to a textual pseudo-reference, and then
using the pseudo-reference in a textual style transfer (TST) prompt. In each
case, we show that we can vastly improve performance by over-generating outputs
and ranking them, testing several ranking functions based on automatic metrics
for semantic accuracy, personality-match, and fluency. We also test whether NLG
personality demonstrations from the restaurant domain can be used with meaning
representations for the video game domain to generate personality stylized
utterances about video games. Our findings show that the TST prompts produce
the highest semantic accuracy (78.46% for restaurants and 87.6% for video
games) and personality accuracy (100% for restaurants and 97% for video games).
Our results on transferring personality style to video game utterances are
surprisingly good. To our knowledge, there is no previous work testing the
application of prompt-based learning to simultaneously controlling both style
and semantic accuracy in NLG.
"
Distinguishability Calibration to In-Context Learning,Hongjing Li,http://arxiv.org/pdf/2302.06198v3.pdf,2023-02-13,['cs.cl'],2302.06198v3.pdf,"  Recent years have witnessed increasing interests in prompt-based learning in
which models can be trained on only a few annotated instances, making them
suitable in low-resource settings. When using prompt-based learning for text
classification, the goal is to use a pre-trained language model (PLM) to
predict a missing token in a pre-defined template given an input text, which
can be mapped to a class label. However, PLMs built on the transformer
architecture tend to generate similar output embeddings, making it difficult to
discriminate between different class labels. The problem is further exacerbated
when dealing with classification tasks involving many fine-grained class
labels. In this work, we alleviate this information diffusion issue, i.e.,
different tokens share a large proportion of similar information after going
through stacked multiple self-attention layers in a transformer, by proposing a
calibration method built on feature transformations through rotation and
scaling to map a PLM-encoded embedding into a new metric space to guarantee the
distinguishability of the resulting embeddings. Furthermore, we take the
advantage of hyperbolic embeddings to capture the hierarchical relations among
fine-grained class-associated token embedding by a coarse-to-fine metric
learning strategy to enhance the distinguishability of the learned output
embeddings. Extensive experiments on the three datasets under various settings
demonstrate the effectiveness of our approach. Our code can be found at
https://github.com/donttal/TARA.
"
Do We Still Need Clinical Language Models?,Eric Lehman,http://arxiv.org/pdf/2302.08091v1.pdf,2023-02-16,['cs.cl'],2302.08091v1.pdf,"  Although recent advances in scaling large language models (LLMs) have
resulted in improvements on many NLP tasks, it remains unclear whether these
models trained primarily with general web text are the right tool in highly
specialized, safety critical domains such as clinical text. Recent results have
suggested that LLMs encode a surprising amount of medical knowledge. This
raises an important question regarding the utility of smaller domain-specific
language models. With the success of general-domain LLMs, is there still a need
for specialized clinical models? To investigate this question, we conduct an
extensive empirical analysis of 12 language models, ranging from 220M to 175B
parameters, measuring their performance on 3 different clinical tasks that test
their ability to parse and reason over electronic health records. As part of
our experiments, we train T5-Base and T5-Large models from scratch on clinical
notes from MIMIC III and IV to directly investigate the efficiency of clinical
tokens. We show that relatively small specialized clinical models substantially
outperform all in-context learning approaches, even when finetuned on limited
annotated data. Further, we find that pretraining on clinical tokens allows for
smaller, more parameter-efficient models that either match or outperform much
larger language models trained on general text. We release the code and the
models used under the PhysioNet Credentialed Health Data license and data use
agreement.
"
eP-ALM: Efficient Perceptual Augmentation of Language Models,Mustafa Shukor,http://arxiv.org/pdf/2303.11403v4.pdf,2023-03-20,"['cs.cv', 'cs.cl', 'cs.lg']",2303.11403v4.pdf,"  Large Language Models (LLMs) have so far impressed the world, with
unprecedented capabilities that emerge in models at large scales. On the vision
side, transformer models (i.e., ViT) are following the same trend, achieving
the best performance on challenging benchmarks. With the abundance of such
unimodal models, a natural question arises: do we also need to follow this
trend to tackle multimodal tasks? In this work, we propose instead to direct
effort toward efficient adaptations of existing models, and to augment
Language Models with perception. Existing approaches for adapting pretrained
models for vision-language tasks still rely on several key components that
hinder their efficiency. In particular, they still train a large number of
parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP)
trained on huge image-text datasets, and add significant inference overhead. In
addition, most of these approaches have focused on Zero-Shot and In-Context
Learning, with little to no effort on direct finetuning. We investigate the
minimal computational effort needed to adapt unimodal models for multimodal
tasks and propose a new challenging setup, alongside different approaches, that
efficiently adapts unimodal pretrained models. We show that by freezing more
than 99% of total parameters, training only one linear projection layer, and
prepending only one trainable token, our approach (dubbed eP-ALM) significantly
outperforms other baselines on VQA and Captioning across Image, Video, and
Audio modalities, following the proposed setup. The code is available here:
https://github.com/mshukor/eP-ALM.
"
Towards Making the Most of ChatGPT for Machine Translation,Keqin Peng,http://arxiv.org/pdf/2303.13780v4.pdf,2023-03-24,['cs.cl'],2303.13780v4.pdf,"  ChatGPT shows remarkable capabilities for machine translation (MT). Several
prior studies have shown that it achieves comparable results to commercial
systems for high-resource languages, but lags behind in complex tasks, e.g.,
low-resource and distant-language-pair translation. However, they usually
adopt simple prompts that cannot fully elicit the capability of ChatGPT. In
this paper, we aim to further mine ChatGPT's translation ability by revisiting
several aspects: temperature, task information, and domain information, and
correspondingly propose an optimal temperature setting and two (simple but
effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts
(DSP). We show that: 1) The performance of ChatGPT depends largely on
temperature, and a lower temperature usually can achieve better performance; 2)
Emphasizing the task information can further improve ChatGPT's performance,
particularly in complex MT tasks; 3) Introducing domain information can elicit
ChatGPT's generalization ability and improve its performance in the specific
domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT
tasks, which can be partially addressed by our proposed prompts but still need
to be highlighted for the MT/NLP community. We also explore the effects of
advanced in-context learning strategies and find a (negative but interesting)
observation: the powerful chain-of-thought prompt leads to word-by-word
translation behavior, thus bringing significant translation degradation.
"
$k$NN Prompting: Beyond-Context Learning with Calibration-Free Nearest  Neighbor Inference,Benfeng Xu,http://arxiv.org/pdf/2303.13824v1.pdf,2023-03-24,"['cs.cl', 'cs.ai']",2303.13824v1.pdf,"  In-Context Learning (ICL), which formulates target tasks as prompt completion
conditioned on in-context demonstrations, has become the prevailing utilization
of LLMs. In this paper, we first disclose an actual predicament of this
typical usage: it cannot scale up with training data due to the context length
restriction. Besides, existing works have shown that ICL also suffers from
various biases and requires delicate calibration treatment. To address both
challenges, we advocate a simple and effective solution, $k$NN Prompting, which
first queries LLM with training data for distributed representations, then
predicts test instances by simply referring to nearest neighbors. We conduct
comprehensive experiments to demonstrate its two-fold superiority: 1)
Calibration-Free: $k$NN Prompting does not directly align LLM output
distribution with task-specific label space, instead leverages such
distribution to align test and training instances. It significantly outperforms
state-of-the-art calibration-based methods under comparable few-shot scenario.
2) Beyond-Context: $k$NN Prompting can further scale up effectively with as
many training data as are available, continually bringing substantial
improvements. The scaling trend holds across 10 orders of magnitude ranging
from 2 shots to 1024 shots as well as different LLM scales ranging from 0.8B
to 30B. It successfully bridges data scaling into model scaling, and brings new
potentials for the gradient-free paradigm of LLM deployment. Code is publicly
available.
"
Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender  System,Yunfan Gao,http://arxiv.org/pdf/2303.14524v2.pdf,2023-03-25,"['cs.ir', 'cs.cl', 'cs.lg']",2303.14524v2.pdf,"  Large language models (LLMs) have demonstrated their significant potential to
be applied for addressing various application tasks. However, traditional
recommender systems continue to face great challenges such as poor
interactivity and explainability, which actually also hinder their broad
deployment in real-world systems. To address these limitations, this paper
proposes a novel paradigm called Chat-Rec (ChatGPT Augmented Recommender
System) that innovatively augments LLMs for building conversational recommender
systems by converting user profiles and historical interactions into prompts.
Chat-Rec is demonstrated to be effective in learning user preferences and
establishing connections between users and products through in-context
learning, which also makes the recommendation process more interactive and
explainable. What's more, within the Chat-Rec framework, user's preferences can
transfer to different products for cross-domain recommendations, and
prompt-based injection of information into LLMs can also handle the cold-start
scenarios with new items. In our experiments, Chat-Rec effectively improves the
results of top-k recommendations and performs better on the zero-shot rating
prediction task. Chat-Rec offers a novel approach to improving recommender
systems and presents new practical scenarios for the implementation of AIGC (AI
generated content) in recommender system studies.
"
What Makes Good In-context Demonstrations for Code Intelligence Tasks  with LLMs?,Shuzheng Gao,http://arxiv.org/pdf/2304.07575v2.pdf,2023-04-15,['cs.se'],2304.07575v2.pdf,"  Pre-trained models of source code have gained widespread popularity in many
code intelligence tasks. Recently, with the scaling of the model and corpus
size, large language models have shown the ability of in-context learning
(ICL). ICL employs task instructions and a few examples as demonstrations, and
then inputs the demonstrations to the language models for making predictions.
This new learning paradigm is training-free and has shown impressive
performance in various natural language processing and code intelligence tasks.
However, the performance of ICL heavily relies on the quality of
demonstrations, e.g., the selected examples. It is important to systematically
investigate how to construct a good demonstration for code-related tasks. In
this paper, we empirically explore the impact of three key factors on the
performance of ICL in code intelligence tasks: the selection, order, and number
of demonstration examples. We conduct extensive experiments on three code
intelligence tasks including code summarization, bug fixing, and program
synthesis. Our experimental results demonstrate that all the above three
factors dramatically impact the performance of ICL in code intelligence tasks.
Additionally, we summarize our findings and provide takeaway suggestions on how
to construct effective demonstrations, taking into account these three
perspectives. We also show that a carefully-designed demonstration based on our
findings can lead to substantial improvements over widely-used demonstration
construction methods, e.g., improving BLEU-4, EM, and EM by at least 9.90%,
175.96%, and 50.81% on code summarization, bug fixing, and program synthesis,
respectively.
"
Sparks of GPTs in Edge Intelligence for Metaverse: Caching and Inference  for Mobile AIGC Services,Minrui Xu,http://arxiv.org/pdf/2304.08782v2.pdf,2023-04-18,['cs.ni'],2304.08782v2.pdf,"  Aiming at achieving artificial general intelligence (AGI) for Metaverse,
pretrained foundation models (PFMs), e.g., generative pretrained transformers
(GPTs), can effectively provide various AI services, such as autonomous
driving, digital twins, and AI-generated content (AIGC) for extended reality.
With the advantages of low latency and privacy preservation, serving PFMs for
mobile AI services via edge intelligence is a viable solution for caching and
executing PFMs on edge servers with limited computing resources and GPU memory.
However, PFMs typically consist of billions of parameters that are computation
and memory-intensive for edge servers during loading and execution. In this
article, we investigate edge PFM serving problems for mobile AIGC services of
Metaverse. First, we introduce the fundamentals of PFMs and discuss their
characteristic fine-tuning and inference methods in edge intelligence. Then, we
propose a novel framework of joint model caching and inference for managing
models and allocating resources to satisfy users' requests efficiently.
Furthermore, considering the in-context learning ability of PFMs, we propose a
new metric to evaluate the freshness and relevance between examples in
demonstrations and executing tasks, namely the Age of Context (AoC). Finally,
we propose a least context algorithm for managing cached models at edge servers
by balancing the tradeoff among latency, energy consumption, and accuracy.
"
Controlled Text Generation with Natural Language Instructions,Wangchunshu Zhou,http://arxiv.org/pdf/2304.14293v2.pdf,2023-04-27,"['cs.cl', 'cs.ai', 'cs.lg']",2304.14293v2.pdf,"  Large language models generate fluent texts and can follow natural language
instructions to solve a wide range of tasks without task-specific training.
Nevertheless, it is notoriously difficult to control their generation to
satisfy the various constraints required by different applications. In this
work, we present InstructCTG, a controlled text generation framework that
incorporates different constraints by conditioning on natural language
descriptions and demonstrations of the constraints. In particular, we first
extract the underlying constraints of natural texts through a combination of
off-the-shelf NLP tools and simple heuristics. We then verbalize the
constraints into natural language instructions to form weakly supervised
training data. By prepending natural language descriptions of the constraints
and a few demonstrations, we fine-tune a pre-trained language model to
incorporate various types of constraints. Compared to existing search-based or
score-based methods, InstructCTG is more flexible to different constraint types
and has a much smaller impact on the generation quality and speed because it
does not modify the decoding procedure. Additionally, InstructCTG allows the
model to adapt to new constraints without re-training through the use of
few-shot task generalization and in-context learning abilities of
instruction-tuned language models.
"
TALLRec: An Effective and Efficient Tuning Framework to Align Large  Language Model with Recommendation,Keqin Bao,http://arxiv.org/pdf/2305.00447v3.pdf,2023-04-30,['cs.ir'],2305.00447v3.pdf,"  Large Language Models (LLMs) have demonstrated remarkable performance across
diverse domains, thereby prompting researchers to explore their potential for
use in recommendation systems. Initial attempts have leveraged the exceptional
capabilities of LLMs, such as rich knowledge and strong generalization through
In-context Learning, which involves phrasing the recommendation task as
prompts. Nevertheless, the performance of LLMs in recommendation tasks remains
suboptimal due to a substantial disparity between the training tasks for LLMs
and recommendation tasks, as well as inadequate recommendation data during
pre-training. To bridge the gap, we consider building a Large Recommendation
Language Model by tuning LLMs with recommendation data. To this end, we
propose an efficient and effective Tuning framework for Aligning LLMs with
Recommendation, namely TALLRec. We have demonstrated that the proposed TALLRec
framework can significantly enhance the recommendation capabilities of LLMs in
the movie and book domains, even with a limited dataset of fewer than 100
samples. Additionally, the proposed framework is highly efficient and can be
executed on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM
exhibits robust cross-domain generalization. Our code and data are available at
https://github.com/SAI990323/TALLRec.
"
Cognitive Reframing of Negative Thoughts through Human-Language Model  Interaction,Ashish Sharma,http://arxiv.org/pdf/2305.02466v1.pdf,2023-05-04,"['cs.cl', 'cs.hc', 'cs.si']",2305.02466v1.pdf,"  A proven therapeutic technique to overcome negative thoughts is to replace
them with a more hopeful ""reframed thought."" Although therapy can help people
practice and learn this Cognitive Reframing of Negative Thoughts, clinician
shortages and mental health stigma commonly limit people's access to therapy.
In this paper, we conduct a human-centered study of how language models may
assist people in reframing negative thoughts. Based on psychology literature,
we define a framework of seven linguistic attributes that can be used to
reframe a thought. We develop automated metrics to measure these attributes and
validate them with expert judgements from mental health practitioners. We
collect a dataset of 600 situations, thoughts and reframes from practitioners
and use it to train a retrieval-enhanced in-context learning model that
effectively generates reframed thoughts and controls their linguistic
attributes. To investigate what constitutes a ""high-quality"" reframe, we
conduct an IRB-approved randomized field study on a large mental health website
with over 2,000 participants. Amongst other findings, we show that people
prefer highly empathic or specific reframes, as opposed to reframes that are
overly positive. Our findings provide key implications for the use of LMs to
assist people in overcoming negative thoughts.
"
Using ChatGPT for Entity Matching,Ralph Peeters,http://arxiv.org/pdf/2305.03423v2.pdf,2023-05-05,['cs.cl'],2305.03423v2.pdf,"  Entity Matching is the task of deciding if two entity descriptions refer to
the same real-world entity. State-of-the-art entity matching methods often rely
on fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacks
of using these models for entity matching are that (i) the models require
significant amounts of fine-tuning data for reaching a good performance and
(ii) the fine-tuned models are not robust concerning out-of-distribution
entities. In this paper, we investigate using ChatGPT for entity matching as a
more robust, training data-efficient alternative to traditional Transformer
models. We perform experiments along three dimensions: (i) general prompt
design, (ii) in-context learning, and (iii) provision of higher-level matching
knowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model,
reaching a zero-shot performance of 82.35% F1 on a challenging matching task on
which RoBERTa requires 2000 training examples for reaching a similar
performance. Adding in-context demonstrations to the prompts further improves
the F1 by up to 7.85% when using similarity-based example selection. Always
using the same set of 10 handpicked demonstrations leads to an improvement of
4.92% over the zero-shot performance. Finally, we show that ChatGPT can also be
guided by adding higher-level matching knowledge in the form of rules to the
prompts. Providing matching rules leads to similar performance gains as
providing in-context demonstrations.
"
Multilingual LLMs are Better Cross-lingual In-context Learners with  Alignment,Eshaan Tanwar,http://arxiv.org/pdf/2305.05940v3.pdf,2023-05-10,['cs.cl'],2305.05940v3.pdf,"  In-context learning (ICL) unfolds as large language models become capable of
inferring test labels conditioned on a few labeled samples without any gradient
update. ICL-enabled large language models provide a promising step forward
toward bypassing recurrent annotation costs in a low-resource setting. Yet,
only a handful of past studies have explored ICL in a cross-lingual setting, in
which the need for transferring label-knowledge from a high-resource language
to a low-resource one is immensely crucial. To bridge the gap, we provide the
first in-depth analysis of ICL for cross-lingual text classification. We find
that the prevalent mode of selecting random input-label pairs to construct the
prompt-context is severely limited in the case of cross-lingual ICL, primarily
due to the lack of alignment in the input as well as the output spaces. To
mitigate this, we propose a novel prompt construction strategy -- Cross-lingual
In-context Source-Target Alignment (X-InSTA). With an injected coherence in the
semantics of the input examples and a task-based alignment across the source
and target languages, X-InSTA is able to outperform random prompt selection by
a large margin across three different tasks using 44 different cross-lingual
pairs.
"
Can Language Models Solve Graph Problems in Natural Language?,Heng Wang,http://arxiv.org/pdf/2305.10037v2.pdf,2023-05-17,"['cs.cl', 'cs.ai']",2305.10037v2.pdf,"  Large language models (LLMs) are increasingly adopted for a variety of tasks
with implicit graphical structures, such as planning in robotics, multi-hop
question answering or knowledge probing, structured commonsense reasoning, and
more. While LLMs have advanced the state-of-the-art on these tasks with
structure implications, whether LLMs could explicitly process textual
descriptions of graphs and structures, map them to grounded conceptual spaces,
and perform structured operations remains underexplored. To this end, we
propose NLGraph (Natural Language Graph), a comprehensive benchmark of
graph-based problem solving designed in natural language. NLGraph contains
29,370 problems, covering eight graph reasoning tasks with varying complexity
from simple tasks such as connectivity and shortest path up to complex problems
such as maximum flow and simulating graph neural networks. We evaluate LLMs
(GPT-3/4) with various prompting approaches on the NLGraph benchmark and find
that 1) language models do demonstrate preliminary graph reasoning abilities,
2) the benefit of advanced prompting and in-context learning diminishes on more
complex graph problems, while 3) LLMs are also (un)surprisingly brittle in the
face of spurious correlations in graph and problem settings. We then propose
Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based
approaches to enhance LLMs in solving natural language graph problems.
Build-a-Graph and Algorithmic prompting improve the performance of LLMs on
NLGraph by 3.07% to 16.85% across multiple tasks and settings, while how to
solve the most complicated graph reasoning tasks in our setup with language
models remains an open research question. The NLGraph benchmark and evaluation
code are available at https://github.com/Arthur-Heng/NLGraph.
"
Joint Foundation Model Caching and Inference of Generative AI Services  for Edge Intelligence,Minrui Xu,http://arxiv.org/pdf/2305.12130v1.pdf,2023-05-20,['cs.ni'],2305.12130v1.pdf,"  With the rapid development of artificial general intelligence (AGI), various
multimedia services based on pretrained foundation models (PFMs) need to be
effectively deployed. With edge servers that have cloud-level computing power,
edge intelligence can extend the capabilities of AGI to mobile edge networks.
However, compared with cloud data centers, resource-limited edge servers can
only cache and execute a small number of PFMs, which typically consist of
billions of parameters and require intensive computing power and GPU memory
during inference. To address this challenge, in this paper, we propose a joint
foundation model caching and inference framework that aims to balance the
tradeoff among inference latency, accuracy, and resource consumption by
managing cached PFMs and user requests efficiently during the provisioning of
generative AI services. Specifically, considering the in-context learning
ability of PFMs, a new metric named the Age of Context (AoC) is proposed to
model the freshness and relevance between examples in past demonstrations and
current service requests. Based on the AoC, we propose a least context caching
algorithm to manage cached PFMs at edge servers with historical prompts and
inference results. The numerical results demonstrate that the proposed
algorithm can reduce system costs compared with existing baselines by
effectively utilizing contextual information.
"
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A  Study on Prompt Design Strategies,Linyong Nan,http://arxiv.org/pdf/2305.12586v1.pdf,2023-05-21,['cs.cl'],2305.12586v1.pdf,"  In-context learning (ICL) has emerged as a new approach to various natural
language processing tasks, utilizing large language models (LLMs) to make
predictions based on context that has been supplemented with a few examples or
task-specific instructions. In this paper, we aim to extend this method to
question answering tasks that utilize structured knowledge sources, and improve
Text-to-SQL systems by exploring various prompt design strategies for employing
LLMs. We conduct a systematic investigation into different demonstration
selection methods and optimal instruction formats for prompting LLMs in the
Text-to-SQL task. Our approach involves leveraging the syntactic structure of
an example's SQL query to retrieve demonstrations, and we demonstrate that
pursuing both diversity and similarity in demonstration selection leads to
enhanced performance. Furthermore, we show that LLMs benefit from
database-related knowledge augmentations. Our most effective strategy
outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and
the best fine-tuned system by 5.1 points on the Spider dataset. These results
highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL
task, and we present an analysis of the factors contributing to the success of
our strategy.
"
Exploring Chain-of-Thought Style Prompting for Text-to-SQL,Chang-You Tai,http://arxiv.org/pdf/2305.14215v2.pdf,2023-05-23,['cs.cl'],2305.14215v2.pdf,"  In-context learning with large language models (LLMs) has recently caught
increasing attention due to its superior few-shot performance on various tasks.
However, its performance on text-to-SQL parsing still has much room for
improvement. In this paper, we hypothesize that a crucial aspect of LLMs to
improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we
systematically study how to enhance LLMs' reasoning ability through chain of
thought (CoT) style prompting, including the original chain-of-thought
prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023).
Our experiments demonstrate that iterative prompting as in Zhou et al. (2023)
may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps
tends to have more error propagation issues. Based on these findings, we
propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2
and 6.5 point absolute gains on the Spider development set and the Spider
Realistic set, respectively, compared to the standard prompting method without
reasoning steps; 2.4 and 1.5 point absolute gains, compared to the
least-to-most prompting method.
"
Sociocultural Norm Similarities and Differences via Situational  Alignment and Explainable Textual Entailment,Sky CH-Wang,http://arxiv.org/pdf/2305.14492v2.pdf,2023-05-23,['cs.cl'],2305.14492v2.pdf,"  Designing systems that can reason across cultures requires that they are
grounded in the norms of the contexts in which they operate. However, current
research on developing computational models of social norms has primarily
focused on American society. Here, we propose a novel approach to discover and
compare descriptive social norms across Chinese and American cultures. We
demonstrate our approach by leveraging discussions on a Chinese Q&A platform
(Zhihu) and the existing SocialChemistry dataset as proxies for contrasting
cultural axes, align social situations cross-culturally, and extract social
norms from texts using in-context learning. Embedding Chain-of-Thought
prompting in a human-AI collaborative framework, we build a high-quality
dataset of 3,069 social norms aligned with social situations across Chinese and
American cultures alongside corresponding free-text explanations. To test the
ability of models to reason about social norms across cultures, we introduce
the task of explainable social norm entailment, showing that existing models
under 3B parameters have significant room for improvement in both automatic and
human evaluation. Further analysis of cross-cultural norm differences based on
our dataset shows empirical alignment with the social orientations framework,
revealing several situational and descriptive nuances in norms across these
cultures.
"
Increasing Probability Mass on Answer Choices Does Not Always Improve  Accuracy,Sarah Wiegreffe,http://arxiv.org/pdf/2305.14596v2.pdf,2023-05-24,"['cs.cl', 'cs.lg']",2305.14596v2.pdf,"  When pretrained language models (LMs) are applied to discriminative tasks
such as multiple-choice questions, they place probability mass on vocabulary
tokens that aren't among the given answer choices. Spreading probability mass
across multiple surface forms with identical meaning (such as ""bath"" and
""bathtub"") is thought to cause an underestimation of a model's true
performance, referred to as the ""surface form competition"" (SFC) hypothesis.
This has motivated the introduction of various probability normalization
methods. However, many core questions remain unanswered. How do we measure SFC?
Are there direct ways of reducing it, and does doing so improve task
performance?
  We propose a mathematical formalism for SFC which allows us to quantify and
bound its impact for the first time. We identify a simple method for reducing
it -- namely, increasing probability mass on the given answer choices by a)
including them in the prompt and b) using in-context learning with even just
one example. We show this method eliminates the impact of SFC in the majority
of instances. Our experiments on three diverse datasets and six LMs reveal
several additional surprising findings. For example, both normalization and
prompting methods for reducing SFC can be ineffective or even detrimental to
task performance for some LMs. We conclude with practical insights for
effectively prompting LMs for multiple-choice tasks.
"
Universal Self-Adaptive Prompting,Xingchen Wan,http://arxiv.org/pdf/2305.14926v2.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.lg']",2305.14926v2.pdf,"  A hallmark of modern large language models (LLMs) is their impressive general
zero-shot and few-shot abilities, often elicited through in-context learning
(ICL) via prompting. However, while highly coveted and being the most general,
zero-shot performances in LLMs are still typically weaker due to the lack of
guidance and the difficulty of applying existing automatic prompt design
methods in general tasks when ground-truth labels are unavailable. In this
study, we address this by presenting Universal Self-Adaptive Prompting (USP),
an automatic prompt design approach specifically tailored for zero-shot
learning (while compatible with few-shot). Requiring only a small amount of
unlabeled data and an inference-only LLM, USP is highly versatile: to achieve
universal prompting, USP categorizes a possible NLP task into one of the three
possible task types and then uses a corresponding selector to select the most
suitable queries and zero-shot model-generated responses as
pseudo-demonstrations, thereby generalizing ICL to the zero-shot setup in a
fully automated way. We evaluate USP with PaLM and PaLM 2 models and
demonstrate performances that are considerably stronger than standard zero-shot
baselines and often comparable to or even superior to few-shot baselines across
more than 40 natural language understanding, natural language generation, and
reasoning tasks.
"
Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation  into Input Regurgitation and Prompt-Induced Sanitization,Aman Priyanshu,http://arxiv.org/pdf/2305.15008v1.pdf,2023-05-24,"['cs.cl', 'cs.ai', 'cs.cy']",2305.15008v1.pdf,"  LLM-powered chatbots are becoming widely adopted in applications such as
healthcare, personal assistants, industry hiring decisions, etc. In many of
these cases, chatbots are fed sensitive, personal information in their prompts,
as samples for in-context learning, retrieved records from a database, or as
part of the conversation. The information provided in the prompt could directly
appear in the output, which might have privacy ramifications if there is
sensitive information there. As such, in this paper, we aim to understand the
input copying and regurgitation capabilities of these models during inference
and how they can be directly instructed to limit this copying by complying with
regulations such as HIPAA and GDPR, based on their internal knowledge of them.
More specifically, we find that when ChatGPT is prompted to summarize the cover
letters of 100 candidates, it retains personally identifiable
information (PII) verbatim in 57.4% of cases, and we find this retention to be
non-uniform between different subgroups of people, based on attributes such as
gender identity. We then probe ChatGPT's perception of privacy-related policies
and privatization mechanisms by directly instructing it to provide compliant
outputs, and observe a significant omission of PII from the output.
"
Fine-Tuning Language Models with Just Forward Passes,Sadhika Malladi,http://arxiv.org/pdf/2305.17333v2.pdf,2023-05-27,"['cs.lg', 'cs.cl']",2305.17333v2.pdf,"  Fine-tuning language models (LMs) has yielded success on diverse downstream
tasks, but as LMs grow in size, backpropagation requires a prohibitively large
amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients
using only two forward passes but are theorized to be catastrophically slow for
optimizing large models. In this work, we propose a memory-efficient
zeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate
in-place, thereby fine-tuning LMs with the same memory footprint as inference.
For example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter
model, whereas fine-tuning with backpropagation can train only a 2.7B LM with
the same budget. We conduct comprehensive experiments across model types
(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks
(classification, multiple-choice, and generation). Our results demonstrate that
(1) MeZO significantly outperforms in-context learning and linear probing; (2)
MeZO achieves comparable performance to fine-tuning with backpropagation across
multiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction
in our implementation; (3) MeZO is compatible with both full-parameter and
parameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO
can effectively optimize non-differentiable objectives (e.g., maximizing
accuracy or F1). We support our empirical findings with theoretical insights,
highlighting how adequate pre-training and task prompts enable MeZO to
fine-tune huge models, despite classical ZO analyses suggesting otherwise.
"
Do Large Language Models Know What They Don't Know?,Zhangyue Yin,http://arxiv.org/pdf/2305.18153v2.pdf,2023-05-29,['cs.cl'],2305.18153v2.pdf,"  Large language models (LLMs) have a wealth of knowledge that allows them to
excel in various Natural Language Processing (NLP) tasks. Current research
focuses on enhancing their performance within their existing knowledge. Despite
their vast knowledge, LLMs are still limited by the amount of information they
can accommodate and comprehend. Therefore, the ability to understand their own
limitations on the unknowns, referred to as self-knowledge, is of paramount
importance. This study aims to evaluate LLMs' self-knowledge by assessing their
ability to identify unanswerable or unknowable questions. We introduce an
automated methodology to detect uncertainty in the responses of these models,
providing a novel measure of their self-knowledge. We further introduce a
unique dataset, SelfAware, consisting of unanswerable questions from five
diverse categories and their answerable counterparts. Our extensive analysis,
involving 20 LLMs including GPT-3, InstructGPT, and LLaMA, reveals an
intrinsic capacity for self-knowledge within these models. Moreover, we
demonstrate that in-context learning and instruction tuning can further enhance
this self-knowledge. Despite this promising insight, our findings also
highlight a considerable gap between the capabilities of these models and human
proficiency in recognizing the limits of their knowledge.
"
Improving CLIP Training with Language Rewrites,Lijie Fan,http://arxiv.org/pdf/2305.20088v2.pdf,2023-05-31,"['cs.cv', 'cs.cl', 'cs.lg']",2305.20088v2.pdf,"  Contrastive Language-Image Pre-training (CLIP) stands as one of the most
effective and scalable methods for training transferable vision models using
paired image and text data. CLIP models are trained using contrastive loss,
which typically relies on data augmentations to prevent overfitting and
shortcuts. However, in the CLIP training paradigm, data augmentations are
exclusively applied to image inputs, while language inputs remain unchanged
throughout the entire training process, limiting the exposure of diverse texts
to the same image. In this paper, we introduce Language augmented CLIP
(LaCLIP), a simple yet highly effective approach to enhance CLIP training
through language rewrites. Leveraging the in-context learning capability of
large language models, we rewrite the text descriptions associated with each
image. These rewritten texts exhibit diversity in sentence structure and
vocabulary while preserving the original key concepts and meanings. During
training, LaCLIP randomly selects either the original texts or the rewritten
versions as text augmentations for each image. Extensive experiments on CC3M,
CC12M, RedCaps and LAION-400M datasets show that CLIP pre-training with
language rewrites significantly improves the transfer performance without
computation or memory overhead during training. Specifically for ImageNet
zero-shot accuracy, LaCLIP outperforms CLIP by 8.2% on CC12M and 2.4% on
LAION-400M. Code is available at https://github.com/LijieFan/LaCLIP.
"
SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL,Ruoxi Sun,http://arxiv.org/pdf/2306.00739v3.pdf,2023-05-26,"['cs.cl', 'cs.ai', 'cs.db']",2306.00739v3.pdf,"  One impressive emergent capability of large language models (LLMs) is
generation of code, including Structured Query Language (SQL) for databases.
For the task of converting natural language text to SQL queries, Text-to-SQL,
adaptation of LLMs is of paramount importance, both in in-context learning and
fine-tuning settings, depending on the amount of adaptation data used. In this
paper, we propose SQL-PaLM, an LLM-based Text-to-SQL model that leverages
PaLM-2 and pushes the state-of-the-art in both settings. Few-shot SQL-PaLM is
based on an execution-based self-consistency prompting approach designed for
Text-to-SQL, and achieves 77.3% in test-suite accuracy on Spider, which to our
best knowledge is the first to outperform previous state-of-the-art with
fine-tuning by a significant margin, 4%. Furthermore, we demonstrate that the
fine-tuned SQL-PaLM outperforms it by a further 1%. Towards applying
SQL-PaLM to real-world scenarios we further evaluate its robustness on other
challenging variants of Spider and demonstrate the superior generalization
capability of SQL-PaLM. In addition, via extensive case studies, we demonstrate
the impressive intelligent capabilities and various success enablers of
LLM-based Text-to-SQL.
"
Zero-Shot 3D Shape Correspondence,Ahmed Abdelreheem,http://arxiv.org/pdf/2306.03253v2.pdf,2023-06-05,['cs.cv'],2306.03253v2.pdf,"  We propose a novel zero-shot approach to computing correspondences between 3D
shapes. Existing approaches mainly focus on isometric and near-isometric shape
pairs (e.g., human vs. human), but less attention has been given to strongly
non-isometric and inter-class shape matching (e.g., human vs. cow). To this
end, we introduce a fully automatic method that exploits the exceptional
reasoning capabilities of recent foundation models in language and vision to
tackle difficult shape correspondence problems. Our approach comprises multiple
stages. First, we classify the 3D shapes in a zero-shot manner by feeding
rendered shape views to a language-vision model (e.g., BLIP2) to generate a
list of class proposals per shape. These proposals are unified into a single
class per shape by employing the reasoning capabilities of ChatGPT. Second, we
attempt to segment the two shapes in a zero-shot manner, but in contrast to the
co-segmentation problem, we do not require a mutual set of semantic regions.
Instead, we propose to exploit the in-context learning capabilities of ChatGPT
to generate two different sets of semantic regions for each shape and a
semantic mapping between them. This enables our approach to match strongly
non-isometric shapes with significant differences in geometric structure.
Finally, we employ the generated semantic mapping to produce coarse
correspondences that can further be refined by the functional maps framework to
produce dense point-to-point maps. Our approach, despite its simplicity,
produces highly plausible results in a zero-shot manner, especially between
strongly non-isometric shapes. Project webpage:
https://samir55.github.io/3dshapematch/.
"
MIMIC-IT: Multi-Modal In-Context Instruction Tuning,Bo Li,http://arxiv.org/pdf/2306.05425v1.pdf,2023-06-08,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.hc']",2306.05425v1.pdf,"  High-quality instructions and responses are essential for the zero-shot
performance of large language models on interactive natural language tasks. For
interactive vision-language tasks involving intricate visual scenes, a large
quantity of diverse and creative instruction-response pairs is
essential for tuning vision-language models (VLMs). Nevertheless, the current
availability of vision-language instruction-response pairs in terms of
quantity, diversity, and creativity remains limited, posing challenges to the
generalization of interactive VLMs. Here we present MultI-Modal In-Context
Instruction Tuning (MIMIC-IT), a dataset comprising 2.8 million multimodal
instruction-response pairs, with 2.2 million unique instructions derived from
images and videos. Each pair is accompanied by multi-modal in-context
information, forming conversational contexts aimed at empowering VLMs in
perception, reasoning, and planning. The instruction-response collection
process, dubbed as Syphus, is scaled using an automatic annotation pipeline
that combines human expertise with GPT's capabilities. Using the MIMIC-IT
dataset, we train a large VLM named Otter. Based on extensive evaluations
conducted on vision-language benchmarks, it has been observed that Otter
demonstrates remarkable proficiency in multi-modal perception, reasoning, and
in-context learning. Human evaluation reveals it effectively aligns with the
user's intentions. We release the MIMIC-IT dataset, instruction-response
collection pipeline, benchmarks, and the Otter model.
"
MedFMC: A Real-world Dataset and Benchmark For Foundation Model  Adaptation in Medical Image Classification,Dequan Wang,http://arxiv.org/pdf/2306.09579v1.pdf,2023-06-16,['cs.cv'],2306.09579v1.pdf,"  Foundation models, often pre-trained with large-scale data, have achieved
paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks
efficiently using only a few training samples, e.g., in-context learning. Yet,
the application of such learning paradigms in medical image analysis remains
scarce due to the shortage of publicly accessible data and benchmarks. In this
paper, we study approaches for adapting foundation models to medical image
classification and present a novel dataset and benchmark for the evaluation,
i.e., examining the overall performance of accommodating the large-scale
foundation models downstream on a set of diverse real-world clinical tasks. We
collect five sets of medical imaging data from multiple institutes targeting a
variety of real-world clinical tasks (22,349 images in total), i.e., thoracic
diseases screening in X-rays, pathological lesion tissue screening, lesion
detection in endoscopy images, neonatal jaundice evaluation, and diabetic
retinopathy grading. Results of multiple baseline methods are demonstrated
using the proposed dataset from both accuracy and cost-effective perspectives.
"
JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for  Multi-task Mathematical Problem Solving,Wayne Xin Zhao,http://arxiv.org/pdf/2306.11027v1.pdf,2023-06-19,"['cs.cl', 'cs.ai']",2306.11027v1.pdf,"  Although pre-trained language models~(PLMs) have recently advanced the
research progress in mathematical reasoning, they are not specially designed as
a capable multi-task solver, suffering from high cost for multi-task deployment
(e.g., a model copy for a task) and inferior performance on complex mathematical
problems in practical applications. To address these issues, in this paper, we
propose JiuZhang 2.0, a unified Chinese PLM specially designed for multi-task
mathematical problem solving. Our idea is to maintain a moderate-sized model
and employ cross-task knowledge sharing to improve the model
capacity in a multi-task setting. Specifically, we construct a
Mixture-of-Experts (MoE) architecture for modeling mathematical text, so as to
capture the common mathematical knowledge across tasks. For optimizing the MoE
architecture, we design multi-task continual pre-training and
multi-task fine-tuning strategies for multi-task adaptation. These
training strategies can effectively decompose the knowledge from the task data
and establish cross-task sharing via expert networks. In order to further
improve the general capacity of solving different complex tasks, we leverage
large language models (LLMs) as complementary models to iteratively refine the
generated solution by our PLM, via in-context learning. Extensive experiments
have demonstrated the effectiveness of our model.
"
A Chain of AI-based Solutions for Resolving FQNs and Fixing Syntax  Errors in Partial Code,Qing Huang,http://arxiv.org/pdf/2306.11981v1.pdf,2023-06-21,['cs.se'],2306.11981v1.pdf,"  API documentation, technical blogs and programming Q&A sites contain numerous
partial code snippets that can be reused in programming tasks, but this code is
often uncompilable due to unresolved names and syntax errors. To facilitate partial
code reuse, we propose the Partial Code Reuse Chain (PCR-Chain) for resolving
fully-qualified names (FQNs) and fixing last-mile syntax errors in partial code
based on a large language model (LLM) such as ChatGPT. Methodologically,
PCR-Chain is backed up by the underlying global-level prompt architecture
(which combines three design ideas: hierarchical task breakdown, prompt
composition, and a mix of prompt-based AI and non-AI units) and the local-level
prompt design. Technically, we propose PCR-Chain, which employs in-context
learning rather than symbolic, costly training methods. Experimental results
demonstrate that in dynamically-typed languages (Python), PCR-Chain outperforms
current state-of-the-art (SOTA) methods such as RING by 5% in accuracy. For statically-typed
languages (Java), our approach achieves a high accuracy of 80.5% in resolving
both non-FQNs and last-mile syntax errors, surpassing SOTA methods (RING) that
can only address last-mile syntax errors. The correct execution of the unit,
module, and PCR-Chain demonstrates the effectiveness of the prompt design,
composition, and architecture and opens up possibilities for building software
engineering tools based on LLMs, replacing traditional program analysis
methods.
"
Generative Multimodal Entity Linking,Senbao Shi,http://arxiv.org/pdf/2306.12725v2.pdf,2023-06-22,['cs.cl'],2306.12725v2.pdf,"  Multimodal Entity Linking (MEL) is the task of mapping mentions with
multimodal contexts to the referent entities from a knowledge base (e.g.
Wikipedia). Existing MEL methods mainly focus on designing complex multimodal
interaction mechanisms and require fine-tuning all model parameters, which can
be prohibitively costly and difficult to scale in the era of Large Language
Models (LLMs). In this work, we propose GEMEL, a simple yet effective
Generative Multimodal Entity Linking framework based on LLMs, which directly
generates target entity names. We keep the vision and language model frozen and
only train a feature mapper to enable cross-modality interactions. To adapt
LLMs to the MEL task, we take advantage of the emergent in-context learning
capability of LLMs by retrieving multimodal instances as demonstrations.
Extensive experiments show that, with only ~0.3% of the model parameters
fine-tuned, GEMEL achieves state-of-the-art results on two well-established MEL
datasets (7.7% accuracy gains on WikiDiverse and 8.8% accuracy gains on
WikiMEL). The performance gain stems from mitigating the popularity bias of LLM
predictions and disambiguating less common entities effectively. Further
analysis verifies the generality and scalability of GEMEL. Our approach is
compatible with any off-the-shelf language model, paving the way towards an
efficient and general solution for utilizing LLMs in the MEL task.
"
Kosmos-2: Grounding Multimodal Large Language Models to the World,Zhiliang Peng,http://arxiv.org/pdf/2306.14824v3.pdf,2023-06-26,"['cs.cl', 'cs.cv']",2306.14824v3.pdf,"  We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2.
"
Supervised Pretraining Can Learn In-Context Reinforcement Learning,Jonathan N. Lee,http://arxiv.org/pdf/2306.14892v1.pdf,2023-06-26,"['cs.lg', 'cs.ai']",2306.14892v1.pdf,"  Large transformer models trained on diverse datasets have shown a remarkable
ability to learn in-context, achieving high few-shot performance on tasks they
were not explicitly trained to solve. In this paper, we study the in-context
learning capabilities of transformers in decision-making problems, i.e.,
reinforcement learning (RL) for bandits and Markov decision processes. To do
so, we introduce and study Decision-Pretrained Transformer (DPT), a supervised
pretraining method where the transformer predicts an optimal action given a
query state and an in-context dataset of interactions, across a diverse set of
tasks. This procedure, while simple, produces a model with several surprising
capabilities. We find that the pretrained transformer can be used to solve a
range of RL problems in-context, exhibiting both exploration online and
conservatism offline, despite not being explicitly trained to do so. The model
also generalizes beyond the pretraining distribution to new tasks and
automatically adapts its decision-making strategies to unknown structure.
Theoretically, we show DPT can be viewed as an efficient implementation of
Bayesian posterior sampling, a provably sample-efficient RL algorithm. We
further leverage this connection to provide guarantees on the regret of the
in-context algorithm yielded by DPT, and prove that it can learn faster than
algorithms used to generate the pretraining data. These results suggest a
promising yet simple path towards instilling strong in-context decision-making
abilities in transformers.
"
A GPT-4 Reticular Chemist for Guiding MOF Discovery,Zhiling Zheng,http://arxiv.org/pdf/2306.14915v2.pdf,2023-06-20,"['cs.ai', 'cond-mat.mtrl-sci', 'physics.chem-ph']",2306.14915v2.pdf,"  We present a new framework integrating the AI model GPT-4 into the iterative
process of reticular chemistry experimentation, leveraging a cooperative
workflow of interaction between AI and a human researcher. This GPT-4 Reticular
Chemist is an integrated system composed of three phases. Each of these
utilizes GPT-4 in various capacities, wherein GPT-4 provides detailed
instructions for chemical experimentation and the human provides feedback on
the experimental outcomes, including both success and failures, for the
in-context learning of AI in the next iteration. This iterative human-AI
interaction enabled GPT-4 to learn from the outcomes, much like an experienced
chemist, by a prompt-learning strategy. Importantly, the system is based on
natural language for both development and operation, eliminating the need for
coding skills and thus making it accessible to all chemists. Our collaboration
with GPT-4 Reticular Chemist guided the discovery of an isoreticular series of
MOFs, with each synthesis fine-tuned through iterative feedback and expert
suggestions. This workflow presents a potential for broader applications in
scientific research by harnessing the capability of large language models like
GPT-4 to enhance the feasibility and efficiency of research activities.
"
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale,Matthew Le,http://arxiv.org/pdf/2306.15687v2.pdf,2023-06-23,"['eess.as', 'cs.cl', 'cs.lg', 'cs.sd']",2306.15687v2.pdf,"  Large-scale generative models such as GPT and DALL-E have revolutionized the
research community. These models not only generate high fidelity outputs, but
are also generalists which can solve tasks not explicitly taught. In contrast,
speech generative models are still primitive in terms of scale and task
generalization. In this paper, we present Voicebox, the most versatile
text-guided generative model for speech at scale. Voicebox is a
non-autoregressive flow-matching model trained to infill speech, given audio
context and text, trained on over 50K hours of speech that are not filtered or
enhanced. Similar to GPT, Voicebox can perform many different tasks through
in-context learning, but is more flexible as it can also condition on future
context. Voicebox can be used for mono or cross-lingual zero-shot
text-to-speech synthesis, noise removal, content editing, style conversion, and
diverse sample generation. In particular, Voicebox outperforms the
state-of-the-art zero-shot TTS model VALL-E on both intelligibility (5.9% vs
1.9% word error rates) and audio similarity (0.580 vs 0.681) while being up to
20 times faster. Audio samples can be found in
\url{https://voicebox.metademolab.com}.
"
SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen  LLMs,Lijun Yu,http://arxiv.org/pdf/2306.17842v3.pdf,2023-06-30,"['cs.cv', 'cs.cl', 'cs.mm']",2306.17842v3.pdf,"  In this work, we introduce Semantic Pyramid AutoEncoder (SPAE) for enabling
frozen LLMs to perform both understanding and generation tasks involving
non-linguistic modalities such as images or videos. SPAE converts between raw
pixels and interpretable lexical tokens (or words) extracted from the LLM's
vocabulary. The resulting tokens capture both the semantic meaning and the
fine-grained details needed for visual reconstruction, effectively translating
the visual content into a language comprehensible to the LLM, and empowering it
to perform a wide array of multimodal tasks. Our approach is validated through
in-context learning experiments with frozen PaLM 2 and GPT 3.5 on a diverse set
of image understanding and generation tasks. Our method marks the first
successful attempt to enable a frozen LLM to generate image content while
surpassing state-of-the-art performance in image understanding tasks, under the
same setting, by over 25%.
"
RecallM: An Adaptable Memory Mechanism with Temporal Understanding for  Large Language Models,Brandon Kynoch,http://arxiv.org/pdf/2307.02738v3.pdf,2023-07-06,"['cs.ai', 'cs.cl', 'cs.sc']",2307.02738v3.pdf,"  Large Language Models (LLMs) have made extraordinary progress in the field of
Artificial Intelligence and have demonstrated remarkable capabilities across a
large variety of tasks and domains. However, as we venture closer to creating
Artificial General Intelligence (AGI) systems, we recognize the need to
supplement LLMs with long-term memory to overcome the context window limitation
and more importantly, to create a foundation for sustained reasoning,
cumulative learning and long-term user interaction. In this paper we propose
RecallM, a novel architecture for providing LLMs with an adaptable and
updatable long-term memory mechanism. Unlike previous methods, the RecallM
architecture is particularly effective at belief updating and maintaining a
temporal understanding of the knowledge provided to it. We demonstrate through
various experiments the effectiveness of this architecture. Furthermore,
through our own temporal understanding and belief updating experiments, we show
that RecallM is four times more effective than using a vector database for
updating knowledge previously stored in long-term memory. We also demonstrate
that RecallM shows competitive performance on general question-answering and
in-context learning tasks.
"
One Step of Gradient Descent is Provably the Optimal In-Context Learner  with One Layer of Linear Self-Attention,Arvind Mahankali,http://arxiv.org/pdf/2307.03576v1.pdf,2023-07-07,['cs.lg'],2307.03576v1.pdf,"  Recent works have empirically analyzed in-context learning and shown that
transformers trained on synthetic linear regression tasks can learn to
implement ridge regression, which is the Bayes-optimal predictor, given
sufficient capacity [Akyürek et al., 2023], while one-layer transformers with
linear self-attention and no MLP layer will learn to implement one step of
gradient descent (GD) on a least-squares linear regression objective [von
Oswald et al., 2022]. However, the theory behind these observations remains
poorly understood. We theoretically study transformers with a single layer of
linear self-attention, trained on synthetic noisy linear regression data.
First, we mathematically show that when the covariates are drawn from a
standard Gaussian distribution, the one-layer transformer which minimizes the
pre-training loss will implement a single step of GD on the least-squares
linear regression objective. Then, we find that changing the distribution of
the covariates and weight vector to a non-isotropic Gaussian distribution has a
strong impact on the learned algorithm: the global minimizer of the
pre-training loss now implements a single step of pre-conditioned
GD. However, if only the distribution of the responses is changed, then this
does not have a large effect on the learned algorithm: even when the response
comes from a more general family of nonlinear functions, the global
minimizer of the pre-training loss still implements a single step of GD on a
least-squares linear regression objective.
"
Large Language Models as General Pattern Machines,Suvir Mirchandani,http://arxiv.org/pdf/2307.04721v2.pdf,2023-07-10,"['cs.ai', 'cs.cl', 'cs.ro']",2307.04721v2.pdf,"  We observe that pre-trained large language models (LLMs) are capable of
autoregressively completing complex token sequences -- from arbitrary ones
procedurally generated by probabilistic context-free grammars (PCFG), to more
rich spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a
general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern
completion proficiency can be partially retained even when the sequences are
expressed using tokens randomly sampled from the vocabulary. These results
suggest that without any additional training, LLMs can serve as general
sequence modelers, driven by in-context learning. In this work, we investigate
how these zero-shot capabilities may be applied to problems in robotics -- from
extrapolating sequences of numbers that represent states over time to complete
simple motions, to least-to-most prompting of reward-conditioned trajectories
that can discover and represent closed-loop policies (e.g., a stabilizing
controller for CartPole). While difficult to deploy today for real systems due
to latency, context size limitations, and compute costs, the approach of using
LLMs to drive low-level control may provide an exciting glimpse into how the
patterns among words could be transferred to actions.
"
Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech  Prompts,Ziyue Jiang,http://arxiv.org/pdf/2307.07218v2.pdf,2023-07-14,"['eess.as', 'cs.sd']",2307.07218v2.pdf,"  Zero-shot text-to-speech aims at synthesizing voices with unseen speech
prompts. Previous large-scale multispeaker TTS models have successfully
achieved this goal with an enrolled recording within 10 seconds. However, most
of them are designed to utilize only short speech prompts. The limited
information in short speech prompts significantly hinders the performance of
fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a
generic zero-shot multispeaker TTS model that is capable of synthesizing speech
for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a
multi-reference timbre encoder to extract timbre information from multiple
reference speeches; and 2) train a prosody language model with arbitrary-length
speech prompts. With these designs, our model is suitable for prompts of
different lengths, which extends the upper bound of speech quality for
zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce
arbitrary-source prompts, which leverage the probabilities derived from
multiple P-LLM outputs to produce expressive and controlled prosody.
Furthermore, we propose a phoneme-level auto-regressive duration model to
introduce in-context learning capabilities to duration modeling. Experiments
demonstrate that our method could not only synthesize identity-preserving
speech with a short prompt of an unseen speaker but also achieve improved
performance with longer speech prompts. Audio samples can be found in
https://mega-tts.github.io/mega2_demo/.
"
Do Emergent Abilities Exist in Quantized Large Language Models: An  Empirical Study,Peiyu Liu,http://arxiv.org/pdf/2307.08072v2.pdf,2023-07-16,"['cs.cl', 'cs.ai']",2307.08072v2.pdf,"  Despite the superior performance, Large Language Models~(LLMs) require
significant computational resources for deployment and use. To overcome this
issue, quantization methods have been widely applied to reduce the memory
footprint of LLMs and increase the inference rate. However, a major
challenge is that low-bit quantization methods often lead to performance
degradation. It is important to understand how quantization impacts the
capacity of LLMs. Different from previous studies focused on overall
performance, this work aims to investigate the impact of quantization on
emergent abilities, which are important characteristics that distinguish
LLMs from small language models. Specifically, we examine the abilities of
in-context learning, chain-of-thought reasoning, and instruction-following in
quantized LLMs. Our empirical experiments show that these emergent abilities
still exist in 4-bit quantization models, while 2-bit models encounter severe
performance degradation in tests of these abilities. To improve the
performance of low-bit models, we conduct two special experiments: (1)
fine-grained impact analysis that studies which components (or substructures)
are more sensitive to quantization, and (2) performance compensation through
model fine-tuning. Our work derives a series of important findings to
understand the impact of quantization on emergent abilities, and sheds light
on the possibilities of extremely low-bit quantization for LLMs.
"
Generating Mathematical Derivations with Large Language Models,Jordan Meadows,http://arxiv.org/pdf/2307.09998v3.pdf,2023-07-19,"['cs.cl', 'math.ho']",2307.09998v3.pdf,"  The derivation of mathematical results in specialised fields, using Large
Language Models (LLMs), is an emerging research direction that can help
identify models' limitations, and potentially support mathematical discovery.
In this paper, we leverage a symbolic engine to generate derivations of
equations at scale, and investigate the capabilities of LLMs when deriving goal
equations from premises. Specifically, we employ in-context learning for GPT
and fine-tune a range of T5 models to compare the robustness and generalisation
of pre-training strategies to specialised models. Empirical results show that
fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and
out-of-distribution test sets in conventional scores. However, an in-depth
analysis reveals that the fine-tuned models are more sensitive to perturbations
involving unseen symbols and (to a lesser extent) changes to equation
structure. In addition, we analyse 1.7K equations, and over 200 derivations, to
highlight common reasoning errors such as the inclusion of incorrect,
irrelevant, and redundant equations. Finally, we explore the suitability of
existing metrics for evaluating mathematical derivations and find evidence
that, while they can capture general properties such as sensitivity to
perturbations, they fail to highlight fine-grained reasoning errors and
essential differences between models. Overall, this work demonstrates that
training models on synthetic data may improve their math capabilities beyond
much larger LLMs, but current metrics do not appropriately assess the
quality of generated mathematical text.
"
LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA  Composition,Chengsong Huang,http://arxiv.org/pdf/2307.13269v1.pdf,2023-07-25,"['cs.cl', 'cs.ai']",2307.13269v1.pdf,"  Low-rank adaptations (LoRA) are often employed to fine-tune large language
models (LLMs) for new tasks. This paper investigates LoRA composability for
cross-task generalization and introduces LoraHub, a strategic framework devised
for the purposive assembly of LoRA modules trained on diverse given tasks, with
the objective of achieving adaptable performance on unseen tasks. With just a
few examples from a novel task, LoraHub enables the fluid combination of
multiple LoRA modules, eradicating the need for human expertise. Notably, the
composition requires neither additional model parameters nor gradients. Our
empirical results, derived from the Big-Bench Hard (BBH) benchmark, suggest
that LoraHub can effectively mimic the performance of in-context learning in
few-shot scenarios, excluding the necessity of in-context examples alongside
each inference input. A significant contribution of our research is the
fostering of a community for LoRA, where users can share their trained LoRA
modules, thereby facilitating their application to new tasks. We anticipate
this resource will widen access to and spur advancements in general
intelligence as well as LLMs in production. Code will be available at
https://github.com/sail-sg/lorahub.
"
LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image  Generation,Leigang Qu,http://arxiv.org/pdf/2308.05095v2.pdf,2023-08-09,"['cs.cv', 'cs.ai']",2308.05095v2.pdf,"  In the text-to-image generation field, recent remarkable progress in Stable
Diffusion makes it possible to generate rich kinds of novel photorealistic
images. However, current models still face misalignment issues (e.g.,
problematic spatial relation understanding and numeration failure) in complex
natural scenes, which impedes the high-faithfulness text-to-image generation.
Although recent efforts have been made to improve controllability by giving
fine-grained guidance (e.g., sketch and scribbles), this issue has not been
fundamentally tackled since users have to provide such guidance information
manually. In this work, we strive to synthesize high-fidelity images that are
semantically aligned with a given textual prompt without any guidance. Toward
this end, we propose a coarse-to-fine paradigm to achieve layout planning and
image generation. Concretely, we first generate the coarse-grained layout
conditioned on a given textual prompt via in-context learning based on Large
Language Models. Afterward, we propose a fine-grained object-interaction
diffusion method to synthesize high-faithfulness images conditioned on the
prompt and the automatically generated layout. Extensive experiments
demonstrate that our proposed method outperforms the state-of-the-art models in
terms of layout and image generation. Our code and settings are available at
https://layoutllm-t2i.github.io.
"
AudioLDM 2: Learning Holistic Audio Generation with Self-supervised  Pretraining,Haohe Liu,http://arxiv.org/pdf/2308.05734v2.pdf,2023-08-10,"['cs.sd', 'cs.ai', 'cs.mm', 'eess.as', 'eess.sp']",2308.05734v2.pdf,"  Although audio generation shares commonalities across different types of
audio, such as speech, music, and sound effects, designing models for each type
requires careful consideration of specific objectives and biases that can
significantly differ from those of other types. To bring us closer to a unified
perspective of audio generation, this paper proposes a framework that utilizes
the same learning method for speech, music, and sound effect generation. Our
framework introduces a general representation of audio, called ""language of
audio"" (LOA). Any audio can be translated into LOA based on AudioMAE, a
self-supervised pre-trained representation learning model. In the generation
process, we translate any modality into LOA by using a GPT-2 model, and we
perform self-supervised audio generation learning with a latent diffusion model
conditioned on LOA. The proposed framework naturally brings advantages such as
in-context learning abilities and reusable self-supervised pretrained AudioMAE
and latent diffusion models. Experiments on the major benchmarks of
text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art
or competitive performance against previous approaches. Our code, pretrained
model, and demo are available at https://audioldm.github.io/audioldm2.
"
Time Travel in LLMs: Tracing Data Contamination in Large Language Models,Shahriar Golchin,http://arxiv.org/pdf/2308.08493v2.pdf,2023-08-16,"['cs.cl', 'cs.cr', 'cs.lg']",2308.08493v2.pdf,"  Data contamination, i.e., the presence of test data from downstream tasks in
the training data of large language models (LLMs), is a potential major issue
in measuring LLMs' real effectiveness on other tasks. We propose a
straightforward yet effective method for identifying data contamination within
LLMs. At its core, our approach starts by identifying potential contamination
at the instance level; using this information, our approach then assesses wider
contamination at the partition level. To estimate contamination of individual
instances, we employ ""guided instruction:"" a prompt consisting of the dataset
name, partition type, and the random-length initial segment of a reference
instance, asking the LLM to complete it. An instance is flagged as contaminated
if the LLM's output either exactly or nearly matches the latter segment of the
reference. To understand if an entire partition is contaminated, we propose two
ideas. The first idea marks a dataset partition as contaminated if the average
overlap score with the reference instances (as measured by ROUGE-L or BLEURT)
is statistically significantly better with the completions from guided
instruction compared to a ""general instruction"" that does not include the
dataset and partition name. The second idea marks a dataset partition as
contaminated if a classifier based on GPT-4 with few-shot in-context learning
prompt marks multiple generated completions as exact/near-exact matches of the
corresponding reference instances. Our best method achieves an accuracy between
92% and 100% in detecting if an LLM is contaminated with seven datasets,
containing train and test/validation partitions, when contrasted with manual
evaluation by human experts. Further, our findings indicate that GPT-4 is
contaminated with AG News, WNLI, and XSum datasets.
"
Inductive-bias Learning: Generating Code Models with Large Language  Model,Toma Tanaka,http://arxiv.org/pdf/2308.09890v1.pdf,2023-08-19,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.pl']",2308.09890v1.pdf,"  Large Language Models (LLMs) have been attracting attention due to an ability
called in-context learning (ICL). With ICL, it is possible to achieve highly
accurate inference based on rules ``in the context'' merely by inputting training
data into the prompt, without updating the parameters of the LLM. Although ICL is
a developing field with many unanswered questions, LLMs themselves serve as an
inference model, seemingly realizing inference without an explicitly indicated
``inductive bias''. Code generation is another highlighted application of LLMs:
its accuracy has dramatically improved, enabling even non-engineers to generate
code that performs the desired tasks by crafting appropriate prompts. In this
paper, we propose a novel ``learning'' method called ``Inductive-Bias Learning
(IBL)'', which combines the techniques of ICL and code generation. The idea of
IBL is straightforward. Like ICL, IBL inputs training data into the prompt and,
from a ``contextual understanding'', outputs code with the structure necessary
for inference (which we refer to as a ``Code Model''). Despite being a seemingly
simple approach, IBL encompasses both the ``inference without explicit inductive
bias'' inherent in ICL and the ``readability and explainability'' of code
generation. Surprisingly, the generated Code Models have been found to achieve
predictive accuracy comparable to, and in some cases surpassing, ICL and
representative machine learning models. Our IBL code is open source:
https://github.com/fuyu-quant/IBLM
"
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation  with Large Language Models,Martin Weyssow,http://arxiv.org/pdf/2308.10462v1.pdf,2023-08-21,"['cs.se', 'cs.cl', 'cs.lg']",2308.10462v1.pdf,"  Large Language Models (LLMs) possess impressive capabilities to generate
meaningful code snippets given natural language intents in a zero-shot fashion,
i.e., without the need for specific fine-tuning. With a view to unleashing their
full potential, prior work has demonstrated the benefits of fine-tuning the
models on task-specific data. However, the fine-tuning process demands heavy
computational costs and is intractable when resources are scarce, especially
for models with billions of parameters. In light of these challenges, previous
studies explored In-Context Learning (ICL) as an effective strategy to generate
contextually appropriate code without fine-tuning. However, it operates at
inference time and does not involve learning task-specific parameters,
potentially limiting the model's performance on downstream tasks. In this
context, we foresee that Parameter-Efficient Fine-Tuning (PEFT) techniques
carry a high potential for efficiently specializing LLMs to task-specific data.
In this paper, we deliver a comprehensive study of the impact of PEFT techniques
on LLMs in the automated code generation scenario. Our experimental
results reveal the superiority and potential of such techniques over ICL on a
wide range of LLMs in reducing the computational burden and improving
performance. Therefore, the study opens opportunities for broader applications
of PEFT in software engineering scenarios.
"
Analyzing Transformer Dynamics as Movement through Embedding Space,Sumeet S. Singh,http://arxiv.org/pdf/2308.10874v1.pdf,2023-08-21,"['cs.lg', 'cs.ai', 'cs.cl', 'cs.ne']",2308.10874v1.pdf,"  Transformer language models exhibit intelligent behaviors such as
understanding natural language, recognizing patterns, acquiring knowledge,
reasoning, planning, reflecting and using tools. This paper explores how their
underlying mechanics give rise to intelligent behaviors. We adopt a systems
approach to analyze Transformers in detail and develop a mathematical framework
that frames their dynamics as movement through embedding space. This novel
perspective provides a principled way of thinking about the problem and reveals
important insights related to the emergence of intelligence:
  1. At its core, the Transformer is an Embedding Space walker, mapping
intelligent behavior to trajectories in this vector space.
  2. At each step of the walk, it composes context into a single composite
vector whose location in Embedding Space defines the next step.
  3. No learning actually occurs during decoding; in-context learning and
generalization are simply the result of different contexts composing into
different vectors.
  4. Ultimately the knowledge, intelligence and skills exhibited by the model
are embodied in the organization of vectors in Embedding Space rather than in
specific neurons or layers. These abilities are properties of this
organization.
  5. Attention's contribution boils down to the association-bias it lends to
vector composition and which influences the aforementioned organization.
However, more investigation is needed to ascertain its significance.
  6. The entire model is composed from two principal operations: data
independent filtering and data dependent aggregation. This generalization
unifies Transformers with other sequence models and across modalities.
  Building upon this foundation we formalize and test a semantic space theory
which posits that embedding vectors represent semantic concepts and find some
evidence of its validity.
"
Causal Intersectionality and Dual Form of Gradient Descent for  Multimodal Analysis: a Case Study on Hateful Memes,Yosuke Miyanishi,http://arxiv.org/pdf/2308.11585v1.pdf,2023-08-19,"['cs.ai', 'cs.cl']",2308.11585v1.pdf,"  In the wake of the explosive growth of machine learning (ML) usage,
particularly within the context of emerging Large Language Models (LLMs),
comprehending the semantic significance rooted in their internal workings is
crucial. While causal analyses focus on defining semantics and its
quantification, the gradient-based approach is central to explainable AI (XAI),
tackling the interpretation of the black box. By synergizing these approaches,
the exploration of how a model's internal mechanisms illuminate its causal
effect has become integral for evidence-based decision-making. A parallel line
of research has revealed that intersectionality - the combinatory impact of
multiple demographics of an individual - can be structured in the form of an
Averaged Treatment Effect (ATE). Initially, this study illustrates that the
hateful memes detection problem can be formulated as an ATE, assisted by the
principles of intersectionality, and that a modality-wise summarization of
gradient-based attention attribution scores can delineate the distinct
behaviors of three Transformer-based models concerning ATE. Subsequently, we
show that the latest LLM LLaMA2 has the ability to disentangle the
intersectional nature of memes detection in an in-context learning setting,
with their mechanistic properties elucidated via meta-gradient, a secondary
form of gradient. In conclusion, this research contributes to the ongoing
dialogue surrounding XAI and the multifaceted nature of ML models.
"
Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for  Knowledge-intensive Question Answering,Keheng Wang,http://arxiv.org/pdf/2308.13259v2.pdf,2023-08-25,"['cs.cl', 'cs.ai']",2308.13259v2.pdf,"  Equipped with Chain-of-Thought (CoT), Large language models (LLMs) have shown
impressive reasoning ability in various downstream tasks. Even so, suffering
from hallucinations and the inability to access external knowledge, LLMs often
come with incorrect or unfaithful intermediate reasoning steps, especially in
the context of answering knowledge-intensive tasks such as KBQA. To alleviate
this issue, we propose a framework called Knowledge-Driven Chain-of-Thought
(KD-CoT) to verify and modify reasoning traces in CoT via interaction with
external knowledge, and thus overcome the hallucinations and error propagation.
Concretely, we formulate the CoT rationale process of LLMs into a structured
multi-round QA format. In each round, LLMs interact with a QA system that
retrieves external knowledge and produce faithful reasoning traces based on
retrieved precise answers. The structured CoT reasoning of LLMs is facilitated
by our developed KBQA CoT collection, which serves as in-context learning
demonstrations and can also be utilized as feedback augmentation to train a
robust retriever. Extensive experiments on WebQSP and ComplexWebQuestion
datasets demonstrate the effectiveness of proposed KD-CoT in task-solving
reasoning generation, which outperforms the vanilla CoT ICL with an absolute
success rate of 8.0% and 5.1%. Furthermore, our proposed feedback-augmented
retriever outperforms the state-of-the-art baselines for retrieving knowledge,
achieving significant improvement in Hit and recall performance. Our code and
data are released on https://github.com/AdelWang/KD-CoT/tree/main.
"
Empowering Dynamics-aware Text-to-Video Diffusion with Large Language  Models,Hao Fei,http://arxiv.org/pdf/2308.13812v1.pdf,2023-08-26,"['cs.ai', 'cs.cv']",2308.13812v1.pdf,"  Text-to-video (T2V) synthesis has gained increasing attention in the
community, in which the recently emerged diffusion models (DMs) have
promisingly shown stronger performance than the past approaches. While existing
state-of-the-art DMs are competent to achieve high-resolution video generation,
they may largely suffer from key limitations (e.g., action occurrence
disorders, crude video motions) with respect to the intricate temporal dynamics
modeling, one of the cruxes of video synthesis. In this work, we investigate
strengthening the awareness of video dynamics for DMs, for high-quality T2V
generation. Inspired by human intuition, we design an innovative dynamic scene
manager (dubbed as Dysen) module, which includes (step-1) extracting from input
text the key actions with proper time-order arrangement, (step-2) transforming
the action schedules into the dynamic scene graph (DSG) representations, and
(step-3) enriching the scenes in the DSG with sufficient and reasonable
details. Taking advantage of the existing powerful LLMs (e.g., ChatGPT) via
in-context learning, Dysen realizes (nearly) human-level temporal dynamics
understanding. Finally, the resulting video DSG with rich action scene details
is encoded as fine-grained spatio-temporal features, integrated into the
backbone T2V DM for video generation. Experiments on popular T2V datasets
suggest that our framework consistently outperforms prior arts with significant
margins, especially in the scenario with complex actions. Project page at
https://haofei.vip/Dysen-VDM
"
Identifying and Mitigating the Security Risks of Generative AI,Clark Barrett,http://arxiv.org/pdf/2308.14840v3.pdf,2023-08-28,['cs.ai'],2308.14840v3.pdf,"  Every major technical invention resurfaces the dual-use dilemma -- the new
technology has the potential to be used for good as well as for harm.
Generative AI (GenAI) techniques, such as large language models (LLMs) and
diffusion models, have shown remarkable capabilities (e.g., in-context
learning, code-completion, and text-to-image generation and editing). However,
GenAI can be used just as well by attackers to generate new attacks and
increase the velocity and efficacy of existing attacks.
  This paper reports the findings of a workshop held at Google (co-organized by
Stanford University and the University of Wisconsin-Madison) on the dual-use
dilemma posed by GenAI. This paper is not meant to be comprehensive, but is
rather an attempt to synthesize some of the interesting findings from the
workshop. We discuss short-term and long-term goals for the community on this
topic. We hope this paper provides both a launching point for a discussion on
this important topic as well as interesting problems that the research
community can work to address.
"
AnomalyGPT: Detecting Industrial Anomalies using Large Vision-Language  Models,Zhaopeng Gu,http://arxiv.org/pdf/2308.15366v3.pdf,2023-08-29,['cs.cv'],2308.15366v3.pdf,"  Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have
demonstrated the capability of understanding images and achieved remarkable
performance in various visual tasks. Despite their strong abilities in
recognizing common objects due to extensive training datasets, they lack
specific domain knowledge and have a weaker understanding of localized details
within objects, which hinders their effectiveness in the Industrial Anomaly
Detection (IAD) task. On the other hand, most existing IAD methods only provide
anomaly scores and necessitate the manual setting of thresholds to distinguish
between normal and abnormal samples, which restricts their practical
implementation. In this paper, we explore the utilization of LVLM to address
the IAD problem and propose AnomalyGPT, a novel IAD approach based on LVLM. We
generate training data by simulating anomalous images and producing
corresponding textual descriptions for each image. We also employ an image
decoder to provide fine-grained semantics and design a prompt learner to
fine-tune the LVLM using prompt embeddings. Our AnomalyGPT eliminates the need
for manual threshold adjustments and thus directly assesses the presence and
locations of anomalies. Additionally, AnomalyGPT supports multi-turn dialogues
and exhibits impressive few-shot in-context learning capabilities. With only
one normal shot, AnomalyGPT achieves the state-of-the-art performance with an
accuracy of 86.1%, an image-level AUC of 94.1%, and a pixel-level AUC of 95.3%
on the MVTec-AD dataset. Code is available at
https://github.com/CASIA-IVA-Lab/AnomalyGPT.
"
Taken out of context: On measuring situational awareness in LLMs,Lukas Berglund,http://arxiv.org/pdf/2309.00667v1.pdf,2023-09-01,"['cs.cl', 'cs.lg']",2309.00667v1.pdf,"  We aim to better understand the emergence of `situational awareness' in large
language models (LLMs). A model is situationally aware if it's aware that it's
a model and can recognize whether it's currently in testing or deployment.
Today's LLMs are tested for safety and alignment before they are deployed. An
LLM could exploit situational awareness to achieve a high score on safety
tests, while taking harmful actions after deployment. Situational awareness may
emerge unexpectedly as a byproduct of model scaling. One way to better foresee
this emergence is to run scaling experiments on abilities necessary for
situational awareness. As such an ability, we propose `out-of-context
reasoning' (in contrast to in-context learning). We study out-of-context
reasoning experimentally. First, we finetune an LLM on a description of a test
while providing no examples or demonstrations. At test time, we assess whether
the model can pass the test. To our surprise, we find that LLMs succeed on this
out-of-context reasoning task. Their success is sensitive to the training setup
and only works when we apply data augmentation. For both GPT-3 and LLaMA-1,
performance improves with model size. These findings offer a foundation for
further empirical study, towards predicting and potentially controlling the
emergence of situational awareness in LLMs. Code is available at:
https://github.com/AsaCooperStickland/situational-awareness-evals.
"
Business Process Text Sketch Automation Generation Using Large Language  Model,Rui Zhu,http://arxiv.org/pdf/2309.01071v1.pdf,2023-09-03,['cs.cl'],2309.01071v1.pdf,"  Business Process Management (BPM) is gaining increasing attention as it has
the potential to cut costs while boosting output and quality. Business process
document generation is a crucial stage in BPM. However, due to a shortage of
datasets, data-driven deep learning techniques struggle to deliver the expected
results. We propose an approach to transform Conditional Process Trees (CPTs)
into Business Process Text Sketches (BPTSs) using Large Language Models (LLMs).
The traditional prompting approach (Few-shot In-Context Learning) tries to get
the correct answer in one go, and it can find the pattern of transforming
simple CPTs into BPTSs; however, for closed-domain CPTs with complex
hierarchies, the traditional prompts perform weakly, with low correctness. We
therefore suggest breaking a difficult CPT down into a number of basic CPTs and
then solving each one in turn, drawing inspiration from the
divide-and-conquer strategy. We chose 100 process trees with depths ranging
from 2 to 5 at random, as well as CPTs with many nodes, many degrees of
selection, and cyclic nesting. Experiments show that our method can achieve a
correct rate of 93.42%, which is 45.17% better than traditional prompting
methods. Our proposed method provides a solution for business process document
generation in the absence of datasets and, in addition, makes it potentially
possible to provide a large number of datasets for the process model extraction
(PME) domain.
"
Textbooks Are All You Need II: phi-1.5 technical report,Yuanzhi Li,http://arxiv.org/pdf/2309.05463v1.pdf,2023-09-11,"['cs.cl', 'cs.ai']",2309.05463v1.pdf,"  We continue the investigation into the power of smaller Transformer-based
language models as initiated by \textbf{TinyStories} -- a 10 million parameter
model that can produce coherent English -- and the follow-up work on
\textbf{phi-1}, a 1.3 billion parameter model with Python coding performance
close to the state-of-the-art. The latter work proposed to use existing Large
Language Models (LLMs) to generate ``textbook quality"" data as a way to enhance
the learning process compared to traditional web data. We follow the
``Textbooks Are All You Need"" approach, focusing this time on common sense
reasoning in natural language, and create a new 1.3 billion parameter model
named \textbf{phi-1.5}, with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs on more complex
reasoning tasks such as grade-school mathematics and basic coding. More
generally, \textbf{phi-1.5} exhibits many of the traits of much larger LLMs,
both good -- such as the ability to ``think step by step"" or perform some
rudimentary in-context learning -- and bad, including hallucinations and the
potential for toxic and biased generations -- encouragingly though, we are
seeing improvement on that front thanks to the absence of web data. We
open-source \textbf{phi-1.5} to promote further research on these urgent
topics.
"
Uncovering mesa-optimization algorithms in Transformers,Johannes von Oswald,http://arxiv.org/pdf/2309.05858v1.pdf,2023-09-11,"['cs.lg', 'cs.ai']",2309.05858v1.pdf,"  Transformers have become the dominant model in deep learning, but the reason
for their superior performance is poorly understood. Here, we hypothesize that
the strong performance of Transformers stems from an architectural bias towards
mesa-optimization, a learned process running within the forward pass of a model
consisting of the following two steps: (i) the construction of an internal
learning objective, and (ii) its corresponding solution found through
optimization. To test this hypothesis, we reverse-engineer a series of
autoregressive Transformers trained on simple sequence modeling tasks,
uncovering underlying gradient-based mesa-optimization algorithms driving the
generation of predictions. Moreover, we show that the learned forward-pass
optimization algorithm can be immediately repurposed to solve supervised
few-shot tasks, suggesting that mesa-optimization might underlie the in-context
learning capabilities of large language models. Finally, we propose a novel
self-attention layer, the mesa-layer, that explicitly and efficiently solves
optimization problems specified in context. We find that this layer can lead to
improved performance in synthetic and preliminary language modeling
experiments, adding weight to our hypothesis that mesa-optimization is an
important operation hidden within the weights of trained Transformers.
"
Narrowing the Gap between Supervised and Unsupervised Sentence  Representation Learning with Large Language Model,Mingxin Li,http://arxiv.org/pdf/2309.06453v1.pdf,2023-09-12,"['cs.cl', 'cs.lg']",2309.06453v1.pdf,"  Sentence Representation Learning (SRL) is a fundamental task in Natural
Language Processing (NLP), with Contrastive learning of Sentence Embeddings
(CSE) as the mainstream technique due to its superior performance. An
intriguing phenomenon in CSE is the significant performance gap between
supervised and unsupervised methods, even when their sentence encoder and loss
function are the same. Previous works attribute this performance gap to
differences in two representation properties (alignment and uniformity).
However, alignment and uniformity only measure the results, which means they
cannot answer ""What happens during the training process that leads to the
performance gap?"" and ""How can the performance gap be narrowed?"". In this
paper, we conduct empirical experiments to answer these ""What"" and ""How""
questions. We first answer the ""What"" question by thoroughly comparing the
behavior of supervised and unsupervised CSE during their respective training
processes. From the comparison, we observe a significant difference in fitting
difficulty. Thus, we introduce a metric, called Fitting Difficulty Increment
(FDI), to measure the fitting difficulty gap between the evaluation dataset and
the held-out training dataset, and use the metric to answer the ""What""
question. Then, based on the insights gained from the ""What"" question, we
tackle the ""How"" question by increasing the fitting difficulty of the training
dataset. We achieve this by leveraging the In-Context Learning (ICL) capability
of the Large Language Model (LLM) to generate data that simulates complex
patterns. By utilizing the hierarchical patterns in the LLM-generated data, we
effectively narrow the gap between supervised and unsupervised CSE.
"
Understanding Catastrophic Forgetting in Language Models via Implicit  Inference,Suhas Kotha,http://arxiv.org/pdf/2309.10105v1.pdf,2023-09-18,"['cs.cl', 'cs.lg']",2309.10105v1.pdf,"  Fine-tuning (via methods such as instruction-tuning or reinforcement learning
from human feedback) is a crucial step in training language models to robustly
carry out tasks of interest. However, we lack a systematic understanding of the
effects of fine-tuning, particularly on tasks outside the narrow fine-tuning
distribution. In a simplified scenario, we demonstrate that improving
performance on tasks within the fine-tuning data distribution comes at the
expense of suppressing model capabilities on other tasks. This degradation is
especially pronounced for tasks ""closest"" to the fine-tuning distribution. We
hypothesize that language models implicitly infer the task that the prompt
corresponds to, and that the fine-tuning process predominantly skews this task
inference towards tasks in the fine-tuning distribution. To test this
hypothesis, we propose Conjugate Prompting to see if we can recover pretrained
capabilities. Conjugate prompting artificially makes the task look farther from
the fine-tuning distribution while requiring the same capability. We find that
conjugate prompting systematically recovers some of the pretraining
capabilities on our synthetic setup. We then apply conjugate prompting to
real-world LLMs using the observation that fine-tuning distributions are
typically heavily skewed towards English. We find that simply translating the
prompts to different languages can cause the fine-tuned models to respond like
their pretrained counterparts instead. This allows us to recover the in-context
learning abilities lost via instruction tuning, and more concerningly, to
recover harmful content generation suppressed by safety fine-tuning in chatbots
like ChatGPT.
"
GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation  via Large Language Models,Yonggan Fu,http://arxiv.org/pdf/2309.10730v1.pdf,2023-09-19,"['cs.lg', 'cs.ar']",2309.10730v1.pdf,"  The remarkable capabilities and intricate nature of Artificial Intelligence
(AI) have dramatically escalated the imperative for specialized AI
accelerators. Nonetheless, designing these accelerators for various AI
workloads remains both labor- and time-intensive. While existing design
exploration and automation tools can partially alleviate the need for extensive
human involvement, they still demand substantial hardware expertise, posing a
barrier to non-experts and stifling AI accelerator development. Motivated by
the astonishing potential of large language models (LLMs) for generating
high-quality content in response to human language instructions, we embark on
this work to examine the possibility of harnessing LLMs to automate AI
accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework
intended to democratize AI accelerator design by leveraging human natural
languages instead of domain-specific languages. Specifically, we first perform
an in-depth investigation into LLMs' limitations and capabilities for AI
accelerator design, thus aiding our understanding of our current position and
garnering insights into LLM-powered automated AI accelerator design.
Furthermore, drawing inspiration from the above insights, we develop a
framework called GPT4AIGChip, which features an automated demo-augmented
prompt-generation pipeline utilizing in-context learning to guide LLMs towards
creating high-quality AI accelerator design. To our knowledge, this work is the
first to demonstrate an effective pipeline for LLM-powered automated AI
accelerator generation. Accordingly, we anticipate that our insights and
framework can serve as a catalyst for innovations in next-generation
LLM-powered design automation tools.
"
User Simulation with Large Language Models for Evaluating Task-Oriented  Dialogue,Sam Davidson,http://arxiv.org/pdf/2309.13233v1.pdf,2023-09-23,['cs.cl'],2309.13233v1.pdf,"  One of the major impediments to the development of new task-oriented dialogue
(TOD) systems is the need for human evaluation at multiple stages and
iterations of the development process. In an effort to move toward automated
evaluation of TOD, we propose a novel user simulator built using recently
developed large pretrained language models (LLMs). In order to increase the
linguistic diversity of our system relative to the related previous work, we do
not fine-tune the LLMs used by our system on existing TOD datasets; rather we
use in-context learning to prompt the LLMs to generate robust and
linguistically diverse output with the goal of simulating the behavior of human
interlocutors. Unlike previous work, which sought to maximize goal success rate
(GSR) as the primary metric of simulator performance, our goal is a system
which achieves a GSR similar to that observed in human interactions with TOD
systems. Using this approach, our current simulator is effectively able to
interact with several TOD systems, especially on single-intent conversational
goals, while generating lexically and syntactically diverse output relative to
previous simulators that rely upon fine-tuned models. Finally, we collect a
Human2Bot dataset of humans interacting with the same TOD systems with which we
experimented in order to better quantify these achievements.
"
A Benchmark for Learning to Translate a New Language from One Grammar  Book,Garrett Tanzer,http://arxiv.org/pdf/2309.16575v1.pdf,2023-09-28,['cs.cl'],2309.16575v1.pdf,"  Large language models (LLMs) can perform impressive feats with in-context
learning or lightweight finetuning. It is natural to wonder how well these
models adapt to genuinely new tasks, but how does one find tasks that are
unseen in internet-scale training sets? We turn to a field that is explicitly
motivated and bottlenecked by a scarcity of web data: low-resource languages.
In this paper, we introduce MTOB (Machine Translation from One Book), a
benchmark for learning to translate between English and Kalamang -- a language
with less than 200 speakers and therefore virtually no presence on the web --
using several hundred pages of field linguistics reference materials. This task
framing is novel in that it asks a model to learn a language from a single
human-readable book of grammar explanations, rather than a large mined corpus
of in-domain data, more akin to L2 learning than L1 acquisition. We demonstrate
that baselines using current LLMs are promising but fall short of human
performance, achieving 44.7 chrF on Kalamang to English translation and 45.8
chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a
human who learned Kalamang from the same reference materials. We hope that MTOB
will help measure LLM capabilities along a new dimension, and that the methods
developed to solve it could help expand access to language technology for
underserved communities by leveraging qualitatively different kinds of data
than traditional machine translation.
"
Benchmarking Cognitive Biases in Large Language Models as Evaluators,Ryan Koo,http://arxiv.org/pdf/2309.17012v1.pdf,2023-09-29,"['cs.cl', 'cs.ai', 'cs.lg']",2309.17012v1.pdf,"  Large Language Models (LLMs) have recently been shown to be effective as
automatic evaluators with simple prompting and in-context learning. In this
work, we assemble 15 LLMs of four different size ranges and evaluate their
output responses by preference ranking from the other LLMs as evaluators, such
as System Star is better than System Square. We then evaluate the quality of
ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators
(CoBBLEr), a benchmark to measure six different cognitive biases in LLM
evaluation outputs, such as the Egocentric bias where a model prefers to rank
its own outputs highly in evaluation. We find that LLMs are biased text quality
evaluators, exhibiting strong indications on our bias benchmark (average of 40%
of comparisons across all models) within each of their evaluations that
question their robustness as evaluators. Furthermore, we examine the
correlation between human and machine preferences and calculate the average
Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine
preferences are misaligned with humans. According to our findings, LLMs may
not yet be suitable for automatic annotation aligned with human
preferences. Our project page is at: https://minnesotanlp.github.io/cobbler.
"
Fewer-token Neural Speech Codec with Time-invariant Codes,Yong Ren,http://arxiv.org/pdf/2310.00014v1.pdf,2023-09-15,"['cs.sd', 'eess.as']",2310.00014v1.pdf,"  Language model based text-to-speech (TTS) models, like VALL-E, have gained
attention for their outstanding in-context learning capability in zero-shot
scenarios. The neural speech codec is a critical component of these models, which
can convert speech into discrete token representations. However, excessive
token sequences from the codec may negatively affect prediction accuracy and
restrict the progression of Language model based TTS models. To address this
issue, this paper proposes a novel neural speech codec with time-invariant
codes named TiCodec. By encoding and quantizing time-invariant information into
a separate code, TiCodec can reduce the amount of frame-level information that
needs encoding, effectively decreasing the number of tokens as codes of speech.
Furthermore, this paper introduces a time-invariant encoding consistency loss
to enhance the consistency of time-invariant code within an utterance and force
it to capture more global information, which can benefit the zero-shot TTS
task. Experimental results demonstrate that TiCodec can not only enhance the
quality of reconstruction speech with fewer tokens but also increase the
similarity and naturalness, as well as reduce the word error rate of the
synthesized speech by the TTS model.
"
ReAcTable: Enhancing ReAct for Table Question Answering,Yunjia Zhang,http://arxiv.org/pdf/2310.00815v1.pdf,2023-10-01,['cs.db'],2310.00815v1.pdf,"  Table Question Answering (TQA) presents a substantial challenge at the
intersection of natural language processing and data analytics. This task
involves answering natural language (NL) questions on top of tabular data,
demanding proficiency in logical reasoning, understanding of data semantics,
and fundamental analytical capabilities. Due to its significance, a substantial
volume of research has been dedicated to exploring a wide range of strategies
aimed at tackling this challenge including approaches that leverage Large
Language Models (LLMs) through in-context learning or Chain-of-Thought (CoT)
prompting as well as approaches that train and fine-tune custom models.
  Nonetheless, a conspicuous gap exists in the research landscape, where there
is limited exploration of how innovative foundational research, which
integrates incremental reasoning with external tools in the context of LLMs, as
exemplified by the ReAct paradigm, could potentially bring advantages to the
TQA task. In this paper, we aim to fill this gap, by introducing ReAcTable
(ReAct for Table Question Answering tasks), a framework inspired by the ReAct
paradigm that is carefully enhanced to address the challenges uniquely
appearing in TQA tasks such as interpreting complex data semantics, dealing
with errors generated by inconsistent data and generating intricate data
transformations. ReAcTable relies on external tools such as SQL and Python code
executors, to progressively enhance the data by generating intermediate data
representations, ultimately transforming it into a more accessible format for
answering the questions with greater ease. We demonstrate that ReAcTable
achieves remarkable performance even when compared to fine-tuned approaches. In
particular, it outperforms the best prior result on the WikiTQ benchmark,
achieving an accuracy of 68.0% without requiring training a new model or
fine-tuning.
"
GraphText: Graph Reasoning in Text Space,Jianan Zhao,http://arxiv.org/pdf/2310.01089v1.pdf,2023-10-02,"['cs.cl', 'cs.lg']",2310.01089v1.pdf,"  Large Language Models (LLMs) have gained the ability to assimilate human
knowledge and facilitate natural language interactions with both humans and
other LLMs. However, despite their impressive achievements, LLMs have not made
significant advancements in the realm of graph machine learning. This
limitation arises because graphs encapsulate distinct relational data, making
it challenging to transform them into natural language that LLMs understand. In
this paper, we bridge this gap with a novel framework, GraphText, that
translates graphs into natural language. GraphText derives a graph-syntax tree
for each graph that encapsulates both the node attributes and inter-node
relationships. Traversal of the tree yields a graph text sequence, which is
then processed by an LLM to treat graph tasks as text generation tasks.
Notably, GraphText offers multiple advantages. It introduces training-free
graph reasoning: even without training on graph data, GraphText with ChatGPT
can match, or even surpass, the performance of
supervised-trained graph neural networks through in-context learning (ICL).
Furthermore, GraphText paves the way for interactive graph reasoning, allowing
both humans and LLMs to communicate with the model seamlessly using natural
language. These capabilities underscore the vast, yet-to-be-explored potential
of LLMs in the domain of graph machine learning.
"
LLMParser: A LLM-based Log Parsing Framework,Zhihan Jiang,http://arxiv.org/pdf/2310.01796v1.pdf,2023-10-03,['cs.se'],2310.01796v1.pdf,"  The process of log parsing, which converts log messages into structured
formats, is a crucial step for various log analysis tasks. Although numerous
log parsers have been proposed, their effectiveness on complex log data is
often hindered due to reliance on human-made rules or learning-based models
with limited training data. The recent rise of powerful large language models
(LLMs) shows potential for log parsing due to their extensive pre-trained
knowledge related to code and logging. However, their accuracy is currently
limited due to the lack of specialized log parsing capabilities. Additionally,
the inconsistency of their answers and significant overhead obstruct the
practical implementation of LLM-based log parsing.
  To tackle these challenges, we introduce LLMParser, the first practical
LLM-based log parsing framework. LLMParser enables accurate and robust log
parsing by leveraging the in-context learning (ICL) capability of the LLM,
employing a hierarchical candidate sampling algorithm, and selecting
high-quality demonstrations. LLMParser also includes a novel adaptive parsing
cache component to store and refine the templates generated by the LLM. This
design aids in addressing the inefficiency of LLMs by rapidly matching logs to
previously parsed log templates. LLMParser also adaptively updates the
templates in the parsing cache to ensure consistent parsed results. Extensive
evaluation on large-scale public datasets demonstrates that LLMParser surpasses
the state-of-the-art methods. Furthermore, LLMParser significantly reduces the
query times to LLMs, achieving efficiency comparable to the most efficient
baseline, Drain.
"
Uncovering hidden geometry in Transformers via disentangling position  and context,Jiajun Song,http://arxiv.org/pdf/2310.04861v1.pdf,2023-10-07,"['cs.lg', 'cs.ai', 'stat.ml']",2310.04861v1.pdf,"  Transformers are widely used to extract complex semantic meanings from input
tokens, yet they usually operate as black-box models. In this paper, we present
a simple yet informative decomposition of hidden states (or embeddings) of
trained transformers into interpretable components. For any layer, embedding
vectors of input sequence samples are represented by a tensor $\boldsymbol{h}
\in \mathbb{R}^{C \times T \times d}$. Given embedding vector
$\boldsymbol{h}_{c,t} \in \mathbb{R}^d$ at sequence position $t \le T$ in a
sequence (or context) $c \le C$, extracting the mean effects yields the
decomposition \[ \boldsymbol{h}_{c,t} = \boldsymbol{\mu} + \mathbf{pos}_t +
\mathbf{ctx}_c + \mathbf{resid}_{c,t} \] where $\boldsymbol{\mu}$ is the global
mean vector, $\mathbf{pos}_t$ and $\mathbf{ctx}_c$ are the mean vectors across
contexts and across positions respectively, and $\mathbf{resid}_{c,t}$ is the
residual vector. For popular transformer architectures and diverse text
datasets, empirically we find pervasive mathematical structure: (1)
$(\mathbf{pos}_t)_{t}$ forms a low-dimensional, continuous, and often spiral
shape across layers, (2) $(\mathbf{ctx}_c)_c$ shows clear cluster structure
that falls into context topics, and (3) $(\mathbf{pos}_t)_{t}$ and
$(\mathbf{ctx}_c)_c$ are mutually incoherent -- namely $\mathbf{pos}_t$ is
almost orthogonal to $\mathbf{ctx}_c$ -- which is canonical in compressed
sensing and dictionary learning. This decomposition offers structural insights
about input formats in in-context learning (especially for induction heads) and
in arithmetic tasks.
"
Lightweight In-Context Tuning for Multimodal Unified Models,Yixin Chen,http://arxiv.org/pdf/2310.05109v1.pdf,2023-10-08,['cs.cv'],2310.05109v1.pdf,"  In-context learning (ICL) involves reasoning from given contextual examples.
As more modalities come, this procedure becomes more challenging, as the
interleaved input modalities complicate the understanding process. This is
exemplified by the observation that multimodal models often struggle to
effectively extrapolate from contextual examples to perform ICL. To address
these challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), a
lightweight module to enhance the ICL capabilities of multimodal unified
models. The proposed M$^2$IXT module perceives an expandable context window to
incorporate various labeled examples of multiple modalities (e.g., text, image,
and coordinates). It can be prepended to various multimodal unified models
(e.g., OFA, Unival, LLaVA) of different architectures and trained via a
mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and
datasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boost
the few-shot ICL performance significantly (e.g., 18\% relative increase for
OFA), and obtains state-of-the-art results across an array of tasks including
visual question answering, image captioning, visual grounding, and visual
entailment, while being considerably small in terms of model parameters (e.g.,
$\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibility
and effectiveness of M$^2$IXT as a multimodal in-context learner.
"
Explainable Claim Verification via Knowledge-Grounded Reasoning with  Large Language Models,Haoran Wang,http://arxiv.org/pdf/2310.05253v2.pdf,2023-10-08,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05253v2.pdf,"  Claim verification plays a crucial role in combating misinformation. While
existing works on claim verification have shown promising results, a crucial
piece of the puzzle that remains unsolved is to understand how to verify claims
without relying on human-annotated data, which is expensive to create at a
large scale. Additionally, it is important for models to provide comprehensive
explanations that can justify their decisions and assist human fact-checkers.
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK)
Reasoning that can verify complex claims and generate explanations without the
need for annotated evidence using Large Language Models (LLMs). FOLK leverages
the in-context learning ability of LLMs to translate the claim into a
First-Order-Logic (FOL) clause consisting of predicates, each corresponding to
a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning
over a set of knowledge-grounded question-and-answer pairs to make veracity
predictions and generate explanations to justify its decision-making process.
This process makes our model highly explanatory, providing clear explanations
of its reasoning process in human-readable form. Our experiment results
indicate that FOLK outperforms strong baselines on three datasets encompassing
various claim verification challenges. Our code and data are available.
"
Glitter or Gold? Deriving Structured Insights from Sustainability  Reports via Large Language Models,Marco Bronzini,http://arxiv.org/pdf/2310.05628v2.pdf,2023-10-09,"['cs.cl', 'cs.ce', 'cs.cy']",2310.05628v2.pdf,"  Over the last decade, several regulatory bodies have started requiring the
disclosure of non-financial information from publicly listed companies, in
light of the investors' increasing attention to Environmental, Social, and
Governance (ESG) issues. Such information is publicly released in a variety of
non-structured and multi-modal documentation. Hence, it is not straightforward
to aggregate and consolidate such data in a cohesive framework to further
derive insights about sustainability practices across companies and markets.
Given these premises, it is natural to resort to Information Extraction (IE)
techniques to provide concise, informative, and actionable data to the
stakeholders. Moving beyond traditional text processing techniques, in this
work we leverage Large Language Models (LLMs), along with the prominent
in-context learning technique and the Retrieval-Augmented Generation (RAG)
paradigm, to extract semantically structured ESG-related information from
companies' sustainability reports. We then adopt graph-based representations to
conduct meaningful statistical, similarity and correlation analyses concerning
the ESG-related actions disclosed by companies in their sustainability reports.
These analyses unveiled that companies address ESG-related issues through
several actions encompassing recognition, compliance, and partnerships;
highlighting the complexity and joint efforts needed to address them. Moreover,
disclosure similarities emerged among companies from the same region or sector.
Lastly, we investigate which factual aspects most impact companies' ESG
scores using our findings and other company information. This analysis unveiled
that companies' disclosures affect ESG scores more than other financial or
company characteristics.
"
Are Large Language Models Post Hoc Explainers?,Nicholas Kroeger,http://arxiv.org/pdf/2310.05797v2.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05797v2.pdf,"  Large Language Models (LLMs) are increasingly used as powerful tools for a
plethora of natural language processing (NLP) applications. A recent
innovation, in-context learning (ICL), enables LLMs to learn new tasks by
supplying a few examples in the prompt during inference time, thereby
eliminating the need for model fine-tuning. While LLMs have been utilized in
several applications, their applicability in explaining the behavior of other
models remains relatively unexplored. Despite the growing number of new
explanation techniques, many require white-box access to the model and/or are
computationally expensive, highlighting a need for next-generation post hoc
explainers. In this work, we present the first framework to study the
effectiveness of LLMs in explaining other predictive models. More specifically,
we propose a novel framework encompassing multiple prompting strategies: i)
Perturbation-based ICL, ii) Prediction-based ICL, iii) Instruction-based ICL,
and iv) Explanation-based ICL, with varying levels of information about the
underlying ML model and the local neighborhood of the test sample. We conduct
extensive experiments with real-world benchmark datasets to demonstrate that
LLM-generated explanations perform on par with state-of-the-art post hoc
explainers using their ability to leverage ICL examples and their internal
knowledge in generating model explanations. On average, across four datasets
and two ML models, we observe that LLMs identify the most important feature
with 72.19% accuracy, opening up new frontiers in explainable artificial
intelligence (XAI) to explore LLM-based explanation frameworks.
"
SALMON: Self-Alignment with Principle-Following Reward Models,Zhiqing Sun,http://arxiv.org/pdf/2310.05910v1.pdf,2023-10-09,"['cs.cl', 'cs.ai', 'cs.lg']",2310.05910v1.pdf,"  Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight.
"
OpsEval: A Comprehensive Task-Oriented AIOps Benchmark for Large  Language Models,Yuhe Liu,http://arxiv.org/pdf/2310.07637v2.pdf,2023-10-11,"['cs.ai', 'cs.ni']",2310.07637v2.pdf,"  Large language models (LLMs) have exhibited remarkable capabilities in
NLP-related tasks such as translation, summarizing, and generation. The
application of LLMs in specific areas, notably AIOps (Artificial Intelligence
for IT Operations), holds great potential due to their advanced abilities in
information summarization, report analysis, and API calling.
Nevertheless, the performance of current LLMs in AIOps tasks is yet to be
determined. Furthermore, a comprehensive benchmark is required to steer the
optimization of LLMs tailored for AIOps. Compared with existing benchmarks that
focus on evaluating specific fields like network configuration, in this paper,
we present \textbf{OpsEval}, a comprehensive task-oriented AIOps benchmark
designed for LLMs. For the first time, OpsEval assesses LLMs' proficiency in
three crucial scenarios (Wired Network Operation, 5G Communication Operation,
and Database Operation) at various ability levels (knowledge recall, analytical
thinking, and practical application). The benchmark includes 7,200 questions in
both multiple-choice and question-answer (QA) formats, available in English and
Chinese. With quantitative and qualitative results, we show how various LLM
tricks can affect the performance of AIOps, including zero-shot,
chain-of-thought, and few-shot in-context learning. We find that GPT4-score is
more consistent with experts than the widely used Bleu and Rouge, and that it
can replace these automatic metrics for large-scale qualitative evaluations.
"
EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form  Narrative Text Generation,Wang You,http://arxiv.org/pdf/2310.08185v1.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08185v1.pdf,"  Plan-and-Write is a common hierarchical approach in long-form narrative text
generation, which first creates a plan to guide the narrative writing.
Following this approach, several studies rely on simply prompting large
language models for planning, which often yields suboptimal results. In this
paper, we propose a new framework called Evaluation-guided Iterative Plan
Extraction for long-form narrative text generation (EIPE-text), which extracts
plans from the corpus of narratives and utilizes the extracted plans to
construct a better planner. EIPE-text has three stages: plan extraction,
learning, and inference. In the plan extraction stage, it iteratively extracts
and improves plans from the narrative corpus and constructs a plan corpus. We
propose a question answer (QA) based evaluation mechanism to automatically
evaluate the plans and generate detailed plan refinement instructions to guide
the iterative improvement. In the learning stage, we build a better planner by
fine-tuning with the plan corpus or in-context learning with examples in the
plan corpus. Finally, we leverage a hierarchical approach to generate long-form
narratives. We evaluate the effectiveness of EIPE-text in the domains of novels
and storytelling. Both GPT-4-based evaluations and human evaluations
demonstrate that our method can generate more coherent and relevant long-form
narratives. Our code will be released in the future.
"
Prompting Large Language Models with Chain-of-Thought for Few-Shot  Knowledge Base Question Generation,Yuanyuan Liang,http://arxiv.org/pdf/2310.08395v3.pdf,2023-10-12,"['cs.cl', 'cs.ai']",2310.08395v3.pdf,"  The task of Question Generation over Knowledge Bases (KBQG) aims to convert a
logical form into a natural language question. Given the expensive cost of
large-scale question annotation, methods for KBQG under low-resource
scenarios urgently need to be developed. However, current methods heavily rely
on annotated data for fine-tuning, which is not well-suited for few-shot
question generation. The emergence of Large Language Models (LLMs) has shown
their impressive generalization ability in few-shot tasks. Inspired by
Chain-of-Thought (CoT) prompting, which is an in-context learning strategy for
reasoning, we formulate KBQG task as a reasoning problem, where the generation
of a complete question is split into a series of sub-question generation steps.
Our proposed prompting method KQG-CoT first retrieves supportive logical forms
from the unlabeled data pool taking account of the characteristics of the
logical form. Then, we write a prompt to explicit the reasoning chain of
generating complicated questions based on the selected demonstrations. To
further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ via sorting the
logical forms by their complexity. We conduct extensive experiments over three
public KBQG datasets. The results demonstrate that our prompting method
consistently outperforms other prompting baselines on the evaluated datasets.
Remarkably, our KQG-CoT+ method could surpass existing few-shot SoTA results of
the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4,
METEOR, and ROUGE-L, respectively.
"
Do pretrained Transformers Really Learn In-context by Gradient Descent?,Lingfeng Shen,http://arxiv.org/pdf/2310.08540v1.pdf,2023-10-12,"['cs.cl', 'cs.ai', 'cs.lg']",2310.08540v1.pdf,"  Is In-Context Learning (ICL) implicitly equivalent to Gradient Descent (GD)?
Several recent works draw analogies between the dynamics of GD and the emergent
behavior of ICL in large language models. However, these works make assumptions
far from the realistic natural language setting in which language models are
trained. Such discrepancies between theory and practice, therefore, necessitate
further investigation to validate their applicability.
  We start by highlighting the weaknesses of prior works that construct
Transformer weights to simulate gradient descent: their experiments with
training Transformers on the ICL objective, inconsistencies in the order
sensitivity of ICL and GD, the sparsity of the constructed weights, and
sensitivity to parameter changes are all examples of mismatches with the
real-world setting.
  Furthermore, we probe and compare the ICL vs. GD hypothesis in a natural
setting. We conduct comprehensive empirical analyses on language models
pretrained on natural data (LLaMa-7B). Our comparisons on various performance
metrics highlight the inconsistent behavior of ICL and GD as a function of
various factors such as datasets, models, and number of demonstrations. We
observe that ICL and GD adapt the output distribution of language models
differently. These results indicate that the equivalence between ICL and GD
remains an open hypothesis that requires nuanced consideration and calls for
further study.
"
Mastering Robot Manipulation with Multimodal Prompts through Pretraining  and Multi-task Fine-tuning,Jiachen Li,http://arxiv.org/pdf/2310.09676v1.pdf,2023-10-14,"['cs.ro', 'cs.ai']",2310.09676v1.pdf,"  Prompt-based learning has been demonstrated as a compelling paradigm
contributing to the tremendous success of large language models (LLMs). Inspired by
their success in language tasks, existing research has leveraged LLMs in
embodied instruction following and task planning. However, not much attention
has been paid to embodied tasks with multimodal prompts, combining vision
signals with text descriptions. This type of task poses a major challenge to
robots' capability to understand the interconnection and complementarity
between vision and language signals. In this work, we introduce an effective
framework that learns a policy to perform robot manipulation with multimodal
prompts from multi-task expert trajectories. Our method consists of a two-stage
training pipeline that performs inverse dynamics pretraining and multi-task
finetuning. To facilitate multimodal understanding, we design our multimodal
prompt encoder by augmenting a pretrained LM with a residual connection to the
visual input and model the dependencies among action dimensions. Empirically,
we evaluate the efficacy of our method on the VIMA-BENCH and establish a new
state-of-the-art (10% improvement in success rate). Moreover, we demonstrate
that our model exhibits remarkable in-context learning ability.
"
Unifying Image Processing as Visual Prompting Question Answering,Yihao Liu,http://arxiv.org/pdf/2310.10513v1.pdf,2023-10-16,"['cs.cv', 'eess.iv']",2310.10513v1.pdf,"  Image processing is a fundamental task in computer vision, which aims at
enhancing image quality and extracting essential features for subsequent vision
applications. Traditionally, task-specific models are developed for individual
tasks and designing such models requires distinct expertise. Building upon the
success of large language models (LLMs) in natural language processing (NLP),
there is a similar trend in computer vision, which focuses on developing
large-scale models through pretraining and in-context learning. This paradigm
shift reduces the reliance on task-specific models, yielding a powerful unified
model to deal with various tasks. However, these advances have predominantly
concentrated on high-level vision tasks, with less attention paid to low-level
vision tasks. To address this issue, we propose a universal model for general
image processing that covers image restoration, image enhancement, image
feature extraction tasks, \textit{etc}. Our proposed framework, named
PromptGIP, unifies these diverse image processing tasks within a universal
framework. Inspired by NLP question answering (QA) techniques, we employ a
visual prompting question answering paradigm. Specifically, we treat the
input-output image pair as a structured question-answer sentence, thereby
reprogramming the image processing task as a prompting QA problem. PromptGIP
can undertake diverse \textbf{cross-domain} tasks using provided visual
prompts, eliminating the need for task-specific finetuning. Our methodology
offers a universal and adaptive solution to general image processing. While
PromptGIP has demonstrated a certain degree of out-of-domain task
generalization capability, further research is needed to fully explore its
emergent generalization potential.
"
In-Context Pretraining: Language Modeling Beyond Document Boundaries,Weijia Shi,http://arxiv.org/pdf/2310.10638v3.pdf,2023-10-16,"['cs.cl', 'cs.ai', 'cs.lg']",2310.10638v3.pdf,"  Large language models (LMs) are currently trained to predict tokens given
document prefixes, enabling them to directly perform long-form generation and
prompting-style tasks which can be reduced to document completion. Existing
pretraining pipelines train LMs by concatenating random sets of short documents
to create input contexts but the prior documents provide no signal for
predicting the next document. We instead present In-Context Pretraining, a new
approach where language models are pretrained on a sequence of related
documents, thereby explicitly encouraging them to read and reason across
document boundaries. We can do In-Context Pretraining by simply changing the
document ordering so that each context contains related documents, and directly
applying existing pretraining pipelines. However, this document sorting problem
is challenging. There are billions of documents and we would like the sort to
maximize contextual similarity for every document without repeating any data.
To do this, we introduce approximate algorithms for finding related documents
with efficient nearest neighbor search and constructing coherent input contexts
with a graph traversal algorithm. Our experiments show In-Context Pretraining
offers a simple and scalable approach to significantly enhance LMs' performance:
we see notable improvements in tasks that require more complex contextual
reasoning, including in-context learning (+8%), reading comprehension (+15%),
faithfulness to previous contexts (+16%), long-context reasoning (+5%), and
retrieval augmentation (+9%).
"
IDEAL: Influence-Driven Selective Annotations Empower In-Context  Learners in Large Language Models,Shaokun Zhang,http://arxiv.org/pdf/2310.10873v1.pdf,2023-10-16,['cs.cl'],2310.10873v1.pdf,"  In-context learning is a promising paradigm that utilizes in-context examples
as prompts for the predictions of large language models. These prompts are
crucial for achieving strong performance. However, since the prompts need to be
sampled from a large volume of annotated examples, finding the right prompt may
result in high annotation costs. To address this challenge, this paper
introduces an influence-driven selective annotation method that aims to
minimize annotation costs while improving the quality of in-context examples.
The essence of our method is to select a pivotal subset from a large-scale
unlabeled data pool to annotate for the subsequent sampling of prompts.
Specifically, a directed graph is first constructed to represent unlabeled
data. Afterward, the influence of candidate unlabeled subsets is quantified
with a diffusion process. A simple yet effective greedy algorithm for unlabeled
data selection is lastly introduced. It iteratively selects the data point that
provides the maximum marginal gain with respect to the quantified influence. Compared
with previous efforts on selective annotations, our influence-driven method
works in an end-to-end manner, avoids an intractable explicit balance between
data diversity and representativeness, and enjoys theoretical support.
Experiments confirm the superiority of the proposed method on various
benchmarks, achieving better performance under lower time consumption during
subset selection. The project page is available at
https://skzhang1.github.io/IDEAL/.
"
Eureka: Human-Level Reward Design via Coding Large Language Models,Yecheng Jason Ma,http://arxiv.org/pdf/2310.12931v1.pdf,2023-10-19,"['cs.ro', 'cs.ai', 'cs.lg']",2310.12931v1.pdf,"  Large Language Models (LLMs) have excelled as high-level semantic planners
for sequential decision-making tasks. However, harnessing them to learn complex
low-level manipulation tasks, such as dexterous pen spinning, remains an open
problem. We bridge this fundamental gap and present Eureka, a human-level
reward design algorithm powered by LLMs. Eureka exploits the remarkable
zero-shot generation, code-writing, and in-context improvement capabilities of
state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over
reward code. The resulting rewards can then be used to acquire complex skills
via reinforcement learning. Without any task-specific prompting or pre-defined
reward templates, Eureka generates reward functions that outperform expert
human-engineered rewards. In a diverse suite of 29 open-source RL environments
that include 10 distinct robot morphologies, Eureka outperforms human experts
on 83% of the tasks, leading to an average normalized improvement of 52%. The
generality of Eureka also enables a new gradient-free in-context learning
approach to reinforcement learning from human feedback (RLHF), readily
incorporating human inputs to improve the quality and the safety of the
generated rewards without model updating. Finally, using Eureka rewards in a
curriculum learning setting, we demonstrate, for the first time, a simulated
Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a
pen in circles at rapid speed.
"
Self-prompted Chain-of-Thought on Large Language Models for Open-domain  Multi-hop Reasoning,Jinyuan Wang,http://arxiv.org/pdf/2310.13552v2.pdf,2023-10-20,"['cs.cl', 'cs.ai']",2310.13552v2.pdf,"  In open-domain question-answering (ODQA), most existing questions require
single-hop reasoning on commonsense. To further extend this task, we officially
introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop
questions with explicit reasoning steps in an open-domain setting. Recently, large
language models (LLMs) have found significant utility in facilitating ODQA
without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts
the reasoning capability of LLMs to a greater extent with manual or automated
paradigms. However, existing automated methods lack quality assurance, while
manual approaches suffer from limited scalability and poor diversity, hindering
the capabilities of LLMs. In this paper, we propose Self-prompted
Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality
CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation
pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT
selection and self-prompted inference via in-context learning. Extensive
experiments on four multi-hop question-answering benchmarks show that our
proposed SP-CoT not only significantly surpasses the previous SOTA methods on
large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of
small-scale (13B) LLMs. Further analysis reveals the remarkable capability of
SP-CoT to elicit direct and concise intermediate reasoning steps by recalling
$\sim$50\% of intermediate answers on MuSiQue-Ans dataset.
"
Explainable Depression Symptom Detection in Social Media,Eliseo Bao Souto,http://arxiv.org/pdf/2310.13664v2.pdf,2023-10-20,['cs.cl'],2310.13664v2.pdf,"  Users of social platforms often perceive these sites as supportive spaces to
post about their mental health issues. Those conversations contain important
traces about individuals' health risks. Recently, researchers have exploited
this online information to construct mental health detection models, which aim
to identify users at risk on platforms like Twitter, Reddit or Facebook. Most
of these models are centred on achieving good classification results, ignoring
the explainability and interpretability of the decisions. Recent research has
pointed out the importance of using clinical markers, such as the use of
symptoms, to improve trust in the computational models by health professionals.
In this paper, we propose using transformer-based architectures to detect and
explain the appearance of depressive symptom markers in the users' writings. We
present two approaches: i) separately train one model to classify and another
to explain the classifier's decisions, and ii) unify the two tasks in a single
model. Additionally, for the latter approach, we
also investigated the performance of recent conversational LLMs when using
in-context learning. Our natural language explanations enable clinicians to
interpret the models' decisions based on validated symptoms, enhancing trust in
the automated process. We evaluate our approach using recent symptom-based
datasets, employing both offline and expert-in-the-loop metrics to assess the
quality of the explanations generated by our models. The experimental results
show that it is possible to achieve good classification results while
generating interpretable symptom-based explanations.
"
Ensemble-Instruct: Generating Instruction-Tuning Data with a  Heterogeneous Mixture of LMs,Young-Suk Lee,http://arxiv.org/pdf/2310.13961v1.pdf,2023-10-21,"['cs.cl', 'cs.ai']",2310.13961v1.pdf,"  Using in-context learning (ICL) for data generation, techniques such as
Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023)
can train strong conversational agents with only a small amount of human
supervision. One limitation of these approaches is that they resort to very
large language models (around 175B parameters) that are also proprietary and
non-public. Here we explore the application of such techniques to language
models that are much smaller (around 10B--40B parameters) and have permissive
licenses. We find the Self-Instruct approach to be less effective at these
sizes and propose new ICL methods that draw on two main ideas: (a)
Categorization and simplification of the ICL templates to make prompt learning
easier for the LM, and (b) Ensembling over multiple LM outputs to help select
high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct
seed tasks and employs separate pipelines for instructions that require an
input and instructions that do not. Empirical investigations with different LMs
show that: (1) Our proposed method yields higher-quality instruction tuning
data than Self-Instruct, (2) It improves performances of both vanilla and
instruction-tuned LMs by significant margins, and (3) Smaller instruction-tuned
LMs generate more useful outputs than their larger un-tuned counterparts. Our
codebase is available at https://github.com/IBM/ensemble-instruct.
"
Investigating the Fairness of Large Language Models for Predictions on  Tabular Data,Yanchen Liu,http://arxiv.org/pdf/2310.14607v1.pdf,2023-10-23,"['cs.cl', 'cs.lg']",2310.14607v1.pdf,"  Recent literature has suggested the potential of using large language models
(LLMs) to make predictions for tabular tasks. However, LLMs have been shown to
exhibit harmful social biases that reflect the stereotypes and inequalities
present in society. Given these biases, as well as the widespread use of tabular
data in many high-stakes applications, it is imperative to explore the following
questions: what sources of information do LLMs draw upon when making
predictions for tabular tasks; whether and to what extent are LLM predictions
for tabular tasks influenced by social biases and stereotypes; and what are the
consequential implications for fairness? Through a series of experiments, we
delve into these questions and show that LLMs tend to inherit social biases
from their training data which significantly impact their fairness in tabular
prediction tasks. Furthermore, our investigations show that in the context of
bias mitigation, though in-context learning and fine-tuning have a moderate
effect, the fairness metric gap between different subgroups is still larger
than that in traditional machine learning models, such as Random Forest and
shallow Neural Networks. This observation emphasizes that the social biases are
inherent within the LLMs themselves and inherited from their pre-training
corpus, not only from the downstream task datasets. Besides, we demonstrate
that label-flipping of in-context examples can significantly reduce biases,
further highlighting the presence of inherent bias within LLMs.
"
Large Language Models are Visual Reasoning Coordinators,Liangyu Chen,http://arxiv.org/pdf/2310.15166v1.pdf,2023-10-23,"['cs.cv', 'cs.cl']",2310.15166v1.pdf,"  Visual reasoning requires multimodal perception and commonsense cognition of
the world. Recently, multiple vision-language models (VLMs) have been proposed
with excellent commonsense reasoning ability in various domains. However, how
to harness the collective power of these complementary VLMs is rarely explored.
Existing methods like ensemble still struggle to aggregate these models with
the desired higher-order communications. In this work, we propose Cola, a novel
paradigm that coordinates multiple VLMs for visual reasoning. Our key insight
is that a large language model (LLM) can efficiently coordinate multiple VLMs
by facilitating natural language communication that leverages their distinct
and complementary capabilities. Extensive experiments demonstrate that our
instruction tuning variant, Cola-FT, achieves state-of-the-art performance on
visual question answering (VQA), outside knowledge VQA, visual entailment, and
visual spatial reasoning tasks. Moreover, we show that our in-context learning
variant, Cola-Zero, exhibits competitive performance in zero and few-shot
settings, without finetuning. Through systematic ablation studies and
visualizations, we validate that a coordinator LLM indeed comprehends the
instruction prompts as well as the separate functionalities of VLMs; it then
coordinates them to enable impressive visual reasoning capabilities.
"
Function Vectors in Large Language Models,Eric Todd,http://arxiv.org/pdf/2310.15213v1.pdf,2023-10-23,"['cs.cl', 'cs.lg']",2310.15213v1.pdf,"  We report the presence of a simple neural mechanism that represents an
input-output function as a vector within autoregressive transformer language
models (LMs). Using causal mediation analysis on a diverse range of
in-context-learning (ICL) tasks, we find that a small number of attention heads
transport a compact representation of the demonstrated task, which we call a
function vector (FV). FVs are robust to changes in context, i.e., they trigger
execution of the task on inputs such as zero-shot and natural text settings
that do not resemble the ICL contexts from which they are collected. We test
FVs across a range of tasks, models, and layers and find strong causal effects
across settings in middle layers. We investigate the internal structure of FVs
and find that, while they often contain information that encodes the output
space of the function, this information alone is not sufficient to reconstruct
an FV. Finally, we test semantic vector composition in FVs, and find that to
some extent they can be summed to create vectors that trigger new complex
tasks. Taken together, our findings suggest that LLMs contain internal
abstractions of general-purpose functions that can be invoked in a variety of
contexts.
"
TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for  Inference Cost Reduction,Junyi Liu,http://arxiv.org/pdf/2310.15556v2.pdf,2023-10-24,"['cs.cl', 'cs.ir']",2310.15556v2.pdf,"  Since ChatGPT released its API for public use, the number of applications
built on top of commercial large language models (LLMs) has increased
exponentially. One popular usage of such models is to leverage their in-context
learning ability and generate responses to user queries using knowledge obtained by
retrieval augmentation. One problem of deploying commercial retrieval-augmented
LLMs is the cost due to the additionally retrieved context that largely
increases the input token size of the LLMs. To mitigate this, we propose a
token compression scheme that includes two methods: summarization compression
and semantic compression. The first method applies a T5-based model that is
fine-tuned on datasets generated via self-instruct, containing samples of
varying lengths, and reduces token size through summarization. The second method
further compresses the token size by removing words with lower impact on the
semantics. In order to adequately evaluate the effectiveness of the proposed
methods, we propose and utilize a dataset called Food-Recommendation DB (FRDB)
focusing on food recommendation for women around pregnancy period or infants.
Our summarization compression can reduce the retrieval token size by 65% with a
further 0.3% improvement in accuracy; semantic compression provides a more
flexible way to trade off token size against performance, allowing us to reduce
the token size by 20% with only a 1.6% drop in accuracy.
"
Testing the Limits: Unusual Text Inputs Generation for Mobile App Crash  Detection with Large Language Model,Zhe Liu,http://arxiv.org/pdf/2310.15657v1.pdf,2023-10-24,['cs.se'],2310.15657v1.pdf,"  Mobile applications have become a ubiquitous part of our daily life,
providing users with access to various services and utilities. Text input, as
an important interaction channel between users and applications, plays an
important role in core functionality such as search queries, authentication,
messaging, etc. However, certain special text (e.g., -18 for Font Size) can
cause the app to crash, and generating diversified unusual inputs for fully
testing the app is in high demand. Nevertheless, this is also challenging due
to the combinatorial explosion dilemma, high context sensitivity, and complex
constraint relations. This paper proposes InputBlaster which leverages the LLM
to automatically generate unusual text inputs for mobile app crash detection.
It formulates the unusual inputs generation problem as a task of producing a
set of test generators, each of which can yield a batch of unusual text inputs
under the same mutation rule. In detail, InputBlaster leverages LLM to produce
the test generators together with the mutation rules serving as the reasoning
chain, and utilizes the in-context learning schema to demonstrate the LLM with
examples for boosting the performance. InputBlaster is evaluated on 36 text
input widgets with crash bugs involving 31 popular Android apps, and the results
show that it achieves a 78% bug detection rate, 136% higher than the best
baseline. Besides, we integrate it with the automated GUI testing tool and
detect 37 unseen crashes in real-world apps from Google Play.
"
ExPT: Synthetic Pretraining for Few-Shot Experimental Design,Tung Nguyen,http://arxiv.org/pdf/2310.19961v1.pdf,2023-10-30,"['cs.lg', 'cs.ai']",2310.19961v1.pdf,"  Experimental design is a fundamental problem in many science and engineering
fields. In this problem, sample efficiency is crucial due to the time, money,
and safety costs of real-world design evaluations. Existing approaches either
rely on active data collection or access to large, labeled datasets of past
experiments, making them impractical in many real-world scenarios. In this
work, we address the more challenging yet realistic setting of few-shot
experimental design, where only a few labeled data points of input designs and
their corresponding values are available. We approach this problem as a
conditional generation task, where a model conditions on a few labeled examples
and the desired output to generate an optimal input design. To this end, we
introduce Experiment Pretrained Transformers (ExPT), a foundation model for
few-shot experimental design that employs a novel combination of synthetic
pretraining with in-context learning. In ExPT, we only assume knowledge of a
finite collection of unlabelled data points from the input domain and pretrain
a transformer neural network to optimize diverse synthetic functions defined
over this domain. Unsupervised pretraining allows ExPT to adapt to any design
task at test time in an in-context fashion by conditioning on a few labeled
data points from the target task and generating the candidate optima. We
evaluate ExPT on few-shot experimental design in challenging domains and
demonstrate its superior generality and performance compared to existing
methods. The source code is available at https://github.com/tung-nd/ExPT.git.
"
Unleashing the Creative Mind: Language Model As Hierarchical Policy For  Improved Exploration on Challenging Problem Solving,Zhan Ling,http://arxiv.org/pdf/2311.00694v1.pdf,2023-11-01,"['cs.ai', 'cs.cl']",2311.00694v1.pdf,"  Large Language Models (LLMs) have achieved tremendous progress, yet they
still often struggle with challenging reasoning problems. Current approaches
address this challenge by sampling or searching detailed and low-level
reasoning chains. However, these methods are still limited in their exploration
capabilities, making it challenging for correct solutions to stand out in the
huge solution space. In this work, we unleash LLMs' creative potential for
exploring multiple diverse problem solving strategies by framing an LLM as a
hierarchical policy via in-context learning. This policy comprises a
visionary leader that proposes multiple diverse high-level problem-solving
tactics as hints, accompanied by a follower that executes detailed
problem-solving processes following each of the high-level instructions. The
follower uses each of the leader's directives as a guide and samples multiple
reasoning chains to tackle the problem, generating a solution group for each
leader proposal. Additionally, we propose an effective and efficient
tournament-based approach to select among these explored solution groups to
reach the final answer. Our approach produces meaningful and inspiring hints,
enhances problem-solving strategy exploration, and improves the final answer
accuracy on challenging problems in the MATH dataset. Code will be released at
https://github.com/lz1oceani/LLM-As-Hierarchical-Policy.
"
Sentiment Analysis through LLM Negotiations,Xiaofei Sun,http://arxiv.org/pdf/2311.01876v1.pdf,2023-11-03,['cs.cl'],2311.01876v1.pdf,"  A standard paradigm for sentiment analysis is to rely on a singular LLM and
make the decision in a single round under the framework of in-context
learning. This framework suffers from the key disadvantage that the single-turn
output generated by a single LLM might not deliver the perfect decision, just
as humans sometimes need multiple attempts to get things right. This is
especially true for the task of sentiment analysis where deep reasoning is
required to address the complex linguistic phenomena (e.g., clause
composition, irony, etc.) in the input.
  To address this issue, this paper introduces a multi-LLM negotiation
framework for sentiment analysis. The framework consists of a reasoning-infused
generator that provides a decision along with a rationale, and an
explanation-deriving discriminator that evaluates the credibility of the
generator. The generator and the discriminator iterate until a consensus is
reached. The proposed framework naturally addresses the aforementioned
challenge, as we are able to exploit the complementary abilities of two LLMs
and have them use rationales to persuade each other toward the correct decision.
  Experiments on a wide range of sentiment analysis benchmarks (SST-2, Movie
Review, Twitter, Yelp, Amazon, IMDB) demonstrate the effectiveness of the
proposed approach: it consistently yields better performance than the ICL baseline
across all benchmarks, and even superior performances to supervised baselines
on the Twitter and movie review datasets.
"
ChEF: A Comprehensive Evaluation Framework for Standardized Assessment  of Multimodal Large Language Models,Zhelun Shi,http://arxiv.org/pdf/2311.02692v1.pdf,2023-11-05,['cs.cv'],2311.02692v1.pdf,"  Multimodal Large Language Models (MLLMs) have shown impressive abilities in
interacting with visual content with myriad potential downstream tasks.
However, even though a list of benchmarks has been proposed, the capabilities
and limitations of MLLMs are still not comprehensively understood, due to a
lack of a standardized and holistic evaluation framework. To this end, we
present the first Comprehensive Evaluation Framework (ChEF) that can
holistically profile each MLLM and fairly compare different MLLMs. First, we
structure ChEF as four modular components, i.e., Scenario as scalable
multimodal datasets, Instruction as flexible instruction retrieving formulae,
Inferencer as reliable question answering strategies, and Metric as indicative
task-specific score functions. Based on them, ChEF facilitates versatile
evaluations in a standardized framework, and new evaluations can be built by
designing new Recipes (systematic selection of these four components). Notably,
current MLLM benchmarks can be readily summarized as recipes of ChEF. Second,
we introduce 6 new recipes to quantify competent MLLMs' desired capabilities
(or called desiderata, i.e., calibration, in-context learning, instruction
following, language performance, hallucination, and robustness) as reliable
agents that can perform real-world multimodal interactions. Third, we conduct a
large-scale evaluation of 9 prominent MLLMs on 9 scenarios and 6 desiderata.
Our evaluation yields over 20 valuable observations concerning the
generalizability of MLLMs across various scenarios and the composite capability
of MLLMs required for multimodal interactions. We will publicly release all the
detailed implementations for further analysis, as well as an easy-to-use
modular toolkit for the integration of new recipes and models, so that ChEF can
be a growing evaluation framework for the MLLM community.
"
Kinematic-aware Prompting for Generalizable Articulated Object  Manipulation with LLMs,Wenke Xia,http://arxiv.org/pdf/2311.02847v2.pdf,2023-11-06,"['cs.ro', 'cs.ai']",2311.02847v2.pdf,"  Generalizable articulated object manipulation is essential for home-assistant
robots. Recent efforts focus on imitation learning from demonstrations or
reinforcement learning in simulation, however, due to the prohibitive costs of
real-world data collection and precise object simulation, it still remains
challenging for these works to achieve broad adaptability across diverse
articulated objects. Recently, many works have tried to utilize the strong
in-context learning ability of Large Language Models (LLMs) to achieve
generalizable robotic manipulation, but most of this research focuses on
high-level task planning, sidelining low-level robotic control. In this work,
building on the idea that the kinematic structure of the object determines how
we can manipulate it, we propose a kinematic-aware prompting framework that
prompts LLMs with kinematic knowledge of objects to generate low-level motion
trajectory waypoints, supporting various object manipulation. To effectively
prompt LLMs with the kinematic structure of different objects, we design a
unified kinematic knowledge parser, which represents various articulated
objects as a unified textual description containing kinematic joints and
contact location. Building upon this unified description, a kinematic-aware
planner model is proposed to generate precise 3D manipulation waypoints via a
designed kinematic-aware chain-of-thoughts prompting method. Our evaluation
spanned 48 instances across 16 distinct categories, revealing that our
framework not only outperforms traditional methods on 8 seen categories but
also shows a powerful zero-shot capability for 8 unseen articulated object
categories. Moreover, the real-world experiments on 7 different object
categories prove our framework's adaptability in practical scenarios. Code is
released at
\href{https://github.com/GeWu-Lab/LLM_articulated_object_manipulation/tree/main}{here}.
"
In-Context Learning for Knowledge Base Question Answering for Unmanned  Systems based on Large Language Models,Yunlong Chen,http://arxiv.org/pdf/2311.02956v1.pdf,2023-11-06,"['cs.cl', 'cs.ai', 'i.2.7']",2311.02956v1.pdf,"  Knowledge Base Question Answering (KBQA) aims to answer factoid questions
based on knowledge bases. However, generating the most appropriate knowledge
base query code based on Natural Language Questions (NLQ) poses a significant
challenge in KBQA. In this work, we focus on the CCKS2023 Competition of
Question Answering with Knowledge Graph Inference for Unmanned Systems.
Inspired by the recent success of large language models (LLMs) like ChatGPT and
GPT-3 in many QA tasks, we propose a ChatGPT-based Cypher Query Language (CQL)
generation framework to generate the most appropriate CQL based on the given
NLQ. Our generative framework contains six parts: an auxiliary model predicting
the syntax-related information of CQL based on the given NLQ, a proper noun
matcher extracting proper nouns from the given NLQ, a demonstration example
selector retrieving similar examples of the input sample, a prompt constructor
designing the input template of ChatGPT, a ChatGPT-based generation model
generating the CQL, and an ensemble model to obtain the final answers from
diversified outputs. With our ChatGPT-based CQL generation framework, we
achieved the second place in the CCKS 2023 Question Answering with Knowledge
Graph Inference for Unmanned Systems competition, achieving an F1-score of
0.92676.
"
Retrieval-Augmented Code Generation for Universal Information Extraction,Yucan Guo,http://arxiv.org/pdf/2311.02962v1.pdf,2023-11-06,"['cs.ai', 'cs.cl', 'cs.ir']",2311.02962v1.pdf,"  Information Extraction (IE) aims to extract structural knowledge (e.g.,
entities, relations, events) from natural language texts, which brings
challenges to existing methods due to task-specific schemas and complex text
expressions. Code, as a typical kind of formalized language, is capable of
describing structural knowledge under various schemas in a universal way. On
the other hand, Large Language Models (LLMs) trained on both codes and texts
have demonstrated powerful capabilities of transforming texts into codes, which
provides a feasible solution to IE tasks. Therefore, in this paper, we propose
a universal retrieval-augmented code generation framework based on LLMs, called
Code4UIE, for IE tasks. Specifically, Code4UIE adopts Python classes to define
task-specific schemas of various structural knowledge in a universal way. By so
doing, extracting knowledge under these schemas can be transformed into
generating codes that instantiate the predefined Python classes with the
information in texts. To generate these codes more precisely, Code4UIE adopts
the in-context learning mechanism to instruct LLMs with examples. In order to
obtain appropriate examples for different tasks, Code4UIE explores several
example retrieval strategies, which can retrieve examples semantically similar
to the given texts. Extensive experiments on five representative IE tasks
across nine datasets demonstrate the effectiveness of the Code4UIE framework.
"
Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse  Finetuning,Sarkar Snigdha Sarathi Das,http://arxiv.org/pdf/2311.03748v1.pdf,2023-11-07,['cs.cl'],2311.03748v1.pdf,"  Unified Sequence Labeling that articulates different sequence labeling
problems such as Named Entity Recognition, Relation Extraction, Semantic Role
Labeling, etc. in a generalized sequence-to-sequence format opens up the
opportunity to make the maximum utilization of large language model knowledge
toward structured prediction. Unfortunately, this requires formatting them into
a specialized augmented format unknown to the base pretrained language models
(PLMs), necessitating fine-tuning to the target format. This significantly limits
its usefulness in data-limited settings where fine-tuning large models cannot
properly generalize to the target format. To address this challenge and
leverage PLM knowledge effectively, we propose FISH-DIP, a sample-aware dynamic
sparse finetuning strategy that selectively focuses on a fraction of
parameters, informed by feedback from highly regressing examples, during the
fine-tuning process. By leveraging the dynamism of sparsity, our approach
mitigates the impact of well-learned samples and prioritizes underperforming
instances for improvement in generalization. Across five tasks of sequence
labeling, we demonstrate that FISH-DIP can smoothly optimize the model in low
resource settings, offering up to 40% performance improvements over full
fine-tuning depending on target evaluation settings. Also, compared to
in-context learning and other parameter-efficient fine-tuning approaches,
FISH-DIP performs comparably or better, notably in extreme low-resource
settings.
"
UL2: Unifying Language Learning Paradigms,Yi Tay,http://arxiv.org/pdf/2205.05131v3.pdf,2022-05-10,['cs.cl'],2205.05131v3.pdf,"  Existing pre-trained models are generally geared towards a particular class
of problems. To date, there seems to be still no consensus on what the right
architecture and pre-training setup should be. This paper presents a unified
framework for pre-training models that are universally effective across
datasets and setups. We begin by disentangling architectural archetypes with
pre-training objectives -- two concepts that are commonly conflated. Next, we
present a generalized & unified perspective for self-supervision in NLP and
show how different pre-training objectives can be cast as one another and how
interpolating between different objectives can be effective. We then propose
Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse
pre-training paradigms together. We furthermore introduce a notion of mode
switching, wherein downstream fine-tuning is associated with specific
pre-training schemes. We conduct extensive ablative experiments to compare
multiple pre-training objectives and find that our method pushes the
Pareto-frontier by outperforming T5 & GPT-like models across multiple diverse
setups. By scaling our model up to 20B parameters, we achieve SOTA performance
on 50 well-established supervised fine-tuning based NLP tasks. Our model also
achieves strong results in in-context learning, outperforming 175B GPT-3 on
zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot
summarization. On 0-shot MMLU, UL2 20B outperforms T0 and T5 models. UL2 20B
also works well with chain-of-thought prompting and reasoning, making it an
appealing choice for research into reasoning at a small to medium scale of 20B
parameters. Finally, we apply FLAN instruction tuning to the UL2 20B model,
achieving MMLU and Big-Bench scores competitive to FLAN-PaLM 62B. We release
Flax-based T5X checkpoints for the UL2 20B & Flan-UL2 20B.
"
Human-Timescale Adaptation in an Open-Ended Task Space, Adaptive Agent Team,http://arxiv.org/pdf/2301.07608v1.pdf,2023-01-18,"['cs.lg', 'cs.ai', 'cs.ne']",2301.07608v1.pdf,"  Foundation models have shown impressive adaptation and scalability in
supervised and self-supervised learning problems, but so far these successes
have not fully translated to reinforcement learning (RL). In this work, we
demonstrate that training an RL agent at scale leads to a general in-context
learning algorithm that can adapt to open-ended novel embodied 3D problems as
quickly as humans. In a vast space of held-out environment dynamics, our
adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration,
efficient exploitation of acquired knowledge, and can successfully be prompted
with first-person demonstrations. Adaptation emerges from three ingredients:
(1) meta-reinforcement learning across a vast, smooth and diverse task
distribution, (2) a policy parameterised as a large-scale attention-based
memory architecture, and (3) an effective automated curriculum that prioritises
tasks at the frontier of an agent's capabilities. We demonstrate characteristic
scaling laws with respect to network size, memory length, and richness of the
training task distribution. We believe our results lay the foundation for
increasingly general and adaptive RL agents that perform well across
ever-larger open-ended domains.
"
DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4,Zhengliang Liu,http://arxiv.org/pdf/2303.11032v1.pdf,2023-03-20,"['cs.cl', 'cs.cy']",2303.11032v1.pdf,"  The digitization of healthcare has facilitated the sharing and re-using of
medical data but has also raised concerns about confidentiality and privacy.
HIPAA (Health Insurance Portability and Accountability Act) mandates removing
re-identifying information before the dissemination of medical records. Thus,
effective and efficient solutions for de-identifying medical data, especially
those in free-text forms, are highly needed. While various computer-assisted
de-identification methods, including both rule-based and learning-based, have
been developed and used in prior practice, such solutions still lack
generalizability or need to be fine-tuned according to different scenarios,
significantly restricting wider use. The advancement of large language
models (LLMs), such as ChatGPT and GPT-4, has shown great potential in
processing text data in the medical domain with zero-shot in-context learning,
especially in the task of privacy protection, as these models can identify
confidential information by their powerful named entity recognition (NER)
capability. In this work, we developed a novel GPT4-enabled de-identification
framework (""DeID-GPT"") to automatically identify and remove the identifying
information. Compared to existing commonly used medical text data
de-identification methods, our developed DeID-GPT showed the highest accuracy
and remarkable reliability in masking private information from the unstructured
medical text while preserving the original structure and meaning of the text.
This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text
data processing and de-identification, which provides insights for further
research and solution development on the use of LLMs such as ChatGPT/GPT-4 in
healthcare. Codes and benchmarking data information are available at
https://github.com/yhydhx/ChatGPT-API.
"
TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with  Millions of APIs,Yaobo Liang,http://arxiv.org/pdf/2303.16434v1.pdf,2023-03-29,"['cs.ai', 'cs.cl']",2303.16434v1.pdf,"  Artificial Intelligence (AI) has made incredible progress recently. On the
one hand, advanced foundation models like ChatGPT can offer powerful
conversation, in-context learning and code generation abilities on a broad
range of open-domain tasks. They can also generate high-level solution outlines
for domain-specific tasks based on the common sense knowledge they have
acquired. However, they still face difficulties with some specialized tasks
because they lack enough domain-specific data during pre-training or they often
have errors in their neural network computations on those tasks that need
accurate executions. On the other hand, there are also many existing models and
systems (symbolic-based or neural-based) that can do some domain-specific tasks
very well. However, due to the different implementation or working mechanisms,
they are not easily accessible or compatible with foundation models. Therefore,
there is a clear and pressing need for a mechanism that can leverage foundation
models to propose task solution outlines and then automatically match some of
the sub-tasks in the outlines to the off-the-shelf models and systems with
special functionalities to complete them. Inspired by this, we introduce
TaskMatrix.AI as a new AI ecosystem that connects foundation models with
millions of APIs for task completion. Unlike most previous work that aimed to
improve a single AI model, TaskMatrix.AI focuses more on using existing
foundation models (as a brain-like central system) and APIs of other AI models
and systems (as sub-task solvers) to achieve diversified tasks in both digital
and physical domains. As a position paper, we will present our vision of how to
build such an ecosystem, explain each key component, and use study cases to
illustrate both the feasibility of this vision and the main challenges we need
to address next.
"
Subject-driven Text-to-Image Generation via Apprenticeship Learning,Wenhu Chen,http://arxiv.org/pdf/2304.00186v5.pdf,2023-04-01,"['cs.cv', 'cs.ai']",2304.00186v5.pdf,"  Recent text-to-image generation models like DreamBooth have made remarkable
progress in generating highly customized images of a target subject, by
fine-tuning an ``expert model'' for a given subject from a few examples.
However, this process is expensive, since a new expert model must be learned
for each subject. In this paper, we present SuTI, a Subject-driven
Text-to-Image generator that replaces subject-specific fine tuning with
in-context learning. Given a few demonstrations of a new subject, SuTI can
instantly generate novel renditions of the subject in different scenes, without
any subject-specific optimization. SuTI is powered by apprenticeship learning,
where a single apprentice model is learned from data generated by a massive
number of subject-specific expert models. Specifically, we mine millions of
image clusters from the Internet, each centered around a specific visual
subject. We adopt these clusters to train a massive number of expert models,
each specializing in a different subject. The apprentice model SuTI then learns
to imitate the behavior of these fine-tuned experts. SuTI can generate
high-quality and customized subject-specific images 20x faster than
optimization-based SoTA methods. On the challenging DreamBench and
DreamBench-v2, our human evaluation shows that SuTI significantly outperforms
existing models like InstructPix2Pix, Textual Inversion, Imagic, Prompt2Prompt,
Re-Imagen and DreamBooth, especially on the subject and text alignment aspects.
"
Large Language Models are Edge-Case Fuzzers: Testing Deep Learning  Libraries via FuzzGPT,Yinlin Deng,http://arxiv.org/pdf/2304.02014v1.pdf,2023-04-04,['cs.se'],2304.02014v1.pdf,"  Deep Learning (DL) library bugs affect downstream DL applications,
emphasizing the need for reliable systems. Generating valid input programs for
fuzzing DL libraries is challenging due to the need for satisfying both
language syntax/semantics and constraints for constructing valid computational
graphs. Recently, the TitanFuzz work demonstrates that modern Large Language
Models (LLMs) can be directly leveraged to implicitly learn all the constraints
to generate valid DL programs for fuzzing. However, LLMs tend to generate
ordinary programs following similar patterns seen in their massive training
corpora, while fuzzing favors unusual inputs that cover edge cases or are
unlikely to be manually produced.
  To fill this gap, this paper proposes FuzzGPT, the first technique to prime
LLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the
well-known hypothesis that historical bug-triggering programs may include
rare/valuable code ingredients important for bug finding. Traditional
techniques leveraging such historical information require intensive human
efforts to design dedicated generators and ensure the validity of generated
programs. FuzzGPT demonstrates that this process can be fully automated via the
intrinsic capabilities of LLMs (including fine-tuning and in-context learning),
while being generalizable and applicable to challenging domains. While FuzzGPT
can be applied with different LLMs, this paper focuses on the powerful
GPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potential
of directly leveraging the instruct-following capability of the recent ChatGPT
for effective fuzzing. Evaluation on two popular DL libraries (PyTorch and
TensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz,
detecting 76 bugs, with 49 already confirmed as previously unknown bugs,
including 11 high-priority bugs or security vulnerabilities.
"
ImpressionGPT: An Iterative Optimizing Framework for Radiology Report  Summarization with ChatGPT,Chong Ma,http://arxiv.org/pdf/2304.08448v2.pdf,2023-04-17,"['cs.cl', 'cs.ai']",2304.08448v2.pdf,"  The 'Impression' section of a radiology report is a critical basis for
communication between radiologists and other physicians, and it is typically
written by radiologists based on the 'Findings' section. However, writing
numerous impressions can be laborious and error-prone for radiologists.
Although recent studies have achieved promising results in automatic impression
generation using large-scale medical text data for pre-training and fine-tuning
pre-trained language models, such models often require substantial amounts of
medical text data and have poor generalization performance. While large
language models (LLMs) like ChatGPT have shown strong generalization
capabilities and performance, their performance in specific domains, such as
radiology, remains under-investigated and potentially limited. To address this
limitation, we propose ImpressionGPT, which leverages the in-context learning
capability of LLMs by constructing dynamic contexts using domain-specific,
individualized data. This dynamic prompt approach enables the model to learn
contextual knowledge from semantically similar examples from existing data.
Additionally, we design an iterative optimization algorithm that performs
automatic evaluation on the generated impression results and composes the
corresponding instruction prompts to further optimize the model. The proposed
ImpressionGPT model achieves state-of-the-art performance on both MIMIC-CXR and
OpenI datasets without requiring additional training data or fine-tuning the
LLMs. This work presents a paradigm for localizing LLMs that can be applied in
a wide range of similar application scenarios, bridging the gap between
general-purpose LLMs and the specific language processing needs of various
domains.
"
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot  Speech and Singing Synthesizers,Kai Shen,http://arxiv.org/pdf/2304.09116v3.pdf,2023-04-18,"['eess.as', 'cs.ai', 'cs.cl', 'cs.lg', 'cs.sd']",2304.09116v3.pdf,"  Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild
datasets is important to capture the diversity in human speech such as speaker
identities, prosodies, and styles (e.g., singing). Current large TTS systems
usually quantize speech into discrete tokens and use language models to
generate these tokens one by one, which suffer from unstable prosody, word
skipping/repeating issues, and poor voice quality. In this paper, we develop
NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual
vector quantizers to get the quantized latent vectors and uses a diffusion
model to generate these latent vectors conditioned on text input. To enhance
the zero-shot capability that is important to achieve diverse speech synthesis,
we design a speech prompting mechanism to facilitate in-context learning in the
diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to
large-scale datasets with 44K hours of speech and singing data and evaluate its
voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS
systems by a large margin in terms of prosody/timbre similarity, robustness,
and voice quality in a zero-shot setting, and performs novel zero-shot singing
synthesis with only a speech prompt. Audio samples are available at
https://speechresearch.github.io/naturalspeech2.
"
Improving Language Model Negotiation with Self-Play and In-Context  Learning from AI Feedback,Yao Fu,http://arxiv.org/pdf/2305.10142v1.pdf,2023-05-17,['cs.cl'],2305.10142v1.pdf,"  We study whether multiple large language models (LLMs) can autonomously
improve each other in a negotiation game by playing, reflecting, and
criticizing. We are interested in this question because if LLMs were able to
improve each other, it would imply the possibility of creating strong AI agents
with minimal human intervention. We ask two LLMs to negotiate with each other,
playing the roles of a buyer and a seller, respectively. They aim to reach a
deal with the buyer targeting a lower price and the seller a higher one. A
third language model, playing the critic, provides feedback to a player to
improve the player's negotiation strategies. We let the two agents play
multiple rounds, using previous negotiation history and AI feedback as
in-context demonstrations to improve the model's negotiation strategy
iteratively. We use different LLMs (GPT and Claude) for different roles and use
the deal price as the evaluation metric. Our experiments reveal multiple
intriguing findings: (1) Only a subset of the language models we consider can
self-play and improve the deal price from AI feedback, weaker models either do
not understand the game's rules or cannot incorporate AI feedback for further
improvement. (2) Models' abilities to learn from the feedback differ when
playing different roles. For example, it is harder for Claude-instant to
improve as the buyer than as the seller. (3) When unrolling the game to
multiple rounds, stronger agents can consistently improve their performance by
meaningfully using previous experiences and iterative AI feedback, yet have a
higher risk of breaking the deal. We hope our work provides insightful initial
explorations of having models autonomously improve each other with game playing
and AI feedback.
"
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented  Languages,Sebastian Ruder,http://arxiv.org/pdf/2305.11938v2.pdf,2023-05-19,['cs.cl'],2305.11938v2.pdf,"  Data scarcity is a crucial issue for the development of highly multilingual
NLP systems. Yet for many under-represented languages (ULs) -- languages for
which NLP research is particularly far behind in meeting user needs -- it is
feasible to annotate small amounts of data. Motivated by this, we propose
XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather
than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by
speakers of high-resource languages; and its focus on under-represented
languages where this scarce-data scenario tends to be most realistic. XTREME-UP
evaluates the capabilities of language models across 88 under-represented
languages over 9 key user-centric technologies including ASR, OCR, MT, and
information access tasks that are of general utility. We create new datasets
for OCR, autocomplete, semantic parsing, and transliteration, and build on and
refine existing datasets for other tasks. XTREME-UP provides methodology for
evaluating many modeling scenarios including text-only, multi-modal (vision,
audio, and text), supervised parameter tuning, and in-context learning. We
evaluate commonly used models on the benchmark. We release all code and scripts
to train and evaluate models.
"
Memory-Efficient Fine-Tuning of Compressed Large Language Models via  sub-4-bit Integer Quantization,Jeonghoon Kim,http://arxiv.org/pdf/2305.14152v2.pdf,2023-05-23,"['cs.lg', 'cs.ai']",2305.14152v2.pdf,"  Large language models (LLMs) face the challenges in fine-tuning and
deployment due to their high memory demands and computational costs. While
parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage
of the optimizer state during fine-tuning, the inherent size of pre-trained LLM
weights continues to be a pressing concern. Even though quantization techniques
are widely proposed to ease memory demands and accelerate LLM inference, most
of these techniques are geared towards the deployment phase. To bridge this
gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation
(PEQA) - a simple yet effective method that combines the advantages of PEFT
with quantized LLMs. By updating solely the quantization scales, PEQA can be
directly applied to quantized LLMs, ensuring seamless task transitions.
Parallel to existing PEFT methods, PEQA significantly reduces the memory
overhead associated with the optimizer state. Furthermore, it leverages the
advantages of quantization to substantially reduce model sizes. Even after
fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact,
allowing for accelerated inference on the deployment stage. We employ
PEQA-tuning for task-specific adaptation on LLMs with up to 65 billion
parameters. To assess the logical reasoning and language comprehension of
PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using an instruction
dataset. Our results show that even when LLMs are quantized to below 4-bit
precision, their capabilities in language modeling, few-shot in-context
learning, and comprehension can be resiliently restored to (or even improved
over) their full-precision original performances with PEQA.
"
PaLI-X: On Scaling up a Multilingual Vision and Language Model,Xi Chen,http://arxiv.org/pdf/2305.18565v1.pdf,2023-05-29,"['cs.cv', 'cs.cl', 'cs.lg']",2305.18565v1.pdf,"  We present the training recipe and results of scaling up PaLI-X, a
multilingual vision and language model, both in terms of size of the components
and the breadth of its training task mixture. Our model achieves new levels of
performance on a wide range of varied and complex tasks, including multiple
image-based captioning and question-answering tasks, image-based document
understanding and few-shot (in-context) learning, as well as object detection,
video question answering, and video captioning. PaLI-X advances the
state-of-the-art on most vision-and-language benchmarks considered (25+ of
them). Finally, we observe emerging capabilities, such as complex counting and
multilingual object detection, tasks that are not explicitly in the training
mix.
"
"Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis,  and LLMs Evaluations",Lifan Yuan,http://arxiv.org/pdf/2306.04618v2.pdf,2023-06-07,"['cs.cl', 'cs.cr', 'cs.lg']",2306.04618v2.pdf,"  This paper reexamines the research on out-of-distribution (OOD) robustness in
the field of NLP. We find that the distribution shift settings in previous
studies commonly lack adequate challenges, hindering the accurate evaluation of
OOD robustness. To address these issues, we propose a benchmark construction
protocol that ensures clear differentiation and challenging distribution
shifts. Then we introduce BOSS, a Benchmark suite for Out-of-distribution
robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we
conduct a series of experiments on pre-trained language models for analysis and
evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the
relationship between in-distribution (ID) and OOD performance. We identify
three typical types that unveil the inner learning mechanism, which could
potentially facilitate the forecasting of OOD robustness, correlating with the
advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and
find that, despite exhibiting some effectiveness in specific cases, they do not
offer significant improvement compared to vanilla fine-tuning. Further, we
evaluate 5 LLMs with various adaptation paradigms and find that when sufficient
ID data is available, fine-tuned domain-specific models significantly outperform
LLMs on ID examples. However, in the case of OOD instances, prioritizing
LLMs with in-context learning yields better results. We identify that both
fine-tuned small models and LLMs face challenges in effectively addressing
downstream tasks. The code is public at
\url{https://github.com/lifan-yuan/OOD_NLP}.
"
Transformers as Statisticians: Provable In-Context Learning with  In-Context Algorithm Selection,Yu Bai,http://arxiv.org/pdf/2306.04637v2.pdf,2023-06-07,"['cs.lg', 'cs.ai', 'cs.cl', 'math.st', 'stat.ml', 'stat.th']",2306.04637v2.pdf,"  Neural sequence models based on the transformer architecture have
demonstrated remarkable \emph{in-context learning} (ICL) abilities, where they
can perform new tasks when prompted with training and test examples, without
any parameter update to the model. This work first provides a comprehensive
statistical theory for transformers to perform ICL. Concretely, we show that
transformers can implement a broad class of standard machine learning
algorithms in context, such as least squares, ridge regression, Lasso, learning
generalized linear models, and gradient descent on two-layer neural networks,
with near-optimal predictive power on various in-context data distributions.
Using an efficient implementation of in-context gradient descent as the
underlying mechanism, our transformer constructions admit mild size bounds, and
can be learned with polynomially many pretraining sequences.
  Building on these ``base'' ICL algorithms, intriguingly, we show that
transformers can implement more complex ICL procedures involving
\emph{in-context algorithm selection}, akin to what a statistician can do in
real life -- A \emph{single} transformer can adaptively select different base
ICL algorithms -- or even perform qualitatively different tasks -- on different
input sequences, without any explicit prompting of the right algorithm or task.
We both establish this in theory by explicit constructions, and also observe
this phenomenon experimentally. In theory, we construct two general mechanisms
for algorithm selection with concrete examples: pre-ICL testing, and post-ICL
validation. As an example, we use the post-ICL validation mechanism to
construct a transformer that can perform nearly Bayes-optimal ICL on a
challenging task -- noisy linear models with mixed noise levels.
Experimentally, we demonstrate the strong in-context algorithm selection
capabilities of standard transformer architectures.
"
Instruction Tuned Models are Quick Learners,Himanshu Gupta,http://arxiv.org/pdf/2306.05539v1.pdf,2023-05-17,['cs.cl'],2306.05539v1.pdf,"  Instruction tuning of language models has demonstrated the ability to enhance
model generalization to unseen tasks via in-context learning using a few
examples. However, typical supervised learning still requires a plethora of
downstream training data for finetuning. Often in real-world situations, there
is a scarcity of data available for finetuning, falling somewhere between few
shot inference and fully supervised finetuning. In this work, we demonstrate
the sample efficiency of instruction tuned models over various tasks by
estimating the minimal downstream training data required by them to perform
transfer learning and match the performance of state-of-the-art (SOTA)
supervised models. We conduct experiments on 119 tasks from Super Natural
Instructions (SuperNI) in both the single task learning (STL) and multi task
learning (MTL) settings. Our findings reveal that, in the STL setting,
instruction tuned models equipped with 25% of the downstream train data surpass
the SOTA performance on the downstream tasks. In the MTL setting, an
instruction tuned model trained on only 6% of downstream training data achieves
SOTA, while using 100% of the training data results in an improvement of 3.69
percentage points (ROUGE-L 74.68) over the previous SOTA. We conduct an analysis on
T5 vs Tk-Instruct by developing several baselines to demonstrate that
instruction tuning aids in increasing both sample efficiency and transfer
learning. Additionally, we observe a consistent ~4% performance increase in
both settings when pre-finetuning is performed with instructions. Finally, we
conduct a categorical study and find that contrary to previous results, tasks
in the question rewriting and title generation categories suffer from
instruction tuning.
"
Synapse: Trajectory-as-Exemplar Prompting with Memory for Computer  Control,Longtao Zheng,http://arxiv.org/pdf/2306.07863v2.pdf,2023-06-13,['cs.ai'],2306.07863v2.pdf,"  Building agents using large language models (LLMs) to control computers is an
emerging research field, where the agent perceives computer states and performs
actions to accomplish complex tasks. Previous computer agents have demonstrated
the benefits of in-context learning (ICL); however, their performance is
hindered by several issues. First, the limited context length of LLMs and
complex computer states restrict the number of exemplars, as a single webpage
can consume the entire context. Second, the exemplars in current methods, such
as high-level plans and multi-choice questions, cannot represent complete
trajectories, leading to suboptimal performance in tasks that require many
steps or repeated actions. Third, existing computer agents rely on
task-specific exemplars and overlook the similarity among tasks, resulting in
poor generalization to novel tasks. To address these challenges, we introduce
Synapse, featuring three key components: i) state abstraction, which filters
out task-irrelevant information from raw states, allowing more exemplars within
the limited context, ii) trajectory-as-exemplar prompting, which prompts the
LLM with complete trajectories of the abstracted states and actions for
improved multi-step decision-making, and iii) exemplar memory, which stores the
embeddings of exemplars and retrieves them via similarity search for
generalization to novel tasks. We evaluate Synapse on MiniWoB++, a standard
task suite, and Mind2Web, a real-world website benchmark. In MiniWoB++, Synapse
achieves a 99.2% average success rate (a 10% relative improvement) across 64
tasks using demonstrations from only 48 tasks. Notably, Synapse is the first
ICL method to solve the book-flight task in MiniWoB++. Synapse also exhibits a
53% relative improvement in average step success rate over the previous
state-of-the-art prompting scheme in Mind2Web.
"
Language to Rewards for Robotic Skill Synthesis,Wenhao Yu,http://arxiv.org/pdf/2306.08647v2.pdf,2023-06-14,"['cs.ro', 'cs.ai', 'cs.lg']",2306.08647v2.pdf,"  Large language models (LLMs) have demonstrated exciting progress in acquiring
diverse new capabilities through in-context learning, ranging from logical
reasoning to code-writing. Robotics researchers have also explored using LLMs
to advance the capabilities of robotic control. However, since low-level robot
actions are hardware-dependent and underrepresented in LLM training corpora,
existing efforts in applying LLMs to robotics have largely treated LLMs as
semantic planners or relied on human-engineered control primitives to interface
with the robot. On the other hand, reward functions are shown to be flexible
representations that can be optimized for control policies to achieve diverse
tasks, while their semantic richness makes them suitable to be specified by
LLMs. In this work, we introduce a new paradigm that harnesses this realization
by utilizing LLMs to define reward parameters that can be optimized and
accomplish a variety of robotic tasks. Using reward as the intermediate interface
generated by LLMs, we can effectively bridge the gap between high-level
language instructions or corrections and low-level robot actions. Meanwhile,
combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive
behavior creation experience where users can immediately observe the results
and provide feedback to the system. To systematically evaluate the performance
of our proposed method, we designed a total of 17 tasks for a simulated
quadruped robot and a dexterous manipulator robot. We demonstrate that our
proposed method reliably tackles 90% of the designed tasks, while a baseline
using primitive skills as the interface with Code-as-policies achieves 50% of
the tasks. We further validated our method on a real robot arm where complex
manipulation skills such as non-prehensile pushing emerge through our
interactive system.
"
Trained Transformers Learn Linear Models In-Context,Ruiqi Zhang,http://arxiv.org/pdf/2306.09927v3.pdf,2023-06-16,"['stat.ml', 'cs.ai', 'cs.cl', 'cs.lg']",2306.09927v3.pdf,"  Attention-based neural networks such as transformers have demonstrated a
remarkable ability to exhibit in-context learning (ICL): Given a short prompt
sequence of tokens from an unseen task, they can formulate relevant per-token
and next-token predictions without any parameter updates. By embedding a
sequence of labeled training data and unlabeled test data as a prompt, this
allows for transformers to behave like supervised learning algorithms. Indeed,
recent work has shown that when training transformer architectures over random
instances of linear regression problems, these models' predictions mimic those
of ordinary least squares.
  Towards understanding the mechanisms underlying this phenomenon, we
investigate the dynamics of ICL in transformers with a single linear
self-attention layer trained by gradient flow on linear regression tasks. We
show that despite non-convexity, gradient flow with a suitable random
initialization finds a global minimum of the objective function. At this global
minimum, when given a test prompt of labeled examples from a new prediction
task, the transformer achieves prediction error competitive with the best
linear predictor over the test prompt distribution. We additionally
characterize the robustness of the trained transformer to a variety of
distribution shifts and show that although a number of shifts are tolerated,
shifts in the covariate distribution of the prompts are not. Motivated by this,
we consider a generalized ICL setting where the covariate distributions can
vary across prompts. We show that although gradient flow succeeds at finding a
global minimum in this setting, the trained transformer is still brittle under
mild covariate shifts. We complement this finding with experiments on large,
nonlinear transformer architectures which we show are more robust under
covariate shifts.
"
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide  Resolution,Eric Nguyen,http://arxiv.org/pdf/2306.15794v1.pdf,2023-06-27,"['cs.lg', 'q-bio.gn']",2306.15794v1.pdf,"  Genomic (DNA) sequences encode an enormous amount of information for gene
regulation and protein synthesis. Similar to natural language models,
researchers have proposed foundation models in genomics to learn generalizable
features from unlabeled genome data that can then be fine-tuned for downstream
tasks such as identifying regulatory elements. Due to the quadratic scaling of
attention, previous Transformer-based genomic models have used 512 to 4k tokens
as context (<0.001% of the human genome), significantly limiting the modeling
of long-range interactions in DNA. In addition, these methods rely on
tokenizers to aggregate meaningful DNA units, losing single nucleotide
resolution where subtle genetic variations can completely alter protein
function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large
language model based on implicit convolutions, was shown to match attention in
quality while allowing longer context lengths and lower time complexity.
Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic
foundation model pretrained on the human reference genome with context lengths
of up to 1 million tokens at the single nucleotide-level, an up to 500x
increase over previous dense attention-based models. HyenaDNA scales
sub-quadratically in sequence length (training up to 160x faster than
Transformer), uses single nucleotide tokens, and has full global context at
each layer. We explore what longer context enables - including the first use of
in-context learning in genomics for simple adaptation to novel tasks without
updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide
Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets
using a model with orders of magnitude fewer parameters and less pretraining data. On
the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets on average by
+9 accuracy points.
"
Generative Type Inference for Python,Yun Peng,http://arxiv.org/pdf/2307.09163v1.pdf,2023-07-18,['cs.se'],2307.09163v1.pdf,"  Python is a popular dynamic programming language, evidenced by its ranking as
the second most commonly used language on GitHub. However, its dynamic type
system can lead to potential type errors, leading researchers to explore
automatic type inference approaches for Python programs. The rule-based type
inference approaches can ensure the accuracy of predicted variable types, but
they suffer from low coverage problems. Supervised type inference approaches,
while feature-agnostic, require large, high-quality annotated datasets and are
limited to pre-defined types. As zero-shot approaches, the cloze-style
approaches reformulate the type inference problem into a fill-in-the-blank
problem. However, their performance is limited.
  This paper introduces TypeGen, a few-shot generative type inference approach
that incorporates static domain knowledge from static analysis. TypeGen creates
chain-of-thought (COT) prompts by translating the type inference steps of
static analysis into prompts based on the type dependency graphs (TDGs),
enabling language models to learn from how static analysis infers types. By
combining COT prompts with code slices and type hints, TypeGen constructs
example prompts from human annotations. TypeGen only requires very few
annotated examples to teach language models to generate similar COT prompts via
in-context learning. Moreover, TypeGen enhances the interpretability of results
through the use of the input-explanation-output strategy. Experiments show that
TypeGen outperforms the best baseline Type4Py by 10.0% for argument type
prediction and 22.5% in return value type prediction in terms of top-1 Exact
Match by using only five examples. Furthermore, TypeGen achieves substantial
improvements of 27% to 84% compared to the zero-shot performance of large
language models with parameter sizes ranging from 1.3B to 175B in terms of
top-1 Exact Match.
"
Hypothesis Search: Inductive Reasoning with Language Models,Ruocheng Wang,http://arxiv.org/pdf/2309.05660v1.pdf,2023-09-11,"['cs.lg', 'cs.ai', 'cs.cl']",2309.05660v1.pdf,"  Inductive reasoning is a core problem-solving capacity: humans can identify
underlying principles from a few examples, which can then be robustly
generalized to novel scenarios. Recent work has evaluated large language models
(LLMs) on inductive reasoning tasks by directly prompting them, yielding ""in-context
learning."" This can work well for straightforward inductive tasks, but
performs very poorly on more complex tasks such as the Abstraction and
Reasoning Corpus (ARC). In this work, we propose to improve the inductive
reasoning ability of LLMs by generating explicit hypotheses at multiple levels
of abstraction: we prompt the LLM to propose multiple abstract hypotheses about
the problem, in natural language, then implement the natural language
hypotheses as concrete Python programs. These programs can be directly verified
by running on the observed examples and generalized to novel inputs. Because of
the prohibitive cost of generation with state-of-the-art LLMs, we consider a
middle step to filter the set of hypotheses that will be implemented into
programs: we either ask the LLM to summarize into a smaller set of hypotheses,
or ask human annotators to select a subset of the hypotheses. We verify our
pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its
variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem
subset of ARC, our automated pipeline using LLM summaries achieves 27.5%
accuracy, significantly outperforming the direct prompting baseline (accuracy
of 12.5%). With the minimal human input of selecting from LLM-generated
candidates, the performance is boosted to 37.5%. (And we argue this is a lower
bound on the performance of our approach without filtering.) Our ablation
studies show that abstract hypothesis generation and concrete program
representations are both beneficial for LLMs to perform inductive reasoning
tasks.
"
How FaR Are Large Language Models From Agents with Theory-of-Mind?,Pei Zhou,http://arxiv.org/pdf/2310.03051v1.pdf,2023-10-04,"['cs.cl', 'cs.ai']",2310.03051v1.pdf,"  ""Thinking is for Doing."" Humans can infer other people's mental states from
observations--an ability called Theory-of-Mind (ToM)--and subsequently act
pragmatically on those inferences. Existing question answering benchmarks such
as ToMi ask models questions to make inferences about beliefs of characters in
a story, but do not test whether models can then use these inferences to guide
their actions. We propose a new evaluation paradigm for large language models
(LLMs): Thinking for Doing (T4D), which requires models to connect inferences
about others' mental states to actions in social scenarios. Experiments on T4D
demonstrate that LLMs such as GPT-4 and PaLM 2 seemingly excel at tracking
characters' beliefs in stories, but they struggle to translate this capability
into strategic action. Our analysis reveals that the core challenge for LLMs lies in
identifying implicit inferences about mental states that are not explicitly asked
about (as in ToMi) but that lead to choosing the correct action in
T4D. To bridge this gap, we introduce a zero-shot prompting framework, Foresee
and Reflect (FaR), which provides a reasoning structure that encourages LLMs to
anticipate future challenges and reason about potential actions. FaR boosts
GPT-4's performance from 50% to 71% on T4D, outperforming other prompting
methods such as Chain-of-Thought and Self-Ask. Moreover, FaR generalizes to
diverse out-of-distribution story structures and scenarios that also require
ToM inferences to choose an action, consistently outperforming other methods
including few-shot in-context learning.
"
Entity Matching using Large Language Models,Ralph Peeters,http://arxiv.org/pdf/2310.11244v1.pdf,2023-10-17,"['cs.cl', 'cs.lg']",2310.11244v1.pdf,"  Entity Matching is the task of deciding whether two entity descriptions refer
to the same real-world entity. Entity Matching is a central step in most data
integration pipelines and an enabler for many e-commerce applications which
require matching product offers from different vendors. State-of-the-art
entity matching methods often rely on pre-trained language models (PLMs) such
as BERT or RoBERTa. Two major drawbacks of these models for entity matching are
that (i) the models require significant amounts of task-specific training data
and (ii) the fine-tuned models are not robust concerning out-of-distribution
entities. In this paper, we investigate using large language models (LLMs) for
entity matching as an alternative to PLM-based matchers that relies less on
domain-specific training data and is more robust. Our study covers hosted LLMs, such as GPT3.5
and GPT4, as well as open source LLMs based on Llama2 which can be run locally.
We evaluate these models in a zero-shot scenario as well as a scenario where
task-specific training data is available. We compare different prompt designs
as well as the prompt sensitivity of the models in the zero-shot scenario. We
investigate (i) the selection of in-context demonstrations, (ii) the generation
of matching rules, as well as (iii) fine-tuning GPT3.5 in the second scenario
using the same pool of training data across the different approaches. Our
experiments show that GPT4 without any task-specific training data outperforms
fine-tuned PLMs (RoBERTa and Ditto) on three out of five benchmark datasets
reaching F1 scores around 90%. The experiments with in-context learning and
rule generation show that all models besides GPT4 benefit from these
techniques (on average 5.9% and 2.2% F1), while GPT4 does not need such
additional guidance in most cases...
"
CycleAlign: Iterative Distillation from Black-box LLM to White-box  Models for Better Human Alignment,Jixiang Hong,http://arxiv.org/pdf/2310.16271v1.pdf,2023-10-25,"['cs.cl', 'cs.ai']",2310.16271v1.pdf,"  Language models trained on large-scale corpus often generate content that is
harmful, toxic, or contrary to human preferences, making their alignment with
human values a critical concern. Reinforcement learning from human feedback
(RLHF) with algorithms like PPO is a prevalent approach for alignment but is
often complex, unstable, and resource-intensive. Recently, ranking-based
alignment methods have emerged, offering stability and effectiveness by
replacing the RL framework with supervised fine-tuning, but they are costly due
to the need for annotated data. Considering that existing large language models
(LLMs) like ChatGPT are already relatively well-aligned and cost-friendly,
researchers have begun to align the language model with human preference from
AI feedback. The common practices, which unidirectionally distill the
instruction-following responses from LLMs, are constrained by their bottleneck.
Thus we introduce CycleAlign to distill alignment capabilities from
parameter-invisible LLMs (black-box) to a parameter-visible model (white-box)
in an iterative manner. With in-context learning (ICL) as the core of the
cycle, the black-box models are able to rank the model-generated responses
guided by human-craft instruction and demonstrations about their preferences.
During iterative interaction, the white-box models also have a judgment about
responses generated by them. Consequently, the agreement ranking could be
viewed as a pseudo label to dynamically update the in-context demonstrations
and improve the preference ranking ability of black-box models. Through
multiple interactions, the CycleAlign framework could align the white-box model
with the black-box model effectively in a low-resource way. Empirical results
illustrate that the model fine-tuned by CycleAlign remarkably exceeds existing
methods, and achieves state-of-the-art performance in alignment with human
values.
"
Transformers are Efficient In-Context Estimators for Wireless  Communication,Vicram Rajagopalan,http://arxiv.org/pdf/2311.00226v1.pdf,2023-11-01,"['eess.sp', 'cs.lg']",2311.00226v1.pdf,"  Pre-trained transformers can perform in-context learning, where they adapt to
a new task using only a small number of prompts without any explicit model
optimization. Inspired by this attribute, we propose a novel approach, called
in-context estimation, for the canonical communication problem of estimating
transmitted symbols from received symbols. A communication channel is
essentially a noisy function that maps transmitted symbols to received symbols,
and this function can be represented by an unknown parameter whose statistics
depend on an (also unknown) latent context. Conventional approaches ignore this
hierarchical structure and simply attempt to use known transmissions, called
pilots, to perform a least-squares estimate of the channel parameter, which is
then used to estimate successive, unknown transmitted symbols. We make the
basic connection that transformers show excellent contextual sequence
completion with a few prompts, and so they should be able to implicitly
determine the latent context from pilot symbols to perform end-to-end
in-context estimation of transmitted symbols. Furthermore, the transformer
should use information efficiently, i.e., it should utilize any pilots received
to attain the best possible symbol estimates. Through extensive simulations, we
show that in-context estimation not only significantly outperforms standard
approaches, but also achieves the same performance as an estimator with perfect
knowledge of the latent context within a few context examples. Thus, we make a
strong case that transformers are efficient in-context estimators in the
communication setting.
"
Multimodal Prompt Learning for Product Title Generation with Extremely  Limited Labels,Bang Yang,http://arxiv.org/pdf/2307.01969v1.pdf,2023-07-05,['cs.cv'],2307.01969v1.pdf,"  Generating an informative and attractive title for the product is a crucial
task for e-commerce. Most existing works follow the standard multimodal natural
language generation approaches, e.g., image captioning, and employ large-scale
human-labelled datasets to train desirable models. However, for novel
products, especially in a different domain, there is little existing labelled
data. In this paper, we propose a prompt-based approach, i.e., the Multimodal
Prompt Learning framework, to accurately and efficiently generate titles for
novel products with limited labels. We observe that the core challenges of
novel product title generation are the understanding of novel product
characteristics and the generation of titles in a novel writing style. To this
end, we build a set of multimodal prompts from different modalities to preserve
the corresponding characteristics and writing styles of novel products. As a
result, with extremely limited labels for training, the proposed method can
retrieve the multimodal prompts to generate desirable titles for novel
products. The experiments and analyses are conducted on five novel product
categories under both the in-domain and out-of-domain experimental settings.
The results show that, with only 1% of downstream labelled data for training,
our proposed approach achieves the best few-shot results and even achieves
competitive results with fully-supervised methods trained on 100% of training
data; With the full labelled data for training, our method achieves
state-of-the-art results.
"
Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative  Multimodal Prompt,Xiaocui Yang,http://arxiv.org/pdf/2305.10169v2.pdf,2023-05-17,['cs.mm'],2305.10169v2.pdf,"  We have witnessed the rapid proliferation of multimodal data on numerous
social media platforms. Conventional studies typically require massive labeled
data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA).
However, collecting and annotating fine-grained multimodal data for MABSA is
tough. To alleviate the above issue, we perform three MABSA-related tasks with
quite a small number of labeled multimodal samples. We first build diverse and
comprehensive multimodal few-shot datasets according to the data distribution.
To capture the specific prompt for each aspect term in a few-shot scenario, we
propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which
includes the Multimodal Encoder module and the N-Stream Decoders module. We
further introduce a subtask to predict the number of aspect terms in each
instance to construct the multimodal prompt. Extensive experiments on two
datasets demonstrate that our approach outperforms strong baselines on two
MABSA-related tasks in the few-shot setting.
"
VIMA: General Robot Manipulation with Multimodal Prompts,Yunfan Jiang,http://arxiv.org/pdf/2210.03094v2.pdf,2022-10-06,"['cs.ro', 'cs.ai', 'cs.lg']",2210.03094v2.pdf,"  Prompt-based learning has emerged as a successful paradigm in natural
language processing, where a single general-purpose language model can be
instructed to perform any task specified by input prompts. Yet task
specification in robotics comes in various forms, such as imitating one-shot
demonstrations, following language instructions, and reaching visual goals.
They are often considered different tasks and tackled by specialized models. We
show that a wide spectrum of robot manipulation tasks can be expressed with
multimodal prompts, interleaving textual and visual tokens. Accordingly, we
develop a new simulation benchmark that consists of thousands of
procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert
trajectories for imitation learning, and a four-level evaluation protocol for
systematic generalization. We design a transformer-based robot agent, VIMA,
that processes these prompts and outputs motor actions autoregressively. VIMA
features a recipe that achieves strong model scalability and data efficiency.
It outperforms alternative designs in the hardest zero-shot generalization
setting by up to $2.9\times$ task success rate given the same training data.
With $10\times$ less training data, VIMA still performs $2.7\times$ better than
the best competing variant. Code and video demos are available at
https://vimalabs.github.io/
"
Delving into Multimodal Prompting for Fine-grained Visual Classification,Xin Jiang,http://arxiv.org/pdf/2309.08912v1.pdf,2023-09-16,"['cs.cv', 'cs.mm']",2309.08912v1.pdf,"  Fine-grained visual classification (FGVC) involves categorizing fine
subdivisions within a broader category, which poses challenges due to subtle
inter-class discrepancies and large intra-class variations. However, prevailing
approaches primarily focus on uni-modal visual concepts. Recent advancements in
pre-trained vision-language models have demonstrated remarkable performance in
various high-level vision tasks, yet the applicability of such models to FGVC
tasks remains uncertain. In this paper, we aim to fully exploit the
capabilities of cross-modal description to tackle FGVC tasks and propose a
novel multimodal prompting solution, denoted as MP-FGVC, based on the
contrastive language-image pretraining (CLIP) model. Our MP-FGVC comprises a
multimodal prompts scheme and a multimodal adaptation scheme. The former
includes Subcategory-specific Vision Prompt (SsVP) and Discrepancy-aware Text
Prompt (DaTP), which explicitly highlights the subcategory-specific
discrepancies from the perspectives of both vision and language. The latter
aligns the vision and text prompting elements in a common semantic space,
facilitating cross-modal collaborative reasoning through a Vision-Language
Fusion Module (VLFM) for further improvement on FGVC. Moreover, we tailor a
two-stage optimization strategy for MP-FGVC to fully leverage the pre-trained
CLIP model and expedite efficient adaptation for FGVC. Extensive experiments
conducted on four FGVC datasets demonstrate the effectiveness of our MP-FGVC.
"
Multimodal Prompt Transformer with Hybrid Contrastive Learning for  Emotion Recognition in Conversation,Shihao Zou,http://arxiv.org/pdf/2310.04456v1.pdf,2023-10-04,"['cs.cl', 'cs.sd', 'eess.as']",2310.04456v1.pdf,"  Emotion Recognition in Conversation (ERC) plays an important role in driving
the development of human-machine interaction. Emotions can exist in multiple
modalities, and multimodal ERC mainly faces two problems: (1) the noise problem
in the cross-modal information fusion process, and (2) the prediction problem
of emotion labels that have few samples and are semantically similar but belong to
different categories. To address these issues and fully utilize the features of each
modality, we adopted the following strategies: first, deep emotion cues
extraction was performed on modalities with strong representation ability, and
feature filters were designed as multimodal prompt information for modalities
with weak representation ability. Then, we designed a Multimodal Prompt
Transformer (MPT) to perform cross-modal information fusion. MPT embeds
multimodal fusion information into each attention layer of the Transformer,
allowing prompt information to participate in encoding textual features and
being fused with multi-level textual information to obtain better multimodal
fusion features. Finally, we used the Hybrid Contrastive Learning (HCL)
strategy to optimize the model's ability to handle labels with few samples.
This strategy uses unsupervised contrastive learning to improve the
representation ability of multimodal fusion and supervised contrastive learning
to mine the information of labels with few samples. Experimental results show
that our proposed model outperforms state-of-the-art models in ERC on two
benchmark datasets.
"
2nd Place Winning Solution for the CVPR2023 Visual Anomaly and Novelty  Detection Challenge: Multimodal Prompting for Data-centric Anomaly Detection,Yunkang Cao,http://arxiv.org/pdf/2306.09067v2.pdf,2023-06-15,['cs.cv'],2306.09067v2.pdf,"  This technical report introduces the winning solution of the team Segment Any
Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge.
Going beyond uni-modal prompt, e.g., language prompt, we present a novel
framework, i.e., Segment Any Anomaly + (SAA$+$), for zero-shot anomaly
segmentation with multi-modal prompts for the regularization of cascaded modern
foundation models. Inspired by the great zero-shot generalization ability of
foundation models like Segment Anything, we first explore their assembly (SAA)
to leverage diverse multi-modal prior knowledge for anomaly localization.
Subsequently, we further introduce multimodal prompts (SAA$+$) derived from
domain expert knowledge and target image context to enable the non-parameter
adaptation of foundation models to anomaly segmentation. The proposed SAA$+$
model achieves state-of-the-art performance on several anomaly segmentation
benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will
release the code of our winning solution for the CVPR2023 VAND challenge.
"
Multimodal Prompt Retrieval for Generative Visual Question Answering,Timothy Ossowski,http://arxiv.org/pdf/2306.17675v1.pdf,2023-06-30,"['cs.cv', 'cs.ai']",2306.17675v1.pdf,"  Recent years have witnessed impressive results of pre-trained vision-language
models on knowledge-intensive tasks such as visual question answering (VQA).
Despite the recent advances in VQA, existing methods mainly adopt a
discriminative formulation that predicts answers within a pre-defined label
set, leading to easy overfitting on low-resource domains with limited labeled
data (e.g., medicine) and poor generalization under domain shift to another
dataset. To tackle this limitation, we propose a novel generative model
enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts
and multimodal features to generate answers in free text. Our generative model
enables rapid zero-shot dataset adaptation to unseen data distributions and
open-set answer labels across datasets. Our experiments on medical VQA tasks
show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy
points in a few-shot domain adaptation setting.
"
Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts,Deniz Engin,http://arxiv.org/pdf/2309.15915v1.pdf,2023-09-27,['cs.cv'],2309.15915v1.pdf,"  Recent vision-language models are driven by large-scale pretrained models.
However, adapting pretrained models on limited data presents challenges such as
overfitting, catastrophic forgetting, and the cross-modal gap between vision
and language. We introduce a parameter-efficient method to address these
challenges, combining multimodal prompt learning and a transformer-based
mapping network, while keeping the pretrained models frozen. Our experiments on
several video question answering benchmarks demonstrate the superiority of our
approach in terms of performance and parameter efficiency on both zero-shot and
few-shot settings. Our code is available at https://engindeniz.github.io/vitis.
"
Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting,Syed Talal Wasim,http://arxiv.org/pdf/2304.03307v1.pdf,2023-04-06,"['cs.cv', 'eess.iv']",2304.03307v1.pdf,"  Adopting contrastive image-text pretrained models like CLIP towards video
classification has gained attention due to its cost-effectiveness and
competitive performance. However, recent works in this area face a trade-off.
Finetuning the pretrained model to achieve strong supervised performance
results in low zero-shot generalization. Similarly, freezing the backbone to
retain zero-shot capability causes significant drop in supervised accuracy.
Because of this, recent works in the literature typically train separate models for
supervised and zero-shot action recognition. In this work, we propose a
multimodal prompt learning scheme that works to balance the supervised and
zero-shot performance under a single unified training. Our prompting approach
on the vision side caters for three aspects: 1) Global video-level prompts to
model the data distribution; 2) Local frame-level prompts to provide per-frame
discriminative conditioning; and 3) a summary prompt to extract a condensed
video representation. Additionally, we define a prompting scheme on the text
side to augment the textual context. Through this prompting scheme, we can
achieve state-of-the-art zero-shot performance on Kinetics-600, HMDB51 and
UCF101 while remaining competitive in the supervised setting. By keeping the
pretrained backbone frozen, we optimize a much lower number of parameters and
retain the existing general representation which helps achieve the strong
zero-shot performance. Our codes/models are released at
https://github.com/TalalWasim/Vita-CLIP.
"
Similarity-Aware Multimodal Prompt Learning for Fake News Detection,Ye Jiang,http://arxiv.org/pdf/2304.04187v3.pdf,2023-04-09,['cs.cl'],2304.04187v3.pdf,"  The standard paradigm for fake news detection mainly utilizes text
information to model the truthfulness of news. However, the discourse of online
fake news is typically subtle and it requires expert knowledge to use textual
information to debunk fake news. Recently, studies focusing on multimodal fake
news detection have outperformed text-only methods. Recent approaches utilizing
the pre-trained model to extract unimodal features, or fine-tuning the
pre-trained model directly, have become a new paradigm for detecting fake news.
Again, this paradigm either requires a large number of training instances, or
updates the entire set of pre-trained model parameters, making real-world fake
news detection impractical. Furthermore, traditional multimodal methods fuse
the cross-modal features directly without considering that the uncorrelated
semantic representation might inject noise into the multimodal features. This
paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE)
framework. First, we incorporate prompt learning into multimodal fake news
detection. Prompt learning, which only tunes prompts with a frozen language
model, can reduce memory usage significantly and achieve comparable
performances, compared with fine-tuning. We analyse three prompt templates with
a soft verbalizer to detect fake news. In addition, we introduce the
similarity-aware fusing method to adaptively fuse the intensity of multimodal
representation and mitigate the noise injection via uncorrelated cross-modal
features. For evaluation, SAMPLE surpasses the F1 and the accuracies of
previous works on two benchmark multimodal datasets, demonstrating the
effectiveness of the proposed method in detecting fake news. In addition,
SAMPLE is also superior to other approaches in both few-shot and
data-rich settings.
"
Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal  Guided Diffusion,Nisha Huang,http://arxiv.org/pdf/2209.13360v2.pdf,2022-09-27,['cs.cv'],2209.13360v2.pdf,"  Digital art synthesis is receiving increasing attention in the multimedia
community because of engaging the public with art effectively. Current digital
art synthesis methods usually use single-modality inputs as guidance, thereby
limiting the expressiveness of the model and the diversity of generated
results. To solve this problem, we propose the multimodal guided artwork
diffusion (MGAD) model, which is a diffusion-based digital artwork generation
approach that utilizes multimodal prompts as guidance to control the
classifier-free diffusion model. Additionally, the contrastive language-image
pretraining (CLIP) model is used to unify text and image modalities. Extensive
experimental results on the quality and quantity of the generated digital art
paintings confirm the effectiveness of the combination of the diffusion model
and multimodal guidance. Code is available at
https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion.
"
Multimodal Prompting with Missing Modalities for Visual Recognition,Yi-Lun Lee,http://arxiv.org/pdf/2303.03369v2.pdf,2023-03-06,['cs.cv'],2303.03369v2.pdf,"  In this paper, we tackle two challenges in multimodal learning for visual
recognition: 1) when missing-modality occurs either during training or testing
in real-world situations; and 2) when the computation resources are not
available to finetune on heavy transformer models. To this end, we propose to
utilize prompt learning and mitigate the above two challenges together.
Specifically, our modality-missing-aware prompts can be plugged into multimodal
transformers to handle general missing-modality cases, while only requiring
less than 1% of learnable parameters compared to training the entire model. We
further explore the effect of different prompt configurations and analyze the
robustness to missing modality. Extensive experiments are conducted to show the
effectiveness of our prompt learning framework that improves the performance
under various missing-modality cases, while alleviating the requirement of
heavy model re-training. Code is available.
"
Audio Visual Language Maps for Robot Navigation,Chenguang Huang,http://arxiv.org/pdf/2303.07522v2.pdf,2023-03-13,"['cs.ro', 'cs.ai', 'cs.cl', 'cs.cv', 'cs.lg']",2303.07522v2.pdf,"  While interacting in the world is a multi-sensory experience, many robots
continue to predominantly rely on visual perception to map and navigate in
their environments. In this work, we propose Audio-Visual-Language Maps
(AVLMaps), a unified 3D spatial map representation for storing cross-modal
information from audio, visual, and language cues. AVLMaps integrate the
open-vocabulary capabilities of multimodal foundation models pre-trained on
Internet-scale data by fusing their features into a centralized 3D voxel grid.
In the context of navigation, we show that AVLMaps enable robot systems to
index goals in the map based on multimodal queries, e.g., textual descriptions,
images, or audio snippets of landmarks. In particular, the addition of audio
information enables robots to more reliably disambiguate goal locations.
Extensive experiments in simulation show that AVLMaps enable zero-shot
multimodal goal navigation from multimodal prompts and provide 50% better
recall in ambiguous scenarios. These capabilities extend to mobile robots in
the real world - navigating to landmarks referring to visual, audio, and
spatial concepts. Videos and code are available at: https://avlmaps.github.io.
"
Multitask Multimodal Prompted Training for Interactive Embodied Task  Completion,Georgios Pantazopoulos,http://arxiv.org/pdf/2311.04067v1.pdf,2023-11-07,"['cs.lg', 'cs.ai', 'cs.cv']",2311.04067v1.pdf,"  Interactive and embodied tasks pose at least two fundamental challenges to
existing Vision & Language (VL) models, including 1) grounding language in
trajectories of actions and observations, and 2) referential disambiguation. To
tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a
unified encoder-decoder model that reasons over images and trajectories, and
casts action prediction as multimodal text generation. By unifying all tasks as
text generation, EMMA learns a language of actions which facilitates transfer
across tasks. Different to previous modular approaches with independently
trained components, we use a single multitask model where each task contributes
to goal completion. EMMA performs on par with similar models on several VL
benchmarks and sets a new state-of-the-art performance (36.81% success rate) on
the Dialog-guided Task Completion (DTC), a benchmark to evaluate dialog-guided
agents in the Alexa Arena.
"
MaPLe: Multi-modal Prompt Learning,Muhammad Uzair Khattak,http://arxiv.org/pdf/2210.03117v3.pdf,2022-10-06,['cs.cv'],2210.03117v3.pdf,"  Pre-trained vision-language (V-L) models such as CLIP have shown excellent
generalization ability to downstream tasks. However, they are sensitive to the
choice of input text prompts and require careful selection of prompt templates
to perform well. Inspired by the Natural Language Processing (NLP) literature,
recent CLIP adaptation approaches learn prompts as the textual inputs to
fine-tune CLIP for downstream tasks. We note that using prompting to adapt
representations in a single branch of CLIP (language or vision) is sub-optimal
since it does not allow the flexibility to dynamically adjust both
representation spaces on a downstream task. In this work, we propose
Multi-modal Prompt Learning (MaPLe) for both vision and language branches to
improve alignment between the vision and language representations. Our design
promotes strong coupling between the vision-language prompts to ensure mutual
synergy and discourages learning independent uni-modal solutions. Further, we
learn separate prompts across different early stages to progressively model the
stage-wise feature relationships to allow rich context learning. We evaluate
the effectiveness of our approach on three representative tasks of
generalization to novel classes, new target datasets and unseen domain shifts.
Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable
performance and achieves an absolute gain of 3.45% on novel classes and 2.72%
on overall harmonic-mean, averaged over 11 diverse image recognition datasets.
Our code and pre-trained models are available at
https://github.com/muzairkhattak/multimodal-prompt-learning.
"
Few-shot Multimodal Sentiment Analysis based on Multimodal Probabilistic  Fusion Prompts,Xiaocui Yang,http://arxiv.org/pdf/2211.06607v2.pdf,2022-11-12,"['cs.cl', 'cs.mm']",2211.06607v2.pdf,"  Multimodal sentiment analysis has gained significant attention due to the
proliferation of multimodal content on social media. However, existing studies
in this area rely heavily on large-scale supervised data, which is
time-consuming and labor-intensive to collect. Thus, there is a need to address
the challenge of few-shot multimodal sentiment analysis. To tackle this
problem, we propose a novel method called Multimodal Probabilistic Fusion
Prompts (MultiPoint) that leverages diverse cues from different modalities for
multimodal sentiment detection in the few-shot scenario. Specifically, we start
by introducing a Consistently Distributed Sampling approach called CDS, which
ensures that the few-shot dataset has the same category distribution as the
full dataset. Unlike previous approaches primarily using prompts based on the
text modality, we design unified multimodal prompts to reduce discrepancies
between different modalities and dynamically incorporate multimodal
demonstrations into the context of each multimodal instance. To enhance the
model's robustness, we introduce a probabilistic fusion method to fuse output
predictions from multiple diverse prompts for each input. Our extensive
experiments on six datasets demonstrate the effectiveness of our approach.
First, our method outperforms strong baselines in the multimodal few-shot
setting. Furthermore, under the same amount of data (1% of the full dataset),
our CDS-based experimental results significantly outperform those based on
previously sampled datasets constructed from the same number of instances of
each class.
"
Multimodal Garment Designer: Human-Centric Latent Diffusion Models for  Fashion Image Editing,Alberto Baldrati,http://arxiv.org/pdf/2304.02051v2.pdf,2023-04-04,"['cs.cv', 'cs.ai', 'cs.mm']",2304.02051v2.pdf,"  Fashion illustration is used by designers to communicate their vision and to
bring the design idea from conceptualization to realization, showing how
clothes interact with the human body. In this context, computer vision can thus
be used to improve the fashion design process. Differently from previous works
that mainly focused on the virtual try-on of garments, we propose the task of
multimodal-conditioned fashion image editing, guiding the generation of
human-centric fashion images by following multimodal prompts, such as text,
human body poses, and garment sketches. We tackle this problem by proposing a
new architecture based on latent diffusion models, an approach that has not
been used before in the fashion domain. Given the lack of existing datasets
suitable for the task, we also extend two existing fashion datasets, namely
Dress Code and VITON-HD, with multimodal annotations collected in a
semi-automatic manner. Experimental results on these new datasets demonstrate
the effectiveness of our proposal, both in terms of realism and coherence with
the given multimodal inputs. Source code and collected multimodal annotations
are publicly available at:
https://github.com/aimagelab/multimodal-garment-designer.
"
Parameter-efficient Tuning of Large-scale Multimodal Foundation Model,Haixin Wang,http://arxiv.org/pdf/2305.08381v3.pdf,2023-05-15,['cs.cv'],2305.08381v3.pdf,"  Driven by the progress of large-scale pre-training, parameter-efficient
transfer learning has gained immense popularity across different subfields of
Artificial Intelligence. The core is to adapt the model to downstream tasks
with only a small set of parameters. Recently, researchers have leveraged such
proven techniques in multimodal tasks and achieved promising results. However,
two critical issues remain unresolved: how to further reduce the complexity
with lightweight design and how to boost alignment between modalities under
extremely low parameters. In this paper, we propose A graceful prompt framework
for cross-modal transfer (Aurora) to overcome these challenges. Considering the
redundancy in existing architectures, we first utilize the mode approximation
to generate 0.1M trainable parameters to implement the multimodal prompt
tuning, which explores the low intrinsic dimension with only 0.04% parameters
of the pre-trained model. Then, for better modality alignment, we propose the
Informative Context Enhancement and Gated Query Transformation module under
extremely few parameters scenes. A thorough evaluation on six cross-modal
benchmarks shows that it not only outperforms the state-of-the-art but even
outperforms the full fine-tuning approach. Our code is available at:
https://github.com/WillDreamer/Aurora.
"
RM-PRT: Realistic Robotic Manipulation Simulator and Benchmark with  Progressive Reasoning Tasks,Pengzhen Ren,http://arxiv.org/pdf/2306.11335v2.pdf,2023-06-20,"['cs.ro', 'cs.ai', 'cs.cv', 'cs.lg']",2306.11335v2.pdf,"  Recently, the advent of pre-trained large-scale language models (LLMs) like
ChatGPT and GPT-4 has significantly advanced machines' natural language
understanding capabilities. This breakthrough has allowed us to seamlessly
integrate these open-source LLMs into a unified robot simulator environment to
help robots accurately understand and execute human natural language
instructions. To this end, in this work, we introduce a realistic robotic
manipulation simulator and build a Robotic Manipulation with Progressive
Reasoning Tasks (RM-PRT) benchmark on this basis. Specifically, the RM-PRT
benchmark builds a new high-fidelity digital twin scene based on Unreal Engine
5, which includes 782 categories, 2023 objects, and 15K natural language
instructions generated by ChatGPT for a detailed evaluation of robot
manipulation. We propose a general pipeline for the RM-PRT benchmark that takes
as input multimodal prompts containing natural language instructions and
automatically outputs actions containing the movement and position transitions.
We set four natural language understanding tasks with progressive reasoning
levels and evaluate the robot's ability to understand natural language
instructions in two modes of adsorption and grasping. In addition, we also
conduct a comprehensive analysis and comparison of the differences and
advantages of 10 different LLMs in instruction understanding and generation
quality. We hope the new simulator and benchmark will facilitate future
research on language-guided robotic manipulation. Project website:
https://necolizer.github.io/RM-PRT/ .
"
Reframing Instructional Prompts to GPTk's Language,Swaroop Mishra,http://arxiv.org/pdf/2109.07830v3.pdf,2021-09-16,"['cs.cl', 'cs.ai', 'cs.lg']",2109.07830v3.pdf,"  What kinds of instructional prompts are easier to follow for Language Models
(LMs)? We study this question by conducting extensive empirical analysis that
sheds light on important features of successful instructional prompts.
Specifically, we study several classes of reframing techniques for manual
reformulation of prompts into more effective ones. Some examples include
decomposing a complex task instruction into multiple simpler tasks or itemizing
instructions into sequential steps. Our experiments compare the zero-shot and
few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks
across 6 categories. Compared with original instructions, our reframed
instructions lead to significant improvements across LMs with different sizes.
For example, the same reframed prompts boost few-shot performance of
GPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all
tasks. Furthermore, reframed instructions reduce the number of examples
required to prompt LMs in the few-shot setting. We hope these
empirically-driven techniques will pave the way towards more effective future
prompting algorithms.
"
Prompt-Based Learning for Thread Structure Prediction in Cybersecurity  Forums,Kazuaki Kashihara,http://arxiv.org/pdf/2303.05400v1.pdf,2023-03-05,"['cs.cl', 'cs.ai', 'cs.cr']",2303.05400v1.pdf,"  With recent trends indicating cyber crimes increasing in both frequency and
cost, it is imperative to develop new methods that leverage data-rich hacker
forums to assist in combating ever evolving cyber threats. Defining
interactions within these forums is critical as it facilitates identifying
highly skilled users, which can improve prediction of novel threats and future
cyber attacks. We propose a method called Next Paragraph Prediction with
Instructional Prompting (NPP-IP) to predict thread structures while grounded on
the context around posts. To our knowledge, this is the first application of an
instructional prompting approach to the cybersecurity domain. We evaluate NPP-IP
on the Reddit and Hacker Forums datasets, which contain posts and thread
structures from real hacker forum threads, and compare its performance with
existing methods. The experimental evaluation shows that our proposed method
predicts thread structures significantly better than existing methods, allowing
for better social network prediction based on forum interactions.
"
Red Teaming Language Model Detectors with Language Models,Zhouxing Shi,http://arxiv.org/pdf/2305.19713v2.pdf,2023-05-31,"['cs.cl', 'cs.lg']",2305.19713v2.pdf,"  The prevalence and strong capability of large language models (LLMs) present
significant safety and ethical risks if exploited by malicious users. To
prevent the potentially deceptive usage of LLMs, recent works have proposed
algorithms to detect LLM-generated text and protect LLMs. In this paper, we
investigate the robustness and reliability of these LLM detectors under
adversarial attacks. We study two types of attack strategies: 1) replacing
certain words in an LLM's output with their synonyms given the context; 2)
automatically searching for an instructional prompt to alter the writing style
of the generation. In both strategies, we leverage an auxiliary LLM to generate
the word replacements or the instructional prompt. Different from previous
works, we consider a challenging setting where the auxiliary LLM can also be
protected by a detector. Experiments reveal that our attacks effectively
compromise the performance of all detectors in the study with plausible
generations, underscoring the urgent need to improve the robustness of
LLM-generated text detection systems.
"
Large Language Models Encode Clinical Knowledge,Karan Singhal,http://arxiv.org/pdf/2212.13138v1.pdf,2022-12-26,['cs.cl'],2212.13138v1.pdf,"  Large language models (LLMs) have demonstrated impressive capabilities in
natural language understanding and generation, but the quality bar for medical
and clinical applications is high. Today, attempts to assess models' clinical
knowledge typically rely on automated evaluations on limited benchmarks. There
is no standard to evaluate model predictions and reasoning across a breadth of
tasks. To address this, we present MultiMedQA, a benchmark combining six
existing open question answering datasets spanning professional medical exams,
research, and consumer queries; and HealthSearchQA, a new free-response dataset
of medical questions searched online. We propose a framework for human
evaluation of model answers along multiple axes including factuality,
precision, possible harm, and bias. In addition, we evaluate PaLM (a
540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on
MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves
state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA,
MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US
Medical License Exam questions), surpassing prior state-of-the-art by over 17%.
However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve
this we introduce instruction prompt tuning, a parameter-efficient approach for
aligning LLMs to new domains using a few exemplars. The resulting model,
Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show
that comprehension, recall of knowledge, and medical reasoning improve with
model scale and instruction prompt tuning, suggesting the potential utility of
LLMs in medicine. Our human evaluations reveal important limitations of today's
models, reinforcing the importance of both evaluation frameworks and method
development in creating safe, helpful LLM models for clinical applications.
"
Layout and Task Aware Instruction Prompt for Zero-shot Document Image  Question Answering,Wenjin Wang,http://arxiv.org/pdf/2306.00526v4.pdf,2023-06-01,"['cs.cl', 'cs.ai', 'cs.cv']",2306.00526v4.pdf,"  Layout-aware pre-trained models have achieved significant progress on document
image question answering. They introduce extra learnable modules into existing
language models to capture layout information within document images from text
bounding box coordinates obtained by OCR tools. However, extra modules
necessitate pre-training on extensive document images. This prevents these
methods from directly utilizing off-the-shelf instruction-tuning language
foundation models, which have recently shown promising potential in zero-shot
learning. Instead, in this paper, we find that instruction-tuning language
models like Claude and ChatGPT can understand layout represented by spaces and line breaks.
Based on this observation, we propose the LAyout and Task aware Instruction
Prompt (LATIN-Prompt), which consists of layout-aware document content and
task-aware instruction. Specifically, the former uses appropriate spaces and
line breaks to recover the layout information among text segments obtained by
OCR tools, and the latter ensures that generated answers adhere to formatting
requirements. Moreover, we propose the LAyout and Task aware Instruction Tuning
(LATIN-Tuning) to improve the performance of small instruction-tuning models
like Alpaca. Experimental results show that LATIN-Prompt enables zero-shot
performance of Claude and ChatGPT to be comparable to the fine-tuning
performance of SOTAs on document image question answering, and LATIN-Tuning
enhances the zero-shot performance of Alpaca significantly. For example,
LATIN-Prompt improves the performance of Claude and ChatGPT on DocVQA by 263%
and 20% respectively. LATIN-Tuning improves the performance of Alpaca on DocVQA
by 87.7%. Quantitative and qualitative analyses demonstrate the effectiveness
of LATIN-Prompt and LATIN-Tuning. We provide the code in the supplementary material and will
release it to facilitate future research.
"
InstructUIE: Multi-task Instruction Tuning for Unified Information  Extraction,Xiao Wang,http://arxiv.org/pdf/2304.08085v1.pdf,2023-04-17,"['cs.cl', 'cs.ai']",2304.08085v1.pdf,"  Large language models have unlocked strong multi-task capabilities from
reading instructive prompts. However, recent studies have shown that existing
large models still have difficulty with information extraction tasks. For
example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset,
which is significantly lower than the state-of-the-art performance. In this
paper, we propose InstructUIE, a unified information extraction framework based
on instruction tuning, which can uniformly model various information extraction
tasks and capture the inter-task dependency. To validate the proposed method,
we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction
datasets in a unified text-to-text format with expert-written instructions.
Experimental results demonstrate that our method achieves comparable
performance to BERT in supervised settings and significantly outperforms the
state-of-the-art and gpt3.5 in zero-shot settings.
"
Camoscio: an Italian Instruction-tuned LLaMA,Andrea Santilli,http://arxiv.org/pdf/2307.16456v1.pdf,2023-07-31,['cs.cl'],2307.16456v1.pdf,"  In recent years Large Language Models (LLMs) have increased the state of the
art on several natural language processing tasks. However, their accessibility
is often limited to paid API services, posing challenges for researchers in
conducting extensive investigations. On the other hand, while some open-source
models have been proposed by the community, they are typically multilingual and
not specifically tailored for the Italian language. In an effort to democratize
the available and open resources for the Italian language, in this paper we
introduce Camoscio: a language model specifically tuned to follow users'
prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA
(7b) with LoRA on a corpus of instruction prompts translated to Italian via
ChatGPT. Results indicate that the model's zero-shot performance on various
downstream tasks in Italian competes favorably with existing models
specifically finetuned for those tasks. All the artifacts (code, dataset,
model) are released to the community at the following url:
https://github.com/teelinsan/camoscio
"
Self-Alignment with Instruction Backtranslation,Xian Li,http://arxiv.org/pdf/2308.06259v2.pdf,2023-08-11,['cs.cl'],2308.06259v2.pdf,"  We present a scalable method to build a high quality instruction following
language model by automatically labelling human-written text with corresponding
instructions. Our approach, named instruction backtranslation, starts with a
language model finetuned on a small amount of seed data, and a given web
corpus. The seed model is used to construct training examples by generating
instruction prompts for web documents (self-augmentation), and then selecting
high quality examples from among these candidates (self-curation). This data is
then used to finetune a stronger model. Finetuning LLaMa on two iterations of
our approach yields a model that outperforms all other LLaMa-based models not
relying on distillation data on the Alpaca leaderboard, demonstrating highly
effective self-alignment.
"
Discrete Prompt Compression with Reinforcement Learning,Hoyoun Jung,http://arxiv.org/pdf/2308.08758v1.pdf,2023-08-17,"['cs.cl', 'cs.ai']",2308.08758v1.pdf,"  Instruction-tuned Language Models (LMs) are widely used by users to address
various problems with task-specific prompts. Constraints associated with the
context window length and computational costs encourage the development of
compressed prompts. Existing methods rely heavily on training embeddings, which
are designed to accommodate multiple token meanings. This presents challenges
in terms of interpretability, a fixed number of embedding tokens, reusability
across different LMs, and inapplicability when interacting with black-box APIs.
This study proposes prompt compression with reinforcement learning (PCRL), a
novel discrete prompt compression method that addresses these issues. PCRL
employs a computationally efficient policy network that directly edits prompts.
The PCRL training approach can be flexibly applied to various types of LMs,
including both decoder-only and encoder-decoder architectures, and can be trained
without gradient access to LMs or labeled data. PCRL achieves an average
reduction of 24.6% in token count across various instruction prompts while
preserving performance. Further, we demonstrate that the learned policy can be
transferred to larger LMs, and through various analyses, we aid the
understanding of token importance within prompts.
"
Casteist but Not Racist? Quantifying Disparities in Large Language Model  Bias between India and the West,Khyati Khandelwal,http://arxiv.org/pdf/2309.08573v1.pdf,2023-09-15,"['cs.cl', 'cs.cy']",2309.08573v1.pdf,"  Large Language Models (LLMs), now used daily by millions of users, can encode
societal biases, exposing their users to representational harms. A large body
of scholarship on LLM bias exists but it predominantly adopts a Western-centric
frame and attends comparatively less to bias levels and potential harms in the
Global South. In this paper, we quantify stereotypical bias in popular LLMs
according to an Indian-centric frame and compare bias levels between the Indian
and Western contexts. To do this, we develop a novel dataset which we call
Indian-BhED (Indian Bias Evaluation Dataset), containing stereotypical and
anti-stereotypical examples for caste and religion contexts. We find that the
majority of LLMs tested are strongly biased towards stereotypes in the Indian
context, especially as compared to the Western context. We finally investigate
Instruction Prompting as a simple intervention to mitigate such bias and find
that it significantly reduces both stereotypical and anti-stereotypical biases
in the majority of cases for GPT-3.5. The findings of this work highlight the
need for including more diverse voices when evaluating LLMs.
"
Harnessing Large Language Models' Empathetic Response Generation  Capabilities for Online Mental Health Counselling Support,Siyuan Brandon Loh,http://arxiv.org/pdf/2310.08017v1.pdf,2023-10-12,"['cs.cl', 'i.2']",2310.08017v1.pdf,"  Large Language Models (LLMs) have demonstrated remarkable performance across
various information-seeking and reasoning tasks. These computational systems
drive state-of-the-art dialogue systems, such as ChatGPT and Bard. They also
carry substantial promise in meeting the growing demands of mental health care,
albeit relatively unexplored. As such, this study sought to examine LLMs'
capability to generate empathetic responses in conversations that emulate those
in a mental health counselling setting. We selected five LLMs: version 3.5 and
version 4 of the Generative Pre-training (GPT), Vicuna FastChat-T5, Pathways
Language Model (PaLM) version 2, and Falcon-7B-Instruct. Based on a simple
instructional prompt, these models responded to utterances derived from the
EmpatheticDialogues (ED) dataset. Using three empathy-related metrics, we
compared their responses to those from traditional response generation dialogue
systems, which were fine-tuned on the ED dataset, along with human-generated
responses. Notably, we discovered that responses from the LLMs were remarkably
more empathetic in most scenarios. We position our findings in light of
catapulting advancements in creating empathetic conversational systems.
"
Few-shot Instruction Prompts for Pretrained Language Models to Detect  Social Biases,Shrimai Prabhumoye,http://arxiv.org/pdf/2112.07868v2.pdf,2021-12-15,"['cs.cl', 'cs.ai']",2112.07868v2.pdf,"  Detecting social bias in text is challenging due to nuance, subjectivity, and
difficulty in obtaining good quality labeled datasets at scale, especially
given the evolving nature of social biases and society. To address these
challenges, we propose a few-shot instruction-based method for prompting
pre-trained language models (LMs). We select a few class-balanced exemplars
from a small support repository that are closest to the query to be labeled in
the embedding space. We then provide the LM with an instruction that consists
of this subset of labeled exemplars, the query text to be classified, and a
definition of bias, and prompt it to make a decision. We demonstrate that large LMs used
in a few-shot context can detect different types of fine-grained biases with
similar and sometimes superior accuracy to fine-tuned models. We observe that
the largest 530B parameter model is significantly more effective in detecting
social bias compared to smaller models (achieving at least 13% improvement in
AUC metric compared to other models). It also maintains a high AUC (dropping
less than 2%) when the labeled repository is reduced to as few as $100$
samples. Large pretrained language models thus make it easier and quicker to
build new bias detectors.
"
"GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large  Language Models",Archiki Prasad,http://arxiv.org/pdf/2203.07281v2.pdf,2022-03-14,"['cs.cl', 'cs.ai', 'cs.lg']",2203.07281v2.pdf,"  Providing natural language instructions in prompts is a useful new paradigm
for improving task performance of large language models in a zero-shot setting.
Recent work has aimed to improve such prompts via manual rewriting or
gradient-based tuning. However, manual rewriting is time-consuming and requires
subjective interpretation, while gradient-based tuning can be extremely
computationally demanding for large models and may not be feasible for
API-based models. In this work, we introduce Gradient-free Instructional Prompt
Search (GrIPS), a gradient-free, edit-based search approach for improving task
instructions for large language models. GrIPS takes in instructions designed
for humans and automatically returns an improved, edited prompt, while allowing
for API-based tuning. With InstructGPT models, GrIPS improves the average task
performance by up to 4.30 percentage points on eight classification tasks from
the Natural Instructions dataset (with similar improvements for OPT, BLOOM, and
FLAN-T5). We see improvements for both instruction-only prompts and instruction
+ k-shot examples prompts. Notably, GrIPS outperforms manual rewriting and
purely example-based prompts while controlling for the available compute and
data budget. Further, performance of GrIPS is comparable to select
gradient-based tuning approaches. Qualitatively, we show our edits can simplify
instructions and at times make them incoherent but nonetheless improve
accuracy. Our code is available at: https://github.com/archiki/GrIPS
"
LINGUIST: Language Model Instruction Tuning to Generate Annotated  Utterances for Intent Classification and Slot Tagging,Andy Rosenbaum,http://arxiv.org/pdf/2209.09900v1.pdf,2022-09-20,"['cs.cl', 'cs.ai', 'cs.lg']",2209.09900v1.pdf,"  We present LINGUIST, a method for generating annotated data for Intent
Classification and Slot Tagging (IC+ST), via fine-tuning AlexaTM 5B, a
5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a
flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS
dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and
Example Extrapolation) by a wide margin, showing absolute improvement for the
target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In
the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST
out-performs a strong baseline of Machine Translation with Slot Alignment by
+4.14 points absolute on ST F1 Score across 6 languages, while matching
performance on IC. Finally, we verify our results on an internal large-scale
multilingual dataset for conversational agent IC+ST and show significant
improvements over a baseline which uses Back-Translation, Paraphrasing and Slot
Catalog Resampling. To our knowledge, we are the first to demonstrate
instruction fine-tuning of a large-scale seq2seq model to control the outputs
of multilingual intent- and slot-labeled data generation.
"
InferFix: End-to-End Program Repair with LLMs,Matthew Jin,http://arxiv.org/pdf/2303.07263v1.pdf,2023-03-13,['cs.se'],2303.07263v1.pdf,"  The software development life cycle is profoundly influenced by bugs: their
introduction, identification, and eventual resolution account for a significant
portion of software cost. This has motivated software engineering researchers
and practitioners to propose different approaches for automating the
identification and repair of software defects. Large language models have been
adapted to the program repair task through few-shot demonstration learning and
instruction prompting, treating this as an infilling task. However, these
models have only focused on learning general bug-fixing patterns for
uncategorized bugs mined from public repositories. In this paper, we propose
InferFix: a transformer-based program repair framework paired with a
state-of-the-art static analyzer to fix critical security and performance bugs.
InferFix combines a Retriever -- transformer encoder model pretrained via
contrastive learning objective, which aims at searching for semantically
equivalent bugs and corresponding fixes; and a Generator -- a large language
model (Codex Cushman) finetuned on supervised bug-fix data with prompts
augmented via bug type annotations and semantically similar fixes retrieved
from an external non-parametric memory. To train and evaluate our approach, we
curated InferredBugs, a novel, metadata-rich dataset of bugs extracted by
executing the Infer static analyzer on the change histories of thousands of
Java and C# repositories. Our evaluation demonstrates that InferFix outperforms
strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C#
and 76.8% in Java. We discuss the deployment of InferFix alongside Infer at
Microsoft which offers an end-to-end solution for detection, classification,
and localization of bugs, as well as fixing and validation of candidate
patches, integrated in the continuous integration pipeline to automate the
software development workflow.
"
Text-based Person Search without Parallel Image-Text Data,Yang Bai,http://arxiv.org/pdf/2305.12964v2.pdf,2023-05-22,['cs.cv'],2305.12964v2.pdf,"  Text-based person search (TBPS) aims to retrieve the images of the target
person from a large image gallery based on a given natural language
description. Existing methods are dominated by training models with parallel
image-text pairs, which are very costly to collect. In this paper, we make the
first attempt to explore TBPS without parallel image-text data ($\mu$-TBPS), in
which only non-parallel images and texts, or even image-only data, can be
adopted. Towards this end, we propose a two-stage framework,
generation-then-retrieval (GTR), to first generate the corresponding pseudo
text for each image and then perform the retrieval in a supervised manner. In
the generation stage, we propose a fine-grained image captioning strategy to
obtain an enriched description of the person image, which firstly utilizes a
set of instruction prompts to activate the off-the-shelf pretrained
vision-language model to capture and generate fine-grained person attributes,
and then converts the extracted attributes into a textual description via the
finetuned large language model or the hand-crafted template. In the retrieval
stage, considering the noise that the generated texts introduce into model
training, we develop a confidence score-based training scheme that enables more
reliable texts to contribute more during training. Experimental results on
multiple TBPS benchmarks (i.e., CUHK-PEDES, ICFG-PEDES and RSTPReid) show that
the proposed GTR can achieve a promising performance without relying on
parallel image-text data.
"
EDM3: Event Detection as Multi-task Text Generation,Ujjwala Anantheswaran,http://arxiv.org/pdf/2305.16357v1.pdf,2023-05-25,['cs.cl'],2305.16357v1.pdf,"  Event detection refers to identifying event occurrences in a text and
comprises two subtasks: event identification and classification. We present
EDM3, a novel approach for Event Detection that formulates three generative
tasks: identification, classification, and combined detection. We show that
EDM3 helps to learn transferable knowledge that can be leveraged to perform
Event Detection and its subtasks concurrently, mitigating the error propagation
inherent in pipelined approaches. Unlike previous dataset- or domain-specific
approaches, EDM3 utilizes the existing knowledge of language models, allowing
it to be trained over any classification schema. We evaluate EDM3 on multiple
event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3
outperforms 1) single-task performance by 8.4% on average and 2) multi-task
performance without instructional prompts by 2.4% on average. We obtain SOTA
results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other
datasets. We analyze our approach to demonstrate its efficacy in low-resource
and multi-sentence settings. We also show the effectiveness of this approach on
non-standard event configurations such as multi-word and multi-class event
triggers. Overall, our results show that EDM3 is a promising approach for Event
Detection that has the potential for real-world applications.
"
VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and  Dataset,Sihan Chen,http://arxiv.org/pdf/2305.18500v2.pdf,2023-05-29,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg', 'eess.as']",2305.18500v2.pdf,"  Vision and text have been fully explored in contemporary video-text
foundational models, while other modalities such as audio and subtitles in
videos have not received sufficient attention. In this paper, we seek to
establish connections between multi-modality video tracks, including Vision,
Audio, and Subtitle, and Text by exploring an automatically generated
large-scale omni-modality video caption dataset called VAST-27M. Specifically,
we first collect 27 million open-domain video clips and separately train a
vision and an audio captioner to generate vision and audio captions. Then, we
employ an off-the-shelf Large Language Model (LLM) to integrate the generated
captions, together with subtitles and instructional prompts into omni-modality
captions. Based on the proposed VAST-27M dataset, we train an omni-modality
video-text foundational model named VAST, which can perceive and process
vision, audio, and subtitle modalities from video, and better support various
tasks including vision-text, audio-text, and multi-modal video-text tasks
(retrieval, captioning and QA). Extensive experiments have been conducted to
demonstrate the effectiveness of our proposed VAST-27M corpus and VAST
foundation model. VAST achieves 22 new state-of-the-art results on various
cross-modality benchmarks. Code, model and dataset will be released at
https://github.com/TXH-mercury/VAST.
"
Mondrian: Prompt Abstraction Attack Against Large Language Models for  Cheaper API Pricing,Wai Man Si,http://arxiv.org/pdf/2308.03558v1.pdf,2023-08-07,"['cs.cr', 'cs.cl']",2308.03558v1.pdf,"  The Machine Learning as a Service (MLaaS) market is rapidly expanding and
becoming more mature. For example, OpenAI's ChatGPT is an advanced large
language model (LLM) that generates responses for various queries with
associated fees. Although these models can deliver satisfactory performance,
they are far from perfect. Researchers have long studied the vulnerabilities
and limitations of LLMs, such as adversarial attacks and model toxicity.
Inevitably, commercial ML models are also not exempt from such issues, which
can be problematic as MLaaS continues to grow. In this paper, we discover a new
attack strategy against LLM APIs, namely the prompt abstraction attack.
Specifically, we propose Mondrian, a simple and straightforward method that
abstracts sentences, which can lower the cost of using LLM APIs. In this
approach, the adversary first creates a pseudo API (with a lower established
price) to serve as the proxy of the target API (with a higher established
price). Next, the pseudo API leverages Mondrian to modify the user query,
obtain the abstracted response from the target API, and forward it back to the
end user. Our results show that Mondrian successfully reduces user queries'
token length ranging from 13% to 23% across various tasks, including text
classification, generation, and question answering. Meanwhile, these abstracted
queries do not significantly affect the utility of task-specific and general
language models like ChatGPT. Mondrian also reduces instruction prompts' token
length by at least 11% without compromising output quality. As a result, the
prompt abstraction attack enables the adversary to profit without bearing the
cost of API development and deployment.
"
Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive  Synthesis using Large Language Models and Satisfiability Solving,Sumit Kumar Jha,http://arxiv.org/pdf/2309.16436v1.pdf,2023-09-28,"['cs.ai', 'cs.lo']",2309.16436v1.pdf,"  Generative large language models (LLMs) with instruct training such as GPT-4
can follow human-provided instruction prompts and generate human-like responses
to these prompts. Apart from natural language responses, they have also been
found to be effective at generating formal artifacts such as code, plans, and
logical specifications from natural language prompts. Despite their remarkably
improved accuracy, these models are still known to produce factually incorrect
or contextually inappropriate, yet syntactically coherent, results - a
phenomenon often referred to as hallucination. This limitation makes it
difficult to use these models to synthesize formal artifacts that are used in
safety-critical applications. Unlike tasks such as text summarization and
question-answering, bugs in code, plan, and other formal artifacts produced by
LLMs can be catastrophic. We posit that we can use the satisfiability modulo
theory (SMT) solvers as deductive reasoning engines to analyze the generated
solutions from the LLMs, produce counterexamples when the solutions are
incorrect, and provide that feedback to the LLMs exploiting the dialog
capability of instruct-trained LLMs. This interaction between inductive LLMs
and deductive SMT solvers can iteratively steer the LLM to generate the correct
response. In our experiments, we use planning over the domain of blocks as our
synthesis task for evaluating our approach. We use GPT-4, GPT3.5 Turbo,
Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our
method allows the user to communicate the planning problem in natural language;
even the formulation of queries to SMT solvers is automatically generated from
natural language. Thus, the proposed technique can enable non-expert users to
describe their problems in natural language, and the combination of LLMs and
SMT solvers can produce provably correct solutions.
"
Benchmarking a foundation LLM on its ability to re-label structure names  in accordance with the AAPM TG-263 report,Jason Holmes,http://arxiv.org/pdf/2310.03874v1.pdf,2023-10-05,"['physics.med-ph', 'cs.cl']",2310.03874v1.pdf,"  Purpose: To introduce the concept of using large language models (LLMs) to
re-label structure names in accordance with the American Association of
Physicists in Medicine (AAPM) Task Group (TG)-263 standard, and to establish a
benchmark for future studies to reference.
  Methods and Materials: The Generative Pre-trained Transformer (GPT)-4
application programming interface (API) was implemented as a Digital Imaging
and Communications in Medicine (DICOM) storage server, which upon receiving a
structure set DICOM file, prompts GPT-4 to re-label the structure names of both
target volumes and normal tissues according to the AAPM TG-263. Three disease
sites, prostate, head and neck, and thorax were selected for evaluation. For
each disease site category, 150 patients were randomly selected for manually
tuning the instruction prompt (in batches of 50) and 50 patients were randomly
selected for evaluation. Structure names that were considered were those that
were most likely to be relevant for studies utilizing structure contours for
many patients.
  Results: The overall re-labeling accuracy of both target volumes and normal
tissues for prostate, head and neck, and thorax cases was 96.0%, 98.5%, and
96.9% respectively. Re-labeling of target volumes was less accurate on average
except for prostate - 100%, 93.1%, and 91.1% respectively.
  Conclusions: Given the accuracy of GPT-4 in re-labeling structure names of
both target volumes and normal tissues as presented in this work, LLMs are
poised to be the preferred method for standardizing structure names in
radiation oncology, especially considering the rapid advancements in LLM
capabilities that are likely to continue.
"
What Makes Pre-trained Language Models Better Zero-shot Learners?,Jinghui Lu,http://arxiv.org/pdf/2209.15206v3.pdf,2022-09-30,"['cs.cl', 'cs.ai']",2209.15206v3.pdf,"  Current methods for prompt learning in zero-shot scenarios widely rely on a
development set with sufficient human-annotated data to select the
best-performing prompt template a posteriori. This is not ideal because in a
real-world zero-shot scenario of practical relevance, no labelled data is
available. Thus, we propose a simple yet effective method for screening
reasonable prompt templates in zero-shot text classification: Perplexity
Selection (Perplection). We hypothesize that language discrepancy can be used
to measure the efficacy of prompt templates, and thereby develop a
substantiated perplexity-based scheme allowing for forecasting the performance
of prompt templates in advance. Experiments show that our method leads to
improved prediction performance in a realistic zero-shot setting, eliminating
the need for any labelled examples.
"
IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template  Reconstruction Strategy for ComVE,Luxi Xing,http://arxiv.org/pdf/2007.00924v1.pdf,2020-07-02,['cs.cl'],2007.00924v1.pdf,"  This paper introduces our systems for the first two subtasks of SemEval
Task4: Commonsense Validation and Explanation. To clarify the intention for
judgment and inject contrastive information for selection, we propose the input
reconstruction strategy with prompt templates. Specifically, we formalize the
subtasks into the multiple-choice question answering format and construct the
input with the prompt templates; the final prediction of question answering is
then taken as the result of the subtasks. Experimental results show that our
approaches achieve significant performance gains compared with the baseline
systems. Our approaches secure third place on both official test sets of the
first two subtasks, with accuracies of 96.4 and 94.3, respectively.
"
GraphPrompt: Biomedical Entity Normalization Using Graph-based Prompt  Templates,Jiayou Zhang,http://arxiv.org/pdf/2112.03002v1.pdf,2021-11-13,"['cs.cl', 'cs.ai']",2112.03002v1.pdf,"  Biomedical entity normalization unifies the language across biomedical
experiments and studies, and further enables us to obtain a holistic view of
life sciences. Current approaches mainly study the normalization of more
standardized entities such as diseases and drugs, while disregarding the more
ambiguous but crucial entities such as pathways, functions and cell types,
hindering their real-world applications. To achieve biomedical entity
normalization on these under-explored entities, we first introduce an
expert-curated dataset OBO-syn encompassing 70 different types of entities and
2 million curated entity-synonym pairs. To utilize the unique graph structure
in this dataset, we propose GraphPrompt, a prompt-based learning approach that
creates prompt templates according to the graphs. GraphPrompt obtained 41.0%
and 29.9% improvement on zero-shot and few-shot settings respectively,
indicating the effectiveness of these graph-based prompt templates. We envision
that our method GraphPrompt and OBO-syn dataset can be broadly applied to
graph-based NLP tasks, and serve as the basis for analyzing diverse and
accumulating biomedical data.
"
CCPrompt: Counterfactual Contrastive Prompt-Tuning for Many-Class  Classification,Yang Li,http://arxiv.org/pdf/2211.05987v1.pdf,2022-11-11,['cs.cl'],2211.05987v1.pdf,"  With the success of the prompt-tuning paradigm in Natural Language Processing
(NLP), various prompt templates have been proposed to further stimulate
specific knowledge for serving downstream tasks, e.g., machine translation,
text generation, relation extraction, and so on. Existing prompt templates are
mainly shared among all training samples with the information of task
description. However, training samples are quite diverse. The sharing task
description is unable to stimulate the unique task-related information in each
training sample, especially for tasks with the finite-label space. To exploit
the unique task-related information, we imitate the human decision process
which aims to find the contrastive attributes between the objective factual and
their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual
\textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class
classification, e.g., relation classification, topic classification, and entity
typing. Compared with simple classification tasks, these tasks have more
complex finite label spaces and place more stringent demands on prompts. First, we
prune the finite label space to construct fact-counterfactual pairs. Then, we
exploit the contrastive attributes by projecting training instances onto every
fact-counterfactual pair. We further set up global prototypes corresponding
with all contrastive attributes for selecting valid contrastive attributes as
additional tokens in the prompt template. Finally, a simple Siamese
representation learning is employed to enhance the robustness of the model. We
conduct experiments on relation classification, topic classification, and
entity typing tasks in both fully supervised and few-shot settings. The
results indicate that our model outperforms prior baselines.
"
Low-Resource Multi-Granularity Academic Function Recognition Based on  Multiple Prompt Knowledge,Jiawei Liu,http://arxiv.org/pdf/2305.03287v1.pdf,2023-05-05,"['cs.cl', 'cs.ai']",2305.03287v1.pdf,"  Fine-tuning pre-trained language models (PLMs), e.g., SciBERT, generally
requires large amounts of annotated data to achieve state-of-the-art
performance on a range of NLP tasks in the scientific domain. However,
obtaining fine-tuning data for scientific NLP tasks is still challenging and
expensive. Inspired by recent advancement in prompt learning, in this paper, we
propose the Mix Prompt Tuning (MPT), which is a semi-supervised method to
alleviate the dependence on annotated data and improve the performance of
multi-granularity academic function recognition tasks with a small number of
labeled examples. Specifically, the proposed method provides multi-perspective
representations by combining manual prompt templates with automatically learned
continuous prompt templates to help the given academic function recognition
task take full advantage of knowledge in PLMs. Based on these prompt templates
and the fine-tuned PLM, a large number of pseudo labels are assigned to the
unlabeled examples. Finally, we fine-tune the PLM using the pseudo training
set. We evaluate our method on three academic function recognition tasks of
different granularity including the citation function, the abstract sentence
function, and the keyword function, with datasets from computer science domain
and biomedical domain. Extensive experiments demonstrate the effectiveness of
our method and statistically significant improvements against strong baselines.
In particular, it achieves an average increase of 5% in Macro-F1 score compared
with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised
methods under low-resource settings. In addition, MPT is a general method that
can be easily applied to other low-resource scientific classification tasks.
"
AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models,Jan Hendrik Metzen,http://arxiv.org/pdf/2309.16414v2.pdf,2023-09-28,"['cs.cv', 'cs.ai', 'cs.lg']",2309.16414v2.pdf,"  Classifiers built upon vision-language models such as CLIP have shown
remarkable zero-shot performance across a broad range of image classification
tasks. Prior work has studied different ways of automatically creating
descriptor sets for every class based on prompt templates, ranging from
manually engineered templates over templates obtained from a large language
model to templates built from random words and characters. Up until now,
deriving zero-shot classifiers from the respective encoded class descriptors
has remained nearly unchanged, i.e., classify to the class that maximizes
cosine similarity between its averaged encoded class descriptors and the image
encoding. However, weighing all class descriptors equally can be suboptimal
when certain descriptors match visual clues on a given image better than
others. In this work, we propose AutoCLIP, a method for auto-tuning zero-shot
classifiers. AutoCLIP tunes per-image weights to each prompt template at
inference time, based on statistics of class descriptor-image similarities.
AutoCLIP is fully unsupervised, has very low computational overhead, and can be
easily implemented in few lines of code. We show that AutoCLIP outperforms
baselines across a broad range of vision-language models, datasets, and prompt
templates, consistently and by up to 3 percentage points in accuracy.
"
Position-based Prompting for Health Outcome Generation,M. Abaho,http://arxiv.org/pdf/2204.03489v1.pdf,2022-03-30,"['cs.cl', 'cs.lg']",2204.03489v1.pdf,"  Probing Pre-trained Language Models (PLMs) using prompts has indirectly
implied that language models (LMs) can be treated as knowledge bases. To this
end, this phenomenon has proven effective especially when these LMs are
fine-tuned not only on data of a specific domain but also to the style or
linguistic pattern of the prompts themselves. We observe that satisfying a
particular linguistic pattern in prompts is an unsustainable constraint that
unnecessarily lengthens the probing task, especially because prompts are often manually
designed and the range of possible prompt template patterns can vary depending
on the prompting objective and domain. We therefore explore an idea of using a
position-attention mechanism to capture positional information of each word in
a prompt relative to the mask to be filled, hence avoiding the need to
reconstruct prompts when the prompt's linguistic pattern changes. Using our
approach, we demonstrate the ability of eliciting answers to rare prompt
templates (in a case study on health outcome generation) such as Postfix and
Mixed patterns whose missing information is respectively at the start and in
multiple random places of the prompt. More so, using various biomedical PLMs,
our approach consistently outperforms a baseline in which the default mask
language model (MLM) representation is used to predict masked tokens.
"
Prompting Large Language Models With the Socratic Method,Edward Y. Chang,http://arxiv.org/pdf/2303.08769v2.pdf,2023-02-17,"['cs.lg', 'i.2.7']",2303.08769v2.pdf,"  This paper presents a systematic approach to using the Socratic method in
developing prompt templates that effectively interact with large language
models, including GPT-3. Various methods are examined, and those that yield
precise answers and justifications while fostering creativity and imagination
to enhance creative writing are identified. Techniques such as {\em
definition}, {\em elenchus}, {\em dialectic}, {\em maieutics}, {\em
generalization}, and {\em counterfactual reasoning} are discussed for their
application in engineering prompt templates and their connections to inductive,
deductive, and abductive reasoning. Through examples, the effectiveness of
these dialogue and reasoning methods is demonstrated. An interesting
observation is made that when the task's goal and user intent are conveyed to
GPT-3 via ChatGPT before the start of a dialogue, the large language model
seems to connect to the external context expressed in the intent and perform
more effectively.
"
Prompt Learning for News Recommendation,Zizhuo Zhang,http://arxiv.org/pdf/2304.05263v1.pdf,2023-04-11,"['cs.ir', 'cs.ai', 'h.3.3']",2304.05263v1.pdf,"  Some recent \textit{news recommendation} (NR) methods introduce a Pre-trained
Language Model (PLM) to encode news representation by following the vanilla
pre-train and fine-tune paradigm with carefully-designed
recommendation-specific neural networks and objective functions. Due to the
inconsistent task objective with that of PLM, we argue that their modeling
paradigm has not well exploited the abundant semantic information and
linguistic knowledge embedded in the pre-training process. Recently, the
pre-train, prompt, and predict paradigm, called \textit{prompt learning}, has
achieved many successes in natural language processing domain. In this paper,
we make the first trial of this new paradigm to develop a \textit{Prompt
Learning for News Recommendation} (Prompt4NR) framework, which transforms the
task of predicting whether a user would click a candidate news article as a cloze-style
mask-prediction task. Specifically, we design a series of prompt templates,
including discrete, continuous, and hybrid templates, and construct their
corresponding answer spaces to examine the proposed Prompt4NR framework.
Furthermore, we use the prompt ensembling to integrate predictions from
multiple prompt templates. Extensive experiments on the MIND dataset validate
the effectiveness of our Prompt4NR with a set of new benchmark results.
"
Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot  Classification,Han Wang,http://arxiv.org/pdf/2204.06305v2.pdf,2022-04-13,"['cs.cl', 'cs.ai', 'cs.lg']",2204.06305v2.pdf,"  Prompt-based learning (i.e., prompting) is an emerging paradigm for
exploiting knowledge learned by a pretrained language model. In this paper, we
propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method
to automatically select label mappings for few-shot text classification with
prompting. Our method exploits one-to-many label mappings and a
statistics-based algorithm to select label mappings given a prompt template.
Our experiments demonstrate that AMuLaP achieves competitive performance on the
GLUE benchmark without human effort or external resources.
"
CoCoMo: Computational Consciousness Modeling for Generative and Ethical  AI,Edward Y. Chang,http://arxiv.org/pdf/2304.02438v2.pdf,2023-03-17,"['cs.oh', 'i.2.7']",2304.02438v2.pdf,"  The CoCoMo model proposes a computational solution to the challenge of
incorporating ethical and emotional intelligence considerations into AI
systems, with the aim of creating AI agents that combine knowledge with
compassion. To achieve this goal, CoCoMo prioritizes fairness, beneficence,
non-maleficence, empathy, adaptability, transparency, and critical and
exploratory thinking abilities. The model employs consciousness modeling,
reinforcement learning, and prompt template formulation to support these
desired traits. By incorporating ethical and emotional intelligence
considerations, a generative AI model can potentially lead to improved
fairness, reduced toxicity, and increased reliability.
"
PromptNER: Prompt Locating and Typing for Named Entity Recognition,Yongliang Shen,http://arxiv.org/pdf/2305.17104v1.pdf,2023-05-26,['cs.cl'],2305.17104v1.pdf,"  Prompt learning is a new paradigm for utilizing pre-trained language models
and has achieved great success in many tasks. To adopt prompt learning in the
NER task, two kinds of methods have been explored from a pair of symmetric
perspectives, populating the template by enumerating spans to predict their
entity types or constructing type-specific prompts to locate entities. However,
these methods not only require a multi-round prompting procedure with high time
overhead and computational cost, but also require elaborate prompt templates
that are difficult to apply in practical scenarios. In this paper, we unify
entity locating and entity typing into prompt learning, and design a dual-slot
multi-prompt template with the position slot and type slot to prompt locating
and typing respectively. Multiple prompts can be input to the model
simultaneously, and then the model extracts all entities by parallel
predictions on the slots. To assign labels for the slots during training, we
design a dynamic template filling mechanism that uses the extended bipartite
graph matching between prompts and the ground-truth entities. We conduct
experiments in various settings, including resource-rich flat and nested NER
datasets and low-resource in-domain and cross-domain datasets. Experimental
results show that the proposed model achieves a significant performance
improvement, especially in the cross-domain few-shot setting, which outperforms
the state-of-the-art model by +7.7% on average.
"
Large Language and Text-to-3D Models for Engineering Design Optimization,Thiago Rios,http://arxiv.org/pdf/2307.01230v1.pdf,2023-07-03,"['cs.cl', 'cs.lg', 'cs.ne']",2307.01230v1.pdf,"  The current advances in generative AI for learning large neural network
models with the capability to produce essays, images, music and even 3D assets
from text prompts create opportunities for a manifold of disciplines. In the
present paper, we study the potential of deep text-to-3D models in the
engineering domain, with a focus on the opportunities and challenges of integrating
and interacting with 3D assets in computational simulation-based design
optimization. In contrast to traditional design optimization of 3D geometries
that often searches for the optimum designs using numerical representations,
such as B-Spline surface or deformation parameters in vehicle aerodynamic
optimization, natural language challenges the optimization framework by
requiring a different interpretation of variation operators while at the same
time potentially easing and motivating human user interaction. Here, we propose and
realize a fully automated evolutionary design optimization framework using
Shap-E, a recently published text-to-3D asset network by OpenAI, in the context
of aerodynamic vehicle optimization. For representing text prompts in the
evolutionary optimization, we evaluate (a) a bag-of-words approach based on
prompt templates and Wordnet samples, and (b) a tokenisation approach based on
prompt templates and the byte pair encoding method from GPT4. Our main findings
from the optimizations indicate that, first, it is important to ensure that the
designs generated from prompts are within the object class of application, i.e.
diverse and novel designs need to be realistic, and, second, that more research
is required to develop methods where the strength of text prompt variations and
the resulting variations of the 3D designs share causal relations to some
degree to improve the optimization.
"
Zero-shot information extraction from radiological reports using ChatGPT,Danqing Hu,http://arxiv.org/pdf/2309.01398v2.pdf,2023-09-04,['cs.cl'],2309.01398v2.pdf,"  Electronic health records contain an enormous amount of valuable information,
but many are recorded in free text. Information extraction is the strategy to
transform the sequence of characters into structured data, which can be
employed for secondary analysis. However, the traditional information
extraction components, such as named entity recognition and relation
extraction, require annotated data to optimize the model parameters, which has
become one of the major bottlenecks in building information extraction systems.
With the large language models achieving good performances on various
downstream NLP tasks without parameter tuning, it becomes possible to use large
language models for zero-shot information extraction. In this study, we aim to
explore whether the most popular large language model, ChatGPT, can extract
useful information from the radiological reports. We first design the prompt
template for the interested information in the CT reports. Then, we generate
the prompts by combining the prompt template with the CT reports as the inputs
of ChatGPT to obtain the responses. A post-processing module is developed to
transform the responses into structured extraction results. We conducted the
experiments with 847 CT reports collected from Peking University Cancer
Hospital. The experimental results indicate that ChatGPT can achieve
competitive performances for some extraction tasks compared with the baseline
information extraction system, but some limitations need to be further
improved.
"
Can Language Models be Biomedical Knowledge Bases?,Mujeen Sung,http://arxiv.org/pdf/2109.07154v1.pdf,2021-09-15,['cs.cl'],2109.07154v1.pdf,"  Pre-trained language models (LMs) have become ubiquitous in solving various
natural language processing (NLP) tasks. There has been increasing interest in
what knowledge these LMs contain and how we can extract that knowledge,
treating LMs as knowledge bases (KBs). While there has been much work on
probing LMs in the general domain, there has been little attention to whether
these powerful LMs can be used as domain-specific KBs. To this end, we create
the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge
triples for probing biomedical LMs. We find that biomedical LMs with recently
proposed probing methods can achieve up to 18.51% Acc@5 on retrieving
biomedical knowledge. Although this seems promising given the task difficulty,
our detailed analyses reveal that most predictions are highly correlated with
prompt templates without any subjects, hence producing similar results on each
relation and hindering their capabilities to be used as domain-specific KBs. We
hope that BioLAMA can serve as a challenging benchmark for biomedical factual
probing.
"
HealthPrompt: A Zero-shot Learning Paradigm for Clinical Natural  Language Processing,Sonish Sivarajkumar,http://arxiv.org/pdf/2203.05061v1.pdf,2022-03-09,"['cs.cl', 'cs.ai', 'cs.ir']",2203.05061v1.pdf,"  Deep learning algorithms are dependent on the availability of large-scale
annotated clinical text datasets. The lack of such publicly available datasets
is the biggest bottleneck for the development of clinical Natural Language
Processing(NLP) systems. Zero-Shot Learning(ZSL) refers to the use of deep
learning models to classify instances from new classes of which no training
data have been seen before. Prompt-based learning is an emerging ZSL technique
where we define task-based templates for NLP tasks. We developed a novel
prompt-based clinical NLP framework called HealthPrompt and applied the
paradigm of prompt-based learning on clinical texts. In this technique, rather
than fine-tuning a Pre-trained Language Model(PLM), the task definitions are
tuned by defining a prompt template. We performed an in-depth analysis of
HealthPrompt on six different PLMs in a no-data setting. Our experiments prove
that prompts effectively capture the context of clinical texts and perform
remarkably well without any training data.
"
RelationPrompt: Leveraging Prompts to Generate Synthetic Data for  Zero-Shot Relation Triplet Extraction,Yew Ken Chia,http://arxiv.org/pdf/2203.09101v1.pdf,2022-03-17,['cs.cl'],2203.09101v1.pdf,"  Despite the importance of relation extraction in building and representing
knowledge, less research is focused on generalizing to unseen relations types.
We introduce the task setting of Zero-Shot Relation Triplet Extraction
(ZeroRTE) to encourage further research in low-resource relation extraction
methods. Given an input sentence, each extracted triplet consists of the head
entity, relation label, and tail entity where the relation label is not seen at
the training stage. To solve ZeroRTE, we propose to synthesize relation
examples by prompting language models to generate structured texts. Concretely,
we unify language model prompts and structured text approaches to design a
structured prompt template for generating synthetic relation samples when
conditioning on relation label prompts (RelationPrompt). To overcome the
limitation for extracting multiple relation triplets in a sentence, we design a
novel Triplet Search Decoding method. Experiments on FewRel and Wiki-ZSL
datasets show the efficacy of RelationPrompt for the ZeroRTE task and zero-shot
relation classification. Our code and data are available at
github.com/declare-lab/RelationPrompt.
"
CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument  Extraction,Jiaju Lin,http://arxiv.org/pdf/2205.00498v2.pdf,2022-05-01,['cs.cl'],2205.00498v2.pdf,"  Implicit event argument extraction (EAE) aims to identify arguments that
could scatter over the document. Most previous work focuses on learning the
direct relations between arguments and the given trigger, while the implicit
relations with long-range dependency are not well studied. Moreover, recent
neural network based approaches rely on a large amount of labeled data for
training, which is unavailable due to the high labelling cost. In this paper,
we propose a Curriculum learning based Prompt tuning (CUP) approach, which
resolves implicit EAE by four learning stages. The stages are defined according
to the relations with the trigger node in a semantic graph, which well captures
the long-range dependency between arguments and the trigger. In addition, we
integrate a prompt-based encoder-decoder model to elicit related knowledge from
pre-trained language models (PLMs) in each stage, where the prompt templates
are adapted with the learning progress to enhance the reasoning for arguments.
Experimental results on two well-known benchmark datasets show the great
advantages of our proposed approach. In particular, we outperform the
state-of-the-art models in both fully-supervised and low-data scenarios.
"
Let Me Check the Examples: Enhancing Demonstration Learning via Explicit  Imitation,Sirui Wang,http://arxiv.org/pdf/2209.00455v1.pdf,2022-08-31,"['cs.lg', 'cs.ai']",2209.00455v1.pdf,"  Demonstration learning aims to guide prompt prediction by providing
answered demonstrations in few-shot settings. Despite achieving promising
results, existing work only concatenates the answered examples as
demonstrations to the prompt template (including the raw context) without any
additional operation, neglecting the prompt-demonstration dependencies.
Besides, prior research found that randomly replacing the labels of
demonstrations marginally hurts performance, illustrating that the model could
not properly learn the knowledge brought by the demonstrations. Inspired by the
human learning process, in this paper, we introduce Imitation DEMOnstration
Learning (Imitation-Demo) to strengthen demonstration learning via explicitly
imitating human review behaviour, which includes: (1) a contrastive learning
mechanism to concentrate on similar demonstrations and (2) a demonstration-label
re-prediction method to consolidate known knowledge. Experimental results show
that our proposed method achieves state-of-the-art performance on 11 out of 14
classification corpora. Further studies also show that Imitation-Demo
strengthens the association between prompts and demonstrations, which could
provide a basis for exploring how demonstration learning works.
"
A Few-shot Approach to Resume Information Extraction via Prompts,Chengguang Gan,http://arxiv.org/pdf/2209.09450v2.pdf,2022-09-20,['cs.cl'],2209.09450v2.pdf,"  Prompt learning's fine-tuning performance on text classification tasks has
attracted attention in the NLP community. This paper applies it to resume information
extraction, improving existing methods for this task. We created manual
templates and verbalizers tailored to resume texts and compared the performance
of Masked Language Model (MLM) and Seq2Seq PLMs. Also, we enhanced the
verbalizer design for Knowledgeable Prompt-tuning, contributing to prompt
template design across NLP tasks. We present the Manual Knowledgeable
Verbalizer (MKV), a rule for constructing verbalizers for specific
applications. Our tests show that MKV rules yield more effective, robust
templates and verbalizers than existing methods. Our MKV approach resolved
sample imbalance, surpassing current automatic prompt methods. This study
underscores the value of tailored prompt learning for resume extraction,
stressing the importance of custom-designed templates and verbalizers.
"
Distilling Task-specific Logical Rules from Large Pre-trained Models,Tao Chen,http://arxiv.org/pdf/2210.02768v1.pdf,2022-10-06,['cs.cl'],2210.02768v1.pdf,"  Logical rules, both transferable and explainable, are widely used as weakly
supervised signals for many downstream tasks such as named entity tagging. To
reduce the human effort of writing rules, previous researchers adopt an
iterative approach to automatically learn logical rules from several seed
rules. However, obtaining more seed rules can only be accomplished by extra
human annotation with heavy costs. Limited by the size and quality of the seed
rules, the model performance of previous systems is bounded. In this paper, we
develop a novel framework STREAM to distill task-specific logical rules from
large pre-trained models. Specifically, we borrow recent prompt-based language
models as the knowledge expert to yield initial seed rules, and, based on the
resulting high-quality instance pool that acts as an intermediary, we keep
teaching the expert to fit our task and learning task-specific logical rules.
Experiments on three public named entity tagging benchmarks demonstrate the
effectiveness of our proposed framework. With several predefined prompt
templates, our system has gained significant improvements over previous
state-of-the-art methods.
"
CLIP model is an Efficient Continual Learner,Vishal Thengane,http://arxiv.org/pdf/2210.03114v1.pdf,2022-10-06,['cs.cv'],2210.03114v1.pdf,"  The continual learning setting aims to learn new tasks over time without
forgetting the previous ones. The literature reports several significant
efforts to tackle this problem with limited or no access to previous task data.
Among such efforts, typical solutions offer sophisticated techniques involving
memory replay, knowledge distillation, model regularization, and dynamic
network expansion. The resulting methods have a retraining cost at each
learning task, dedicated memory requirements, and setting-specific design
choices. In this work, we show that a frozen CLIP (Contrastive Language-Image
Pretraining) model offers astounding continual learning performance without any
fine-tuning (zero-shot evaluation). We evaluate CLIP under a variety of
settings including class-incremental, domain-incremental and task-agnostic
incremental learning on five popular benchmarks (ImageNet-100 & 1K, CORe50,
CIFAR-100, and TinyImageNet). Without any bells and whistles, the CLIP model
outperforms the state-of-the-art continual learning approaches in the majority
of the settings. We show the effect on the CLIP model's performance by varying
text inputs with simple prompt templates. To the best of our knowledge, this is
the first work to report the CLIP zero-shot performance in a continual setting.
We advocate the use of this strong yet embarrassingly simple baseline for
future comparisons in the continual learning tasks.
"
A Unified Framework for Multi-intent Spoken Language Understanding with  prompting,Feifan Song,http://arxiv.org/pdf/2210.03337v1.pdf,2022-10-07,"['cs.cl', 'cs.ai']",2210.03337v1.pdf,"  Multi-intent Spoken Language Understanding has great potential for widespread
implementation. Jointly modeling Intent Detection (ID) and Slot Filling (SF)
provides a channel to exploit the correlation between intents and slots.
However, current approaches are apt to formulate these two sub-tasks
differently, which leads to two issues: 1) It hinders models from effective
extraction of shared features. 2) Overly complicated structures are introduced to
enhance expressiveness while harming the interpretability of the
frameworks. In this work, we describe a Prompt-based Spoken Language
Understanding (PromptSLU) framework, to intuitively unify two sub-tasks into
the same form by offering a common pre-trained Seq2Seq model. In detail, ID and
SF are completed by concisely filling the utterance into task-specific prompt
templates as input, and sharing output formats of key-value pairs sequence.
Furthermore, variable intents are predicted first, then naturally embedded into
prompts to guide slot-value pairs inference from a semantic perspective.
Finally, we are inspired by prevalent multi-task learning to introduce an
auxiliary sub-task, which helps to learn relationships among provided labels.
Experiment results show that our framework outperforms several state-of-the-art
baselines on two public datasets.
"
UniHD at TSAR-2022 Shared Task: Is Compute All We Need for Lexical  Simplification?,Dennis Aumiller,http://arxiv.org/pdf/2301.01764v2.pdf,2023-01-04,['cs.cl'],2301.01764v2.pdf,"  Previous state-of-the-art models for lexical simplification consist of
complex pipelines with several components, each of which requires deep
technical knowledge and fine-tuned interaction to achieve its full potential.
As an alternative, we describe a frustratingly simple pipeline based on
prompted GPT-3 responses, beating competing approaches by a wide margin in
settings with few training instances. Our best-performing submission to the
English language track of the TSAR-2022 shared task consists of an ``ensemble''
of six different prompt templates with varying context levels. As a
late-breaking result, we further detail a language transfer technique that
allows simplification in languages other than English. Applied to the Spanish
and Portuguese subset, we achieve state-of-the-art results with only minor
modification to the original prompts. Aside from detailing the implementation
and setup, we spend the remainder of this work discussing the particularities
of prompting and implications for future work. Code for the experiments is
available online at https://github.com/dennlinger/TSAR-2022-Shared-Task
"
Prompting Large Language Model for Machine Translation: A Case Study,Biao Zhang,http://arxiv.org/pdf/2301.07069v2.pdf,2023-01-17,"['cs.cl', 'cs.lg']",2301.07069v2.pdf,"  Research on prompting has shown excellent performance with little or even no
supervised training across many tasks. However, prompting for machine
translation is still under-explored in the literature. We fill this gap by
offering a systematic study on prompting strategies for translation, examining
various factors for prompt template and demonstration example selection. We
further explore the use of monolingual data and the feasibility of
cross-lingual, cross-domain, and sentence-to-document transfer learning in
prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the
testbed show that 1) the number and the quality of prompt examples matter,
where using suboptimal examples degrades translation; 2) several features of
prompt examples, such as semantic similarity, show significant Spearman
correlation with their prompting performance; yet, none of the correlations are
strong enough; 3) using pseudo parallel prompt examples constructed from
monolingual data via zero-shot prompting could improve translation; and 4)
improved performance is achievable by transferring knowledge from prompt
examples selected in other settings. We finally provide an analysis on the
model outputs and discuss several problems that prompting still suffers from.
"
Global Constraints with Prompting for Zero-Shot Event Argument  Classification,Zizheng Lin,http://arxiv.org/pdf/2302.04459v1.pdf,2023-02-09,['cs.cl'],2302.04459v1.pdf,"  Determining the role of event arguments is a crucial subtask of event
extraction. Most previous supervised models leverage costly annotations, which
is not practical for open-domain applications. In this work, we propose to use
global constraints with prompting to effectively tackle event argument
classification without any annotation and task-specific training. Specifically,
given an event and its associated passage, the model first creates several new
passages by prefix prompts and cloze prompts, where prefix prompts indicate
event type and trigger span, and cloze prompts connect each candidate role with
the target argument span. Then, a pre-trained language model scores the new
passages, making the initial prediction. Our novel prompt templates can easily
adapt to all events and argument types without manual effort. Next, the model
regularizes the prediction by global constraints exploiting cross-task,
cross-argument, and cross-event relations. Extensive experiments demonstrate
our model's effectiveness: it outperforms the best zero-shot baselines by 12.5%
and 10.9% F1 on ACE and ERE with given argument spans and by 4.3% and 3.3% F1,
respectively, without given argument spans. We have made our code publicly
available.
"
Large Language Models Are State-of-the-Art Evaluators of Translation  Quality,Tom Kocmi,http://arxiv.org/pdf/2302.14520v2.pdf,2023-02-28,['cs.cl'],2302.14520v2.pdf,"  We describe GEMBA, a GPT-based metric for assessment of translation quality,
which works both with a reference translation and without. In our evaluation,
we focus on zero-shot prompting, comparing four prompt variants in two modes,
based on the availability of the reference. We investigate nine versions of GPT
models, including ChatGPT and GPT-4. We show that our method for translation
quality assessment only works with GPT-3.5 and larger models. Compared to
results from the WMT22 Metrics shared task, our method achieves state-of-the-art
accuracy in both modes when compared to MQM-based human labels. Our results are
valid on the system level for all three WMT22 Metrics shared task language
pairs, namely English into German, English into Russian, and Chinese into
English. This provides a first glimpse into the usefulness of pre-trained,
generative large language models for quality assessment of translations. We
publicly release all our code and prompt templates used for the experiments
described in this work, as well as all corresponding scoring results, to allow
for external validation and reproducibility.
"
The Prompt Artists,Minsuk Chang,http://arxiv.org/pdf/2303.12253v1.pdf,2023-03-22,['cs.hc'],2303.12253v1.pdf,"  This paper examines the art practices, artwork, and motivations of prolific
users of the latest generation of text-to-image models. Through interviews,
observations, and a user survey, we present a sampling of the artistic styles
and describe the developed community of practice around generative AI. We find
that: 1) the text prompt and the resulting image can be considered collectively
as an art piece (prompts as art), and 2) prompt templates (prompts with ``slots''
for others to fill in with their own words) are developed to create generative
art styles. We discover that the value placed by this community on unique
outputs leads to artists seeking specialized vocabulary to produce distinctive
art pieces (e.g., by reading architectural blogs to find phrases to describe
images). We also find that some artists use ""glitches"" in the model that can be
turned into artistic styles in their own right. From these findings, we outline
specific implications for design regarding future prompting and image editing
options.
"
WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation,Jongheon Jeong,http://arxiv.org/pdf/2303.14814v1.pdf,2023-03-26,"['cs.cv', 'cs.ai', 'cs.cl']",2303.14814v1.pdf,"  Visual anomaly classification and segmentation are vital for automating
industrial quality inspection. The focus of prior research in the field has
been on training custom models for each quality inspection task, which requires
task-specific images and annotation. In this paper we move away from this
regime, addressing zero-shot and few-normal-shot anomaly classification and
segmentation. Recently CLIP, a vision-language model, has shown revolutionary
generality with competitive zero-/few-shot performance in comparison to
full-supervision. But CLIP falls short on anomaly classification and
segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a
compositional ensemble on state words and prompt templates and (2) efficient
extraction and aggregation of window/patch/image-level features aligned with
text. We also propose its few-normal-shot extension WinCLIP+, which uses
complementary information from normal images. In MVTec-AD (and VisA), without
further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AUROC in zero-shot
anomaly classification and segmentation while WinCLIP+ does 93.1%/95.2%
(83.8%/96.4%) in 1-normal-shot, surpassing state-of-the-art by large margins.
"
MetricPrompt: Prompting Model as a Relevance Metric for Few-shot Text  Classification,Hongyuan Dong,http://arxiv.org/pdf/2306.08892v1.pdf,2023-06-15,['cs.cl'],2306.08892v1.pdf,"  Prompting methods have shown impressive performance in a variety of text
mining tasks and applications, especially few-shot ones. Despite the promising
prospects, the performance of a prompting model largely depends on the design of
the prompt template and verbalizer. In this work, we propose MetricPrompt, which
eases verbalizer design difficulty by reformulating the few-shot text
classification task into a text pair relevance estimation task. MetricPrompt
adopts the prompting model as the relevance metric, further bridging the gap
between the Pre-trained Language Model's (PLM) pre-training objective and the text
classification task and enabling smooth adaptation of the PLM. Taking a training
sample and a query sample simultaneously, MetricPrompt captures cross-sample
relevance information for accurate relevance estimation. We conduct experiments
on three widely used text classification datasets across four few-shot
settings. Results show that MetricPrompt outperforms manual verbalizer and
other automatic verbalizer design methods across all few-shot settings,
achieving new state-of-the-art (SOTA) performance.
"
TrustGPT: A Benchmark for Trustworthy and Responsible Large Language  Models,Yue Huang,http://arxiv.org/pdf/2306.11507v1.pdf,2023-06-20,"['cs.cl', 'cs.ai']",2306.11507v1.pdf,"  Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models through both active
and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible.
"
DAPrompt: Deterministic Assumption Prompt Learning for Event Causality  Identification,Wei Xiang,http://arxiv.org/pdf/2307.09813v1.pdf,2023-07-19,['cs.cl'],2307.09813v1.pdf,"  Event Causality Identification (ECI) aims at determining whether there is a
causal relation between two event mentions. Conventional prompt learning
designs a prompt template to first predict an answer word and then maps it to
the final decision. Unlike conventional prompts, we argue that predicting an
answer word may not be a necessary prerequisite for the ECI task. Instead, we
can first make a deterministic assumption on the existence of causal relation
between two events and then evaluate its rationality to either accept or reject
the assumption. The design motivation is to make the fullest use of the
encyclopedia-like knowledge embedded in a pre-trained language model. In light
of such considerations, we propose a deterministic assumption prompt learning
model, called DAPrompt, for the ECI task. In particular, we design a simple
deterministic assumption template concatenating with the input event pair,
which includes two masks as predicted events' tokens. We use the probabilities
of predicted events to evaluate the assumption rationality for the final event
causality decision. Experiments on the EventStoryLine corpus and
Causal-TimeBank corpus validate our design objective in terms of significant
performance improvements over the state-of-the-art algorithms.
"
DiffuGen: Adaptable Approach for Generating Labeled Image Datasets using  Stable Diffusion Models,Michael Shenoda,http://arxiv.org/pdf/2309.00248v1.pdf,2023-09-01,"['cs.cv', 'cs.ai']",2309.00248v1.pdf,"  Generating high-quality labeled image datasets is crucial for training
accurate and robust machine learning models in the field of computer vision.
However, the process of manually labeling real images is often time-consuming
and costly. To address these challenges associated with dataset generation, we
introduce ""DiffuGen,"" a simple and adaptable approach that harnesses the power
of stable diffusion models to create labeled image datasets efficiently. By
leveraging stable diffusion models, our approach not only ensures the quality
of generated datasets but also provides a versatile solution for label
generation. In this paper, we present the methodology behind DiffuGen, which
combines the capabilities of diffusion models with two distinct labeling
techniques: unsupervised and supervised. Distinctively, DiffuGen employs prompt
templating for adaptable image generation and textual inversion to enhance
diffusion model capabilities.
"
Mitigating Word Bias in Zero-shot Prompt-based Classifiers,Adian Liusie,http://arxiv.org/pdf/2309.04992v1.pdf,2023-09-10,['cs.cl'],2309.04992v1.pdf,"  Prompt-based classifiers are an attractive approach for zero-shot
classification. However, the precise choice of the prompt template and label
words can largely influence performance, with semantically equivalent settings
often showing notable performance differences. This discrepancy can be partly
attributed to word biases, where the classifier may be biased towards certain classes.
To address this problem, it is possible to optimise classification thresholds
on a labelled data set; however, this forfeits some of the advantages of
prompt-based classifiers. This paper instead approaches the problem by
examining the expected marginal probabilities of the classes. Here,
probabilities are reweighted to have a uniform prior over classes, in an
unsupervised fashion. Further, we draw a theoretical connection between the
class priors and the language models' word prior, and offer the ability to set
a threshold in a zero-resource fashion. We show that matching class priors
correlates strongly with the oracle upper bound performance and demonstrate
large consistent performance gains for prompt settings over a range of NLP
tasks.
"
Prompt-Enhanced Self-supervised Representation Learning for Remote  Sensing Image Understanding,Mingming Zhang,http://arxiv.org/pdf/2310.00022v1.pdf,2023-09-28,['cs.cv'],2310.00022v1.pdf,"  Learning representations through self-supervision on a large-scale, unlabeled
dataset has proven to be highly effective for understanding diverse images,
such as those used in remote sensing image analysis. However, remote sensing
images often have complex and densely populated scenes, with multiple land
objects and no clear foreground objects. This intrinsic property can lead to
false positive pairs in contrastive learning, or missing contextual information
in reconstructive learning, which can limit the effectiveness of existing
self-supervised learning methods. To address these problems, we propose a
prompt-enhanced self-supervised representation learning method that uses a
simple yet efficient pre-training pipeline. Our approach involves utilizing
original image patches as a reconstructive prompt template, and designing a
prompt-enhanced generative branch that provides contextual information through
semantic consistency constraints. We collected a dataset of over 1.28 million
remote sensing images that is comparable to the popular ImageNet dataset, but
without specific temporal or geographical constraints. Our experiments show
that our method outperforms fully supervised learning models and
state-of-the-art self-supervised learning methods on various downstream tasks,
including land cover classification, semantic segmentation, object detection,
and instance segmentation. These results demonstrate that our approach learns
impressive remote sensing representations with high generalization and
transferability.
"
LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation,Zixi Zhang,http://arxiv.org/pdf/2310.04535v1.pdf,2023-10-06,"['cs.lg', 'cs.ar']",2310.04535v1.pdf,"  Test stimuli generation has been a crucial but labor-intensive task in
hardware design verification. In this paper, we revolutionize this process by
harnessing the power of large language models (LLMs) and present a novel
benchmarking framework, LLM4DV. This framework introduces a prompt template for
interactively eliciting test stimuli from the LLM, along with four innovative
prompting improvements to support the pipeline execution and further enhance
its performance. We compare LLM4DV to traditional constrained-random testing
(CRT), using three self-designed design-under-test (DUT) modules. Experiments
demonstrate that LLM4DV excels in efficiently handling straightforward DUT
scenarios, leveraging its ability to employ basic mathematical reasoning and
pre-trained knowledge. While it exhibits reduced efficiency in complex task
settings, it still outperforms CRT in relative terms. The proposed framework
and the DUT modules used in our experiments will be open-sourced upon
publication.
"
Estimating Uncertainty in Multimodal Foundation Models using Public  Internet Data,Shiladitya Dutta,http://arxiv.org/pdf/2310.09926v1.pdf,2023-10-15,['cs.ai'],2310.09926v1.pdf,"  Foundation models are trained on vast amounts of data at scale using
self-supervised learning, enabling adaptation to a wide range of downstream
tasks. At test time, these models exhibit zero-shot capabilities through which
they can classify previously unseen (user-specified) categories. In this paper,
we address the problem of quantifying uncertainty in these zero-shot
predictions. We propose a heuristic approach for uncertainty estimation in
zero-shot settings using conformal prediction with web data. Given a set of
classes at test time, we conduct zero-shot classification with CLIP-style
models using a prompt template, e.g., ""an image of a <category>"", and use the
same template as a search query to source calibration data from the open web.
Given a web-based calibration set, we apply conformal prediction with a novel
conformity score that accounts for potential errors in retrieved web data. We
evaluate the utility of our proposed method in biomedical foundation models;
our preliminary results show that web-based conformal prediction sets achieve
the target coverage with satisfactory efficiency on a variety of biomedical
datasets.
"
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring  Fine-Grained Relevance Labels,Honglei Zhuang,http://arxiv.org/pdf/2310.14122v2.pdf,2023-10-21,['cs.ir'],2310.14122v2.pdf,"  Zero-shot text rankers powered by recent LLMs achieve remarkable ranking
performance by simply prompting. Existing prompts for pointwise LLM rankers
mostly ask the model to choose from binary relevance labels like ""Yes"" and
""No"". However, the lack of intermediate relevance label options may cause the
LLM to provide noisy or biased answers for documents that are partially
relevant to the query. We propose to incorporate fine-grained relevance labels
into the prompt for LLM rankers, enabling them to better differentiate among
documents with different levels of relevance to the query and thus derive a
more accurate ranking. We study two variants of the prompt template, coupled
with different numbers of relevance levels. Our experiments on 8 BEIR data sets
show that adding fine-grained relevance labels significantly improves the
performance of LLM rankers.
"
"Large Language Models can Share Images, Too!",Young-Jun Lee,http://arxiv.org/pdf/2310.14804v1.pdf,2023-10-23,"['cs.cv', 'cs.ai', 'cs.cl']",2310.14804v1.pdf,"  This paper explores the image-sharing capability of Large Language Models
(LLMs), such as InstructGPT, ChatGPT, and GPT-4, in a zero-shot setting,
without the help of visual foundation models. Inspired by the two-stage process
of image-sharing in human dialogues, we propose a two-stage framework that
allows LLMs to predict potential image-sharing turns and generate related image
descriptions using our effective restriction-based prompt template. With
extensive experiments, we unlock the \textit{image-sharing} capability of LLMs
in zero-shot prompting, with GPT-4 achieving the best performance.
Additionally, we uncover the emergent \textit{image-sharing} ability in
zero-shot prompting, demonstrating the effectiveness of restriction-based
prompts in both stages of our framework. Based on this framework, we augment
the PhotoChat dataset with images generated by Stable Diffusion at predicted
turns, namely PhotoChat++. To our knowledge, this is the first study to assess
the image-sharing ability of LLMs in a zero-shot setting without visual
foundation models. The source code and the dataset will be released after
publication.
"
KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization  for Relation Extraction,Xiang Chen,http://arxiv.org/pdf/2104.07650v7.pdf,2021-04-15,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2104.07650v7.pdf,"  Recently, prompt-tuning has achieved promising results for specific few-shot
classification tasks. The core idea of prompt-tuning is to insert text pieces
(i.e., templates) into the input and transform a classification task into a
masked language modeling problem. However, for relation extraction, determining
an appropriate prompt template requires domain expertise, and it is cumbersome
and time-consuming to obtain a suitable label word. Furthermore, there exists
abundant semantic and prior knowledge among the relation labels that cannot be
ignored. To this end, we focus on incorporating knowledge among relation labels
into prompt-tuning for relation extraction and propose a Knowledge-aware
Prompt-tuning approach with synergistic optimization (KnowPrompt).
Specifically, we inject latent knowledge contained in relation labels into
prompt construction with learnable virtual type words and answer words. Then,
we synergistically optimize their representation with structured constraints.
Extensive experimental results on five datasets with standard and low-resource
settings demonstrate the effectiveness of our approach. Our code and datasets
are available in https://github.com/zjunlp/KnowPrompt for reproducibility.
"
Prompt-based Zero-shot Relation Extraction with Semantic Knowledge  Augmentation,Jiaying Gong,http://arxiv.org/pdf/2112.04539v2.pdf,2021-12-08,['cs.cl'],2112.04539v2.pdf,"  In relation triplet extraction (RTE), recognizing unseen (new) relations for
which there are no training instances is a challenging task. Efforts have been
made to recognize unseen relations based on question-answering models or
relation descriptions. However, these approaches miss the semantic information
about connections between seen and unseen relations. In this paper, we propose
a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize
unseen relations under the zero-shot setting. We present a new word-level
analogy-based sentence translation rule and generate augmented instances with
unseen relations from instances with seen relations using that new rule. We
design prompts with weighted virtual label construction based on an external
knowledge graph to integrate semantic knowledge information learned from seen
relations. Instead of using the actual label sets in the prompt template, we
construct weighted virtual label words. We learn the representations of both
seen and unseen relations with augmented instances and prompts. We then
calculate the distance between the generated representations using prototypical
networks to predict unseen relations. Extensive experiments conducted on three
public datasets FewRel, Wiki-ZSL, and NYT, show that ZS-SKA outperforms
state-of-the-art methods under the zero-shot scenarios. Our experimental
results also demonstrate the effectiveness and robustness of ZS-SKA.
"
DynaMaR: Dynamic Prompt with Mask Token Representation,Xiaodi Sun,http://arxiv.org/pdf/2206.02982v1.pdf,2022-06-07,"['cs.cl', 'cs.lg']",2206.02982v1.pdf,"  Recent research has shown that large language models pretrained using
unsupervised approaches can achieve significant performance improvement on many
downstream tasks. Typically when adapting these language models to downstream
tasks, like a classification or regression task, we employ a fine-tuning
paradigm in which the sentence representation from the language model is input
to a task-specific head; the model is then fine-tuned end-to-end. However, with
the emergence of models like GPT-3, prompt-based fine-tuning has been proven to
be a successful approach for few-shot tasks. Inspired by this work, we study
discrete prompt technologies in practice. There are two issues that arise with
the standard prompt approach. First, it can overfit on the prompt template.
Second, it requires manual effort to formulate the downstream task as a
language model problem. In this paper, we propose an improvement to
prompt-based fine-tuning that addresses these two issues. We refer to our
approach as DynaMaR -- Dynamic Prompt with Mask Token Representation. Results
show that DynaMaR can achieve an average improvement of 10% in few-shot
settings and an improvement of 3.7% in data-rich settings over the standard
fine-tuning approach on four e-commerce applications.
"
Rethinking the Event Coding Pipeline with Prompt Entailment,Clément Lefebvre,http://arxiv.org/pdf/2210.05257v2.pdf,2022-10-11,"['cs.cl', 'cs.hc', 'cs.lg']",2210.05257v2.pdf,"  For monitoring crises, political events are extracted from the news. The
large amount of unstructured full-text event descriptions makes a case-by-case
analysis unmanageable, particularly for low-resource humanitarian aid
organizations. This creates a demand to classify events into event types, a
task referred to as event coding. Typically, domain experts craft an event type
ontology, annotators label a large dataset and technical experts develop a
supervised coding system. In this work, we propose PR-ENT, a new event coding
approach that is more flexible and resource-efficient, while maintaining
competitive accuracy: first, we extend an event description such as ""Military
injured two civilians'' by a template, e.g. ""People were [Z]"" and prompt a
pre-trained (cloze) language model to fill the slot Z. Second, we select answer
candidates Z* = {""injured'', ""hurt""...} by treating the event description as
premise and the filled templates as hypothesis in a textual entailment task.
This allows domain experts to draft the codebook directly as labeled prompts
and interpretable answer candidates. This human-in-the-loop process is guided
by our interactive codebook design tool. We evaluate PR-ENT in several
robustness checks: perturbing the event description and prompt template,
restricting the vocabulary and removing contextual information.
"
Visual Prompting for Adversarial Robustness,Aochuan Chen,http://arxiv.org/pdf/2210.06284v4.pdf,2022-10-12,"['cs.cv', 'cs.cr', 'cs.lg']",2210.06284v4.pdf,"  In this work, we leverage visual prompting (VP) to improve adversarial
robustness of a fixed, pre-trained model at testing time. Compared to
conventional adversarial defenses, VP allows us to design universal (i.e.,
data-agnostic) input prompting templates, which have plug-and-play capabilities
at testing time to achieve desired model performance without introducing much
computation overhead. Although VP has been successfully applied to improving
model generalization, it remains elusive whether and how it can be used to
defend against adversarial attacks. We investigate this problem and show that
the vanilla VP approach is not effective in adversarial defense since a
universal input prompt lacks the capacity for robust learning against
sample-specific adversarial perturbations. To circumvent it, we propose a new
VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate
class-wise visual prompts so as to not only leverage the strengths of ensemble
prompts but also optimize their interrelations to improve model robustness. Our
experiments show that C-AVP outperforms the conventional VP method, with 2.1X
standard accuracy gain and 2X robust accuracy gain. Compared to classical
test-time defenses, C-AVP also yields a 42X inference time speedup.
"
Continuous Prompt Tuning Based Textual Entailment Model for E-commerce  Entity Typing,Yibo Wang,http://arxiv.org/pdf/2211.02483v1.pdf,2022-11-04,['cs.cl'],2211.02483v1.pdf,"  The explosion of e-commerce has caused the need for processing and analysis
of product titles, such as entity typing in product titles. However, the rapid
activity in e-commerce has led to the rapid emergence of new entities, which is
difficult to handle with general entity typing. Besides, product titles in
e-commerce have very different language styles from text data in general
domain. In order to handle new entities in product titles and address the
special language styles problem of product titles in e-commerce domain, we
propose our textual entailment model with continuous prompt tuning based
hypotheses and fusion embeddings for e-commerce entity typing. First, we
reformulate the entity typing task into a textual entailment problem to handle
new entities that are not present during training. Second, we design a model to
automatically generate textual entailment hypotheses using a continuous prompt
tuning method, which can generate better textual entailment hypotheses without
manual design. Third, we utilize the fusion embeddings of BERT embedding and
CharacterBERT embedding with a two-layer MLP classifier to solve the problem
that the language styles of product titles in e-commerce are different from
that of the general domain. To analyze the effect of each contribution, we compare
the performance of the entity typing and textual entailment models, and conduct
ablation studies on continuous prompt tuning and fusion embeddings. We also
evaluate the impact of different prompt template initializations for
continuous prompt tuning. We show that our proposed model improves the average F1
score by around 2% compared to the baseline BERT entity typing model.
"
Multi-label Few-shot ICD Coding as Autoregressive Generation with Prompt,Zhichao Yang,http://arxiv.org/pdf/2211.13813v2.pdf,2022-11-24,"['cs.cl', 'cs.ai']",2211.13813v2.pdf,"  Automatic International Classification of Diseases (ICD) coding aims to
assign multiple ICD codes to a medical note with an average of 3,000+ tokens.
This task is challenging due to the high-dimensional space of multi-label
assignment (155,000+ ICD code candidates) and the long-tail challenge: many
ICD codes are infrequently assigned, yet infrequent ICD codes are clinically
important. This study addresses the long-tail challenge by transforming this
multi-label classification task into an autoregressive generation task.
Specifically, we first introduce a novel pretraining objective to generate free
text diagnoses and procedures using the SOAP structure, the medical logic
physicians use for note documentation. Second, instead of directly predicting
the high dimensional space of ICD codes, our model generates the lower
dimension of text descriptions, which then infer ICD codes. Third, we designed
a novel prompt template for multi-label classification. We evaluate our
Generation with Prompt model with the benchmark of all code assignment
(MIMIC-III-full) and the few-shot ICD code assignment evaluation benchmark
(MIMIC-III-few). Experiments on MIMIC-III-few show that our model performs with
a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full
SOTA model (macro F1 4.3) and the model specifically designed for the few/zero-shot
setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross
attention reranker with prompts, to integrate previous SOTA and our best
few-shot coding predictions. Experiments on MIMIC-III-full show that our
ensemble learner substantially improves both macro and micro F1, from 10.4 to
14.6 and from 58.2 to 59.1, respectively.
"
LabelPrompt: Effective Prompt-based Learning for Relation Classification,Wenjie Zhang,http://arxiv.org/pdf/2302.08068v2.pdf,2023-02-16,"['cs.cl', 'cs.ai', 'cs.ir', 'cs.lg']",2302.08068v2.pdf,"  Recently, prompt-based learning has gained popularity across many natural
language processing (NLP) tasks by reformulating them into a cloze-style format
to better align pre-trained language models (PLMs) with downstream tasks.
However, applying this approach to relation classification poses unique
challenges. Specifically, associating natural language words that fill the
masked token with semantic relation labels (\textit{e.g.}
\textit{``org:founded\_by}'') is difficult. To address this challenge, this
paper presents a novel prompt-based learning method, namely LabelPrompt, for
the relation classification task. Motivated by the intuition to ``GIVE MODEL
CHOICES!'', we first define additional tokens to represent relation labels,
regarding these tokens as the verbaliser with semantic initialisation, and
explicitly construct them with a prompt template method. Then, to mitigate
inconsistency between predicted relations and given entities, we implement an
entity-aware module with contrastive learning. Last, we apply an attention
query strategy within the self-attention layer to differentiate prompt tokens
from sequence tokens. Together, these strategies enhance the adaptability of
prompt-based learning, especially when only a small labelled dataset is
available. Comprehensive experiments on benchmark datasets demonstrate the
superiority of our method, particularly in the few-shot scenario.
"
Adapting Prompt for Few-shot Table-to-Text Generation,Zhixin Guo,http://arxiv.org/pdf/2302.12468v2.pdf,2023-02-24,['cs.cl'],2302.12468v2.pdf,"  Pretrained language models (PLMs) have made remarkable progress in
table-to-text generation tasks. However, the lack of domain-specific knowledge
makes it challenging to bridge the topological gap between tabular data and
text, especially in real-world applications with limited resources. To mitigate
the limitation of insufficient labeled data, we propose a novel framework:
Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt
prompt templates of domain-specific knowledge into the model, which brings at
least three benefits: (1) it injects representation of normal table-related
descriptions to bridge the topological gap between tabular data and texts; (2)
it enables us to make full use of large amounts of unlabeled domain-specific knowledge,
which can alleviate the PLMs' inherent shortcoming of lacking domain
knowledge; (3) it allows us to design various tasks to explore the
domain-specific knowledge. Extensive experiments and analyses are conducted on
three open-domain few-shot natural language generation (NLG) data sets: Humans,
Songs, and Books. Compared to previous state-of-the-art approaches, our model
achieves superior performance in terms of both fluency and accuracy.
"
Model-tuning Via Prompts Makes NLP Models Adversarially Robust,Mrigank Raman,http://arxiv.org/pdf/2303.07320v1.pdf,2023-03-13,"['cs.cl', 'cs.lg']",2303.07320v1.pdf,"  In recent years, NLP practitioners have converged on the following practice:
(i) import an off-the-shelf pretrained (masked) language model; (ii) append a
multilayer perceptron atop the CLS token's hidden representation (with randomly
initialized weights); and (iii) fine-tune the entire model on a downstream task
(MLP). This procedure has produced massive gains on standard NLP benchmarks,
but these models remain brittle, even to mild adversarial perturbations, such
as word-level synonym substitutions. In this work, we demonstrate surprising
gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an
alternative method of adapting to downstream tasks. Rather than modifying the
model (by appending an MLP head), MVP instead modifies the input (by appending
a prompt template). Across three classification datasets, MVP improves
performance against adversarial word-level synonym substitutions by an average
of 8% over standard methods and even outperforms adversarial training-based
state-of-the-art defenses by 3.5%. By combining MVP with adversarial training, we
achieve further improvements in robust accuracy while maintaining clean
accuracy. Finally, we conduct ablations to investigate the mechanism underlying
these gains. Notably, we find that the main causes of vulnerability of MLP can
be attributed to the misalignment between pre-training and fine-tuning tasks,
and the randomly initialized MLP parameters. Code is available at
https://github.com/acmi-lab/mvp
"
"PromptAid: Prompt Exploration, Perturbation, Testing and Iteration using  Visual Analytics for Large Language Models",Aditi Mishra,http://arxiv.org/pdf/2304.01964v2.pdf,2023-04-04,['cs.hc'],2304.01964v2.pdf,"  Large Language Models (LLMs) have gained widespread popularity due to their
ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple
natural language prompt. Part of the appeal for LLMs is their approachability
to the general public, including individuals with no prior technical experience
in NLP techniques. However, natural language prompts can vary significantly in
terms of their linguistic structure, context, and other semantics. Modifying
one or more of these aspects can result in significant differences in task
performance. Non-expert users may find it challenging to identify the changes
needed to improve a prompt, especially when they lack domain-specific knowledge
and lack appropriate feedback. To address this challenge, we present PromptAid,
a visual analytics system designed to interactively create, refine, and test
prompts through exploration, perturbation, testing, and iteration. PromptAid
uses multiple, coordinated visualizations which allow users to improve prompts
by using the three strategies: keyword perturbations, paraphrasing
perturbations, and obtaining the best set of in-context few-shot examples.
PromptAid was designed through an iterative prototyping process involving NLP
experts and was evaluated through quantitative and qualitative assessments for
LLMs. Our findings indicate that PromptAid helps users to iterate over prompt
template alterations with less cognitive overhead, generate diverse prompts
with the help of recommendations, and analyze the performance of the generated
prompts while surpassing existing state-of-the-art prompting interfaces in
performance.
"
FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion  Vision-Language Pre-training,Yunpeng Han,http://arxiv.org/pdf/2304.05051v1.pdf,2023-04-11,"['cs.cv', 'cs.cl']",2304.05051v1.pdf,"  Fashion vision-language pre-training models have shown efficacy for a wide
range of downstream tasks. However, general vision-language pre-training models
pay less attention to fine-grained domain features, while these features are
important in distinguishing the specific domain tasks from general tasks. We
propose a method for fine-grained fashion vision-language pre-training based on
fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained
multi-modalities fashion attributes and characteristics. Firstly, we propose
the fashion symbols, a novel abstract fashion concept layer, to represent
different fashion items and to generalize various kinds of fine-grained fashion
features, making modelling fine-grained attributes more effective. Secondly,
the attributes prompt method is proposed to make the model learn specific
attributes of fashion items explicitly. We design proper prompt templates
according to the format of fashion data. Comprehensive experiments are
conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and
FashionSAP gets SOTA performances for four popular fashion tasks. The ablation
study also shows the proposed abstract fashion symbols, and the attribute
prompt method enables the model to acquire fine-grained semantics in the
fashion domain effectively. The obvious performance gains from FashionSAP
provide a new baseline for future fashion task research.
"
"A study on Prompt Design, Advantages and Limitations of ChatGPT for Deep  Learning Program Repair",Jialun Cao,http://arxiv.org/pdf/2304.08191v1.pdf,2023-04-17,['cs.se'],2304.08191v1.pdf,"  ChatGPT has revolutionized many research and industrial fields. ChatGPT has
shown great potential in software engineering to boost various traditional
tasks such as program repair, code understanding, and code generation. However,
whether automatic program repair (APR) applies to deep learning (DL) programs
is still unknown. DL programs, whose decision logic is not explicitly encoded
in the source code, have posed unique challenges to APR. To repair DL
programs, an APR approach needs not only to parse the source code syntactically
but also to understand the code intention. Even with the best prior work, the
performance of fault localization is still far from satisfactory (only
about 30%). Therefore, in this paper, we explore ChatGPT's capability for DL
program repair by asking three research questions. (1) Can ChatGPT debug DL
programs effectively? (2) How can ChatGPT's repair performance be improved by
prompting? (3) In which way can dialogue help facilitate the repair? On top of
that, we categorize the common aspects useful for prompt design for DL program
repair. Also, we propose various prompt templates to facilitate the performance
and summarize the advantages and disadvantages of ChatGPT's abilities such as
detecting bad code smell, code refactoring, and detecting API
misuse/deprecation.
"
Prompt-Learning for Cross-Lingual Relation Extraction,Chiaming Hsu,http://arxiv.org/pdf/2304.10354v1.pdf,2023-04-20,['cs.cl'],2304.10354v1.pdf,"  Relation Extraction (RE) is a crucial task in Information Extraction, which
entails predicting relationships between entities within a given sentence.
However, extending pre-trained RE models to other languages is challenging,
particularly in real-world scenarios where Cross-Lingual Relation Extraction
(XRE) is required. Despite recent advancements in Prompt-Learning, which
involves transferring knowledge from Multilingual Pre-trained Language Models
(PLMs) to diverse downstream tasks, there is limited research on the effective
use of multilingual PLMs with prompts to improve XRE. In this paper, we present
a novel XRE algorithm based on Prompt-Tuning, referred to as Prompt-XRE. To
evaluate its effectiveness, we design and implement several prompt templates,
including hard, soft, and hybrid prompts, and empirically test their
performance on competitive multilingual PLMs, specifically mBART. Our extensive
experiments, conducted on the low-resource ACE05 benchmark across multiple
languages, demonstrate that our Prompt-XRE algorithm significantly outperforms
both vanilla multilingual PLMs and other existing models, achieving
state-of-the-art performance in XRE. To further show the generalization of our
Prompt-XRE on larger data scales, we construct and release a new XRE dataset,
WMT17-EnZh XRE, containing 0.9M English-Chinese pairs extracted from WMT 2017
parallel corpus. Experiments on WMT17-EnZh XRE also show the effectiveness of
our Prompt-XRE against other competitive baselines. The code and newly
constructed dataset are freely available at
\url{https://github.com/HSU-CHIA-MING/Prompt-XRE}.
"
CitePrompt: Using Prompts to Identify Citation Intent in Scientific  Papers,Avishek Lahiri,http://arxiv.org/pdf/2304.12730v2.pdf,2023-04-25,['cs.cl'],2304.12730v2.pdf,"  Citations in scientific papers not only help us trace the intellectual
lineage but also are a useful indicator of the scientific significance of the
work. Citation intents prove beneficial as they specify the role of the
citation in a given context. In this paper, we present CitePrompt, a framework
which uses the hitherto unexplored approach of prompt-based learning for
citation intent classification. We argue that with the proper choice of the
pretrained language model, the prompt template, and the prompt verbalizer, we
can not only get results that are better than or comparable to those obtained
with the state-of-the-art methods but also do it with much less exterior
information about the scientific document. We report state-of-the-art results
on the ACL-ARC dataset, and also show significant improvement on the SciCite
dataset over all baseline models except one. As suitably large labelled
datasets for citation intent classification can be quite hard to find, we also,
in a first, propose converting this task to the few-shot and zero-shot
settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the
zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and
10-shot settings, respectively.
"
Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner,Zhengxiang Shi,http://arxiv.org/pdf/2305.01711v4.pdf,2023-05-02,['cs.cl'],2305.01711v4.pdf,"  Language models (LMs) trained on vast quantities of unlabelled data have
greatly advanced the field of natural language processing (NLP). In this study,
we re-visit the widely accepted notion in NLP that continued pre-training LMs
on task-related texts improves the performance of fine-tuning (FT) in
downstream tasks. Through experiments on eight single-sentence tasks and eight
sentence-pair tasks in both semi-supervised and fully-supervised settings, we
find that conventional continued pre-training does not consistently provide
benefits and can even be detrimental for sentence-pair tasks or when
prompt-based FT is used. To tackle these issues, we propose Prompt-based
Continued Pre-training (PCP), which combines the idea of instruction tuning
with conventional continued pre-training. Our approach aims to improve the
performance of prompt-based FT by presenting both task-related texts and prompt
templates to LMs through unsupervised pre-training objectives before
fine-tuning for the target task. Our empirical evaluations on 21 benchmarks
demonstrate that the PCP consistently improves the performance of
state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both
semi-supervised and fully-supervised settings, even with only hundreds of
unlabelled examples. Additionally, prompt-based FT with the PCP outperforms
state-of-the-art semi-supervised approaches with greater simplicity,
eliminating the need for an iterative process and extra data augmentation. Our
further analysis explores the performance lower bound of the PCP and reveals
that the advantages of PCP persist across different sizes of models and
datasets.
"
Large Language Models are Zero-Shot Rankers for Recommender Systems,Yupeng Hou,http://arxiv.org/pdf/2305.08845v1.pdf,2023-05-15,"['cs.ir', 'cs.cl']",2305.08845v1.pdf,"  Recently, large language models (LLMs) (e.g. GPT-4) have demonstrated
impressive general-purpose task-solving abilities, including the potential to
approach recommendation tasks. Along this line of research, this work aims to
investigate the capacity of LLMs that act as the ranking model for recommender
systems. To conduct our empirical study, we first formalize the recommendation
problem as a conditional ranking task, considering sequential interaction
histories as conditions and the items retrieved by the candidate generation
model as candidates. We adopt a specific prompting approach to solving the
ranking task by LLMs: we carefully design the prompting template by including
the sequential interaction history, the candidate items, and the ranking
instruction. We conduct extensive experiments on two widely-used datasets for
recommender systems and derive several key findings for the use of LLMs in
recommender systems. We show that LLMs have promising zero-shot ranking
abilities, even competitive to or better than conventional recommendation
models on candidates retrieved by multiple candidate generators. We also
demonstrate that LLMs struggle to perceive the order of historical interactions
and can be affected by biases like position bias, while these issues can be
alleviated via specially designed prompting and bootstrapping strategies. The
code to reproduce this work is available at
https://github.com/RUCAIBox/LLMRank.
"
TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse  Relation Recognition,Wei Xiang,http://arxiv.org/pdf/2305.10866v1.pdf,2023-05-18,['cs.cl'],2305.10866v1.pdf,"  Implicit Discourse Relation Recognition (IDRR) aims at classifying the
relation sense between two arguments without an explicit connective. Recently,
ConnPrompt (Xiang et al., COLING 2022) has leveraged powerful prompt
learning for IDRR based on the fusion of multi-prompt decisions from three
different yet highly similar connective prediction templates. Instead of
multi-prompt ensembling, we propose to design auxiliary tasks with enlightened
prompt learning for the IDRR task. Although an auxiliary task is not used to
directly output final prediction, we argue that during the joint training some
of its learned features can be useful to boost the main task. In light of such
motivations, we propose a task enlightenment prompt learning model, called
TEPrompt, to fuse learned features from three related tasks for IDRR. In
particular, the TEPrompt contains three tasks, viz., Discourse Relation
Recognition (DRR), Sense Semantics Classification (SSC) and Annotated
Connective Prediction (ACP), each with a unique prompt template and an answer
space. In the training phase, we jointly train three prompt learning tasks with
shared argument representation. In the testing phase, we only take the DRR
output with fused features as the final IDRR decision. Experiments under the
same conditions show that the proposed TEPrompt outperforms ConnPrompt, which
can be attributed to the improved decision features and language models that
benefit from the joint training of auxiliary tasks.
"
Prompting ChatGPT in MNER: Enhanced Multimodal Named Entity Recognition  with Auxiliary Refined Knowledge,Jinyuan Li,http://arxiv.org/pdf/2305.12212v2.pdf,2023-05-20,['cs.cl'],2305.12212v2.pdf,"  Multimodal Named Entity Recognition (MNER) on social media aims to enhance
textual entity prediction by incorporating image-based clues. Existing studies
mainly focus on maximizing the utilization of pertinent image information or
incorporating external knowledge from explicit knowledge bases. However, these
methods either neglect the necessity of providing the model with external
knowledge, or encounter issues of high redundancy in the retrieved knowledge.
In this paper, we present PGIM -- a two-stage framework that aims to leverage
ChatGPT as an implicit knowledge base and enable it to heuristically generate
auxiliary knowledge for more efficient entity prediction. Specifically, PGIM
contains a Multimodal Similar Example Awareness module that selects suitable
examples from a small number of predefined artificial samples. These examples
are then integrated into a formatted prompt template tailored to the MNER task and
guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired
knowledge is integrated with the original text and fed into a downstream model
for further processing. Extensive experiments show that PGIM outperforms
state-of-the-art methods on two classic MNER datasets and exhibits stronger
robustness and generalization capability.
"
"Paradigm Shift in Sustainability Disclosure Analysis: Empowering  Stakeholders with CHATREPORT, a Language Model-Based Tool",Jingwei Ni,http://arxiv.org/pdf/2306.15518v1.pdf,2023-06-27,['cs.cl'],2306.15518v1.pdf,"  This paper introduces a novel approach to enhance Large Language Models
(LLMs) with expert knowledge to automate the analysis of corporate
sustainability reports by benchmarking them against the Task Force for
Climate-Related Financial Disclosures (TCFD) recommendations. Corporate
sustainability reports are crucial in assessing organizations' environmental
and social risks and impacts. However, the vast amount of information in these
reports often makes human analysis too costly. As a result, only a few
entities worldwide have the resources to analyze these reports, which could
lead to a lack of transparency. While AI-powered tools can automatically
analyze the data, they are prone to inaccuracies as they lack domain-specific
expertise. This paper introduces a novel approach to enhance LLMs with expert
knowledge to automate the analysis of corporate sustainability reports. We
christen our tool CHATREPORT, and apply it in a first use case to assess
corporate climate risk disclosures following the TCFD recommendations.
CHATREPORT results from collaborating with experts in climate science, finance,
economic policy, and computer science, demonstrating how domain experts can be
involved in developing AI tools. We make our prompt templates, generated data,
and scores available to the public to encourage transparency.
"
TIAM -- A Metric for Evaluating Alignment in Text-to-Image Generation,Paul Grimal,http://arxiv.org/pdf/2307.05134v1.pdf,2023-07-11,"['cs.cv', 'cs.ai', 'cs.cl', 'cs.lg']",2307.05134v1.pdf,"  The progress in the generation of synthetic images has made it crucial to
assess their quality. While several metrics have been proposed to assess the
rendering of images, it is crucial for Text-to-Image (T2I) models, which
generate images based on a prompt, to consider additional aspects such as the
extent to which the generated image matches the important content of the prompt.
Moreover, although the generated images usually result from a random starting
point, the influence of this starting point is generally not considered. In this article,
we propose a new metric based on prompt templates to study the alignment
between the content specified in the prompt and the corresponding generated
images. It allows us to better characterize the alignment in terms of the type
of the specified objects, their number, and their color. We conducted a study
of several recent T2I models across various aspects. An additional interesting
result we obtained with our approach is that image quality can vary drastically
depending on the latent noise used as a seed for the images. We also quantify
the influence of the number of concepts in the prompt, their order as well as
their (color) attributes. Finally, our method allows us to identify some latent
seeds that produce better images than others, opening novel directions of
research on this understudied topic.
"
LLM-FuncMapper: Function Identification for Interpreting Complex Clauses  in Building Codes via LLM,Zhe Zheng,http://arxiv.org/pdf/2308.08728v1.pdf,2023-08-17,"['cs.ai', 'cs.cl']",2308.08728v1.pdf,"  As a vital stage of automated rule checking (ARC), rule interpretation of
regulatory texts requires considerable effort. However, interpreting regulatory
clauses with implicit properties or complex computational logic is still
challenging due to the lack of domain knowledge and limited expressibility of
conventional logic representations. Thus, LLM-FuncMapper, an approach to
identifying predefined functions needed to interpret various regulatory clauses
based on the large language model (LLM), is proposed. First, through systematic
analysis of building codes, a series of atomic functions is defined to capture
shared computational logics of implicit properties and complex constraints,
creating a database of common blocks for interpreting regulatory clauses. Then,
a prompt template with the chain of thought is developed and further enhanced
with a classification-based tuning strategy, to enable common LLMs for
effective function identification. Finally, the proposed approach is validated
with statistical analysis, experiments, and proof of concept. Statistical
analysis reveals a long-tail distribution and high expressibility of the
developed function database, with which almost 100% of computer-processible
clauses can be interpreted and represented as computer-executable codes.
Experiments show that LLM-FuncMapper achieves promising results in identifying
relevant predefined functions for rule interpretation. Further proof of concept
in automated rule interpretation also demonstrates the possibility of
LLM-FuncMapper in interpreting complex regulatory clauses. To the best of our
knowledge, this study is the first attempt to introduce LLM for understanding
and interpreting complex regulatory clauses, which may shed light on further
adoption of LLM in the construction domain.
"
Prompt-Based Length Controlled Generation with Reinforcement Learning,Renlong Jie,http://arxiv.org/pdf/2308.12030v2.pdf,2023-08-23,"['cs.cl', 'cs.ai', 'cs.lg']",2308.12030v2.pdf,"  Large language models (LLMs) like ChatGPT and GPT-4 have attracted great
attention given their surprising performance on a wide range of NLP tasks.
Length controlled generation of LLMs emerges as an important topic, which
enables users to fully leverage the capability of LLMs in more real-world
scenarios like generating a proper answer or essay of a desired length. In
addition, the autoregressive generation in LLMs is extremely time-consuming,
while the ability to control the generated length can reduce the inference
cost by limiting the length. Therefore, we propose a prompt-based length
control method to achieve high-accuracy length controlled generation. In
particular, we adopt reinforcement learning with the reward signal given by
either trainable or rule-based reward models, which further enhances the
length-control ability of LLMs by rewarding outputs that follow pre-defined
control instructions. To enable rule-based inference, we also introduce a standard
prompt extractor to collect the standard control information from users' input.
Experiments show that our method significantly improves the accuracy of
prompt-based length control for the summarization task on popular datasets like
CNNDM and NYT. Both the standard prompt extractor and the RL-tuned model show
strong generalization ability to unseen control prompt templates.
"
LLM Powered Sim-to-real Transfer for Traffic Signal Control,Longchao Da,http://arxiv.org/pdf/2308.14284v3.pdf,2023-08-28,"['cs.ai', 'h.4.0']",2308.14284v3.pdf,"  Numerous solutions are proposed for the Traffic Signal Control (TSC) tasks
aiming to provide efficient transportation and mitigate congestion waste.
Recently, promising results have been attained by Reinforcement Learning (RL)
methods through trial and error in simulators, bringing confidence in solving
cities' congestion headaches. However, there still exist performance gaps when
simulator-trained policies are deployed to the real world. This issue is mainly
introduced by the system dynamic difference between the training simulator and
the real-world environments. Large Language Models (LLMs) are trained on
massive knowledge and have proved to be equipped with astonishing inference abilities.
In this work, we leverage LLMs to understand and profile the system dynamics by
a prompt-based grounded action transformation. By accepting a cloze prompt
template and filling in the answer based on accessible context, the pre-trained
LLM's inference ability is exploited to understand how weather conditions,
traffic states, and road types influence traffic dynamics. Aware of this, the
policy's actions are grounded in realistic dynamics, thus helping the agent
learn a more realistic policy. We
conduct experiments using DQN to show the effectiveness of the proposed
PromptGAT's ability in mitigating the performance gap from simulation to
reality (sim-to-real).
"
AnoVL: Adapting Vision-Language Models for Unified Zero-shot Anomaly  Localization,Hanqiu Deng,http://arxiv.org/pdf/2308.15939v1.pdf,2023-08-30,['cs.cv'],2308.15939v1.pdf,"  Contrastive Language-Image Pre-training (CLIP) models have shown promising
performance on zero-shot visual recognition tasks by learning visual
representations under natural language supervision. Recent studies attempt the
use of CLIP to tackle zero-shot anomaly detection by matching images with
normal and abnormal state prompts. However, since CLIP focuses on building
correspondence between paired text prompts and global image-level
representations, the lack of patch-level vision-to-text alignment limits its
capability on precise visual anomaly localization. In this work, we introduce a
training-free adaptation (TFA) framework of CLIP for zero-shot anomaly
localization. In the visual encoder, we introduce a training-free value-wise
attention mechanism to extract intrinsic local tokens of CLIP for patch-level
local description. From the perspective of text supervision, we particularly
design a unified domain-aware contrastive state prompting template. On top of
the proposed TFA, we further introduce a test-time adaptation (TTA) mechanism
to refine anomaly localization results, where a layer of trainable parameters
in the adapter is optimized using TFA's pseudo-labels and synthetic
noise-corrupted tokens. With both TFA and TTA adaptation, we significantly
exploit the potential of CLIP for zero-shot anomaly localization and
demonstrate the effectiveness of our proposed methods on various datasets.
"
Investigating the Applicability of Self-Assessment Tests for Personality  Measurement of Large Language Models,Akshat Gupta,http://arxiv.org/pdf/2309.08163v1.pdf,2023-09-15,"['cs.cl', 'cs.ai']",2309.08163v1.pdf,"  As large language models (LLM) evolve in their capabilities, various recent
studies have tried to quantify their behavior using psychological tools created
to study human behavior. One such example is the measurement of ""personality""
of LLMs using personality self-assessment tests. In this paper, we take three
such studies on personality measurement of LLMs that use personality
self-assessment tests created to study human behavior. We use the prompts used
in these three different papers to measure the personality of the same LLM. We
find that all three prompts lead to very different personality scores. This simple
test reveals that personality self-assessment scores in LLMs depend on the
subjective choice of the prompter. Since we don't know the ground truth value
of personality scores for LLMs as there is no correct answer to such questions,
there's no way of claiming if one prompt is more or less correct than the
other. We then introduce the property of option order symmetry for personality
measurement of LLMs. Since most of the self-assessment tests exist in the form
of multiple-choice questions (MCQs), we argue that the scores should
also be robust to not just the prompt template but also the order in which the
options are presented. This test unsurprisingly reveals that the answers to the
self-assessment tests are not robust to the order of the options. These simple
tests, conducted on ChatGPT and Llama2 models, show that self-assessment personality
tests created for humans are not appropriate for measuring personality in LLMs.
"
InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision  Generalists,Yulu Gan,http://arxiv.org/pdf/2310.00390v1.pdf,2023-09-30,['cs.cv'],2310.00390v1.pdf,"  Recent advances in generative diffusion models have enabled text-controlled
synthesis of realistic and diverse images with impressive quality. Despite
these remarkable advances, the application of text-to-image generative models
in computer vision for standard visual recognition tasks remains limited. The
current de facto approach for these tasks is to design model architectures and
loss functions that are tailored to the task at hand. In this paper, we develop
a unified language interface for computer vision tasks that abstracts away
task-specific design choices and enables task execution by following natural
language instructions. Our approach involves casting multiple computer vision
tasks as text-to-image generation problems. Here, the text represents an
instruction describing the task, and the resulting image is a visually-encoded
task output. To train our model, we pool commonly-used computer vision datasets
covering a range of tasks, including segmentation, object detection, depth
estimation, and classification. We then use a large language model to
paraphrase prompt templates that convey the specific tasks to be conducted on
each image, and through this process, we create a multi-modal and multi-task
training dataset comprising input and output images along with annotated
instructions. Following the InstructPix2Pix architecture, we apply
instruction-tuning to a text-to-image diffusion model using our constructed
dataset, steering its functionality from a generative model to an
instruction-guided multi-task vision learner. Experiments demonstrate that our
model, dubbed InstructCV, performs competitively compared to other generalist
and task-specific vision models. Moreover, it exhibits compelling
generalization capabilities to unseen data, categories, and user instructions.
"
Revisit Input Perturbation Problems for LLMs: A Unified Robustness  Evaluation Framework for Noisy Slot Filling Task,Guanting Dong,http://arxiv.org/pdf/2310.06504v1.pdf,2023-10-10,"['cs.cl', 'cs.ai', 'cs.lg']",2310.06504v1.pdf,"  With the increasing capabilities of large language models (LLMs), these
high-performance models have achieved state-of-the-art results on a wide range
of natural language processing (NLP) tasks. However, the models' performance on
commonly-used benchmark datasets often fails to accurately reflect their
reliability and robustness when applied to real-world noisy data. To address
these challenges, we propose a unified robustness evaluation framework based on
the slot-filling task to systematically evaluate the dialogue understanding
capability of LLMs in diverse input perturbation scenarios. Specifically, we
construct an input perturbation evaluation dataset, Noise-LLM, which contains
five types of single perturbation and four types of mixed perturbation data.
Furthermore, we utilize a multi-level data augmentation method (character,
word, and sentence levels) to construct a candidate data pool, and carefully
design two automatic task demonstration construction strategies
(instance-level and entity-level) with various prompt templates. Our aim is to
assess how well various robustness methods of LLMs perform in real-world noisy
scenarios. The experiments have demonstrated that the current open-source LLMs
generally achieve limited perturbation robustness performance. Based on these
experimental observations, we make some forward-looking suggestions to fuel the
research in this direction.
"
Do Language Models Learn about Legal Entity Types during Pretraining?,Claire Barale,http://arxiv.org/pdf/2310.13092v1.pdf,2023-10-19,['cs.cl'],2310.13092v1.pdf,"  Language Models (LMs) have proven their ability to acquire diverse linguistic
knowledge during the pretraining phase, potentially serving as a valuable
source of incidental supervision for downstream tasks. However, there has been
limited research conducted on the retrieval of domain-specific knowledge, and
specifically legal knowledge. We propose to explore the task of Entity Typing,
serving as a proxy for evaluating legal knowledge as an essential aspect of
text comprehension, and a foundational task to numerous downstream legal NLP
applications. Through systematic evaluation and analysis with two types of
prompting (cloze sentences and QA-based templates), and to clarify the nature of
the acquired cues, we compare entities of diverse types and lengths (both
general and domain-specific), semantic and syntactic signals, and different LM
pretraining corpora (generic and legal-oriented) and architectures (encoder-only
BERT-based and decoder-only Llama2). We show that (1) Llama2
performs well on certain entities and exhibits potential for substantial
improvement with optimized prompt templates, (2) law-oriented LMs show
inconsistent performance, possibly due to variations in their training corpus,
(3) LMs demonstrate the ability to type entities even in the case of
multi-token entities, (4) all models struggle with entities belonging to
sub-domains of the law, and (5) Llama2 appears to frequently overlook syntactic
cues, a shortcoming less present in BERT-based architectures.
"
LlamaRec: Two-Stage Recommendation using Large Language Models for  Ranking,Zhenrui Yue,http://arxiv.org/pdf/2311.02089v1.pdf,2023-10-25,"['cs.ir', 'cs.ai', 'cs.cl']",2311.02089v1.pdf,"  Recently, large language models (LLMs) have exhibited significant progress in
language understanding and generation. By leveraging textual features,
customized LLMs are also applied for recommendation and demonstrate
improvements across diverse recommendation scenarios. Yet the majority of
existing methods perform training-free recommendation that heavily relies on
pretrained knowledge (e.g., movie recommendation). In addition, inference on
LLMs is slow due to autoregressive generation, rendering existing methods less
effective for real-time recommendation. As such, we propose a two-stage
framework using large language models for ranking-based recommendation
(LlamaRec). In particular, we use small-scale sequential recommenders to
retrieve candidates based on the user interaction history. Then, both history
and retrieved items are fed to the LLM in text via a carefully designed prompt
template. Instead of generating next-item titles, we adopt a verbalizer-based
approach that transforms output logits into probability distributions over the
candidate items. Therefore, the proposed LlamaRec can efficiently rank items
without generating long text. To validate the effectiveness of the proposed
framework, we compare against state-of-the-art baseline methods on benchmark
datasets. Our experimental results demonstrate the effectiveness of LlamaRec,
which consistently achieves superior results in both recommendation quality
and efficiency.
"
Prompting Multilingual Large Language Models to Generate Code-Mixed  Texts: The Case of South East Asian Languages,Zheng-Xin Yong,http://arxiv.org/pdf/2303.13592v4.pdf,2023-03-23,"['cs.cl', 'cs.ai']",2303.13592v4.pdf,"  While code-mixing is a common linguistic practice in many parts of the world,
collecting high-quality and low-cost code-mixed data remains a challenge for
natural language processing (NLP) research. The recent proliferation of Large
Language Models (LLMs) compels one to ask: how capable are these systems in
generating code-mixed data? In this paper, we explore prompting multilingual
LLMs in a zero-shot manner to generate code-mixed data for seven languages in
South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese,
Tamil, and Singlish. We find that publicly available multilingual
instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of
producing texts with phrases or clauses from different languages. ChatGPT
exhibits inconsistent capabilities in generating code-mixed texts, wherein its
performance varies depending on the prompt template and language pairing. For
instance, ChatGPT generates fluent and natural Singlish texts (an English-based
creole spoken in Singapore), but for the English-Tamil language pair, the system
mostly produces grammatically incorrect or semantically meaningless utterances.
Furthermore, it may erroneously introduce languages not specified in the
prompt. Based on our investigation, existing multilingual LLMs exhibit a wide
range of proficiency in code-mixed data generation for SEA languages. As such,
we advise against using LLMs in this context without extensive human checks.
"
"Reason for Future, Act for Now: A Principled Framework for Autonomous  LLM Agents with Provable Sample Efficiency",Zhihan Liu,http://arxiv.org/pdf/2309.17382v2.pdf,2023-09-29,"['cs.ai', 'cs.lg']",2309.17382v2.pdf,"  Large language models (LLMs) demonstrate impressive reasoning abilities, but
translating reasoning into actions in the real world remains challenging. In
particular, it remains unclear how to complete a given task provably within a
minimum number of interactions with the external environment, e.g., through an
internal mechanism of reasoning. To this end, we propose a principled framework
with provable regret guarantees to orchestrate reasoning and acting, which we
call ""reason for future, act for now"" (\texttt{RAFA}). Specifically, we design
a prompt template for reasoning that learns from the memory buffer and plans a
future trajectory over a long horizon (""reason for future""). At each step, the
LLM agent takes the initial action of the planned trajectory (""act for now""),
stores the collected feedback in the memory buffer, and reinvokes the reasoning
routine to replan the future trajectory from the new state.
  The key idea is to cast reasoning in LLMs as learning and planning in
Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt
LLMs to form an updated posterior of the unknown environment from the memory
buffer (learning) and generate an optimal trajectory for multiple future steps
that maximizes a value function (planning). The learning and planning
subroutines are performed in an ""in-context"" manner to emulate the actor-critic
update for MDPs. Our theoretical analysis proves that the novel combination of
long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In
particular, the regret bound highlights an intriguing interplay between the
prior knowledge obtained through pretraining and the uncertainty reduction
achieved by reasoning and acting. Our empirical validation shows that it
outperforms various existing frameworks and achieves nearly perfect scores on a
few benchmarks.
"
ClickPrompt: CTR Models are Strong Prompt Generators for Adapting  Language Models to CTR Prediction,Jianghao Lin,http://arxiv.org/pdf/2310.09234v2.pdf,2023-10-13,"['cs.ir', 'cs.ai']",2310.09234v2.pdf,"  Click-through rate (CTR) prediction has become increasingly indispensable for
various Internet applications. Traditional CTR models convert the multi-field
categorical data into ID features via one-hot encoding, and extract the
collaborative signals among features. Such a paradigm suffers from the problem
of semantic information loss. Another line of research explores the potential
of pretrained language models (PLMs) for CTR prediction by converting input
data into textual sentences through hard prompt templates. Although semantic
signals are preserved, they generally fail to capture the collaborative
information (e.g., feature interactions, pure ID features), not to mention the
unacceptable inference overhead brought by the huge model size. In this paper,
we aim to model both the semantic knowledge and collaborative knowledge for
accurate CTR estimation, and meanwhile address the inference inefficiency
issue. To benefit from both worlds and close their gaps, we propose a novel
model-agnostic framework (i.e., ClickPrompt), where we incorporate CTR models
to generate interaction-aware soft prompts for PLMs. We design a
prompt-augmented masked language modeling (PA-MLM) pretraining task, where the PLM
has to recover the masked tokens based on the language context, as well as the
soft prompts generated by the CTR model. The collaborative and semantic knowledge
from ID and textual features are explicitly aligned and allowed to interact via the
prompt interface. Then, we can either tune the CTR model with PLM for superior
performance, or solely tune the CTR model without PLM for inference efficiency.
Experiments on four real-world datasets validate the effectiveness of
ClickPrompt compared with existing baselines.
"
ALT: Towards Fine-grained Alignment between Language and CTR Models for  Click-Through Rate Prediction,Hangyu Wang,http://arxiv.org/pdf/2310.19453v1.pdf,2023-10-30,"['cs.ir', 'cs.ai']",2310.19453v1.pdf,"  Click-through rate (CTR) prediction serves as a core function module in
various personalized online services. According to the data modality and input
format, the models for CTR prediction can be mainly classified into two
categories. The first one is the traditional CTR models that take as inputs the
one-hot encoded ID features of tabular modality, which aims to capture the
collaborative signals via feature interaction modeling. The second category
takes as inputs the sentences of textual modality obtained by hard prompt
templates, where pretrained language models (PLMs) are adopted to extract the
semantic knowledge. These two lines of research generally focus on different
characteristics of the same input data (i.e., textual and tabular modalities),
forming a distinct complementary relationship with each other. Therefore, in
this paper, we propose to conduct fine-grained feature-level Alignment between
Language and CTR models (ALT) for CTR prediction. Apart from the common
CLIP-like instance-level contrastive learning, we further design a novel joint
reconstruction pretraining task for both masked language and tabular modeling.
Specifically, the masked data of one modality (i.e., tokens or features) has to
be recovered with the help of the other modality, which establishes the
feature-level interaction and alignment via sufficient mutual information
extraction between dual modalities. Moreover, we propose three different
finetuning strategies with the option to train the aligned language and CTR
models separately or jointly for downstream CTR prediction tasks, thus
accommodating the varying efficacy and efficiency requirements for industrial
applications. Extensive experiments on three real-world datasets demonstrate
that ALT outperforms SOTA baselines, and is highly compatible with various
language and CTR models.
"