arxiv:2401.17788

A Prompt-Engineered Large Language Model, Deep Learning Workflow for Materials Classification

Published on Jan 31, 2024

Abstract

A general approach combining LLMs, prompt engineering, and deep learning significantly improves prediction accuracy in materials classification, especially with sparse datasets.

AI-generated summary

Large language models (LLMs) have demonstrated rapid progress across a wide array of domains. Owing to their very large number of parameters and vast training data, these models inherently encompass an expansive and comprehensive materials knowledge base, far exceeding the capabilities of any individual researcher. Nonetheless, devising methods to harness the knowledge embedded within LLMs for the design and discovery of novel materials remains a formidable challenge. We introduce a general approach to materials classification problems that incorporates LLMs, prompt engineering, and deep learning. Using a dataset of metallic glasses as a case study, our methodology achieved an improvement of up to 463% in prediction accuracy compared to conventional classification models. These findings underscore the potential of leveraging textual knowledge generated by LLMs for materials science, especially in the common situation where datasets are sparse, thereby promoting innovation in materials discovery and design.
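
The page does not include code, but the summary describes a pipeline of prompt-engineered LLM queries feeding a deep-learning classifier. The following is a minimal Python sketch of how such a workflow could be wired together, assuming a placeholder LLM call (query_llm, hypothetical here), TF-IDF text featurization, and a small neural-network classifier; these are illustrative choices, not the authors' implementation.

# Minimal sketch (not the authors' code): prompt an LLM for a textual
# description of each alloy, featurize the text, and train a classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

def query_llm(composition: str) -> str:
    # Hypothetical placeholder for a prompt-engineered LLM call that returns
    # a knowledge-rich description (mixing enthalpy, atomic size mismatch,
    # known glass-forming behaviour). Swap in a real LLM API in practice.
    return (f"Alloy {composition}: large atomic size mismatch, negative "
            f"mixing enthalpy, reported glass-forming tendency.")

# Toy labelled data: alloy compositions and a binary glass-forming label.
compositions = ["Zr41Ti14Cu12Ni10Be23", "Cu50Zr50", "Fe80B20", "Al90Sm10"]
labels = np.array([1, 1, 1, 0])

# 1) Generate text descriptions with the (placeholder) LLM.
texts = [query_llm(c) for c in compositions]

# 2) Turn the generated text into numerical features.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts).toarray()

# 3) Train a small neural network on the text-derived features.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, labels)

# Classify a new alloy by running it through the same pipeline.
x_new = vectorizer.transform([query_llm("Pd40Ni40P20")]).toarray()
print("predicted glass former:", bool(clf.predict(x_new)[0]))

In the paper's setting, the value presumably comes from the LLM-generated descriptions injecting background materials knowledge that a purely numerical model trained on a sparse dataset would not see.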
