Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
Paper Link: https://arxiv.org/abs/2406.11736
Code Repo: https://github.com/xufangzhi/ENVISIONS
The self-training process is based on the LLaMA2-Chat model series and powered by ENVISIONS. The work is still under review.
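As a quick usage illustration, below is a minimal sketch of loading a LLaMA2-Chat checkpoint with Hugging Face Transformers and querying it. The checkpoint name is a placeholder (the public base model), not the released ENVISIONS self-trained checkpoint; substitute the actual model id.

```python
# Minimal usage sketch (assumption: a standard Transformers causal-LM checkpoint).
# "meta-llama/Llama-2-7b-chat-hf" is a placeholder; swap in the ENVISIONS
# self-trained checkpoint once released.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Click the button labelled 'Submit'."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```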
The prompt used for the web-navigation agent task is as follows. You are required to navigate the web. To accomplish the task, use the methods in the Agent class to generate actions, with the following functions:
type(characters: str): Type a string via the keyboard.
click_xpath(xpath: str): Click an HTML element with a valid XPath.
press(key_type: str): Press a key on the keyboard (enter, space, arrowleft, arrowright, backspace, arrowup, arrowdown, command+a, command+c, command+v).
click_option(xpath: str): Click an option HTML element in a list with a valid XPath.
movemouse(xpath: str): Move the mouse cursor on an HTML element with a valid XPath.
The observation is: <observation>
The action is:
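To make the action interface concrete, here is a minimal, hypothetical sketch of an Agent class exposing the five methods listed above. In this illustration each call simply records the action as a string for a downstream executor to replay; it is not the actual ENVISIONS implementation, which drives a real browser environment.

```python
# Illustrative stand-in for the Agent action interface described in the prompt.
# Assumption: each method serializes the action instead of executing it directly.

class Agent:
    def __init__(self):
        self.actions = []  # ordered list of serialized actions

    def type(self, characters: str) -> None:
        """Type a string via the keyboard."""
        self.actions.append(f'type("{characters}")')

    def click_xpath(self, xpath: str) -> None:
        """Click an HTML element with a valid XPath."""
        self.actions.append(f'click_xpath("{xpath}")')

    def press(self, key_type: str) -> None:
        """Press a key on the keyboard (e.g. enter, space, arrowleft)."""
        self.actions.append(f'press("{key_type}")')

    def click_option(self, xpath: str) -> None:
        """Click an option HTML element in a list with a valid XPath."""
        self.actions.append(f'click_option("{xpath}")')

    def movemouse(self, xpath: str) -> None:
        """Move the mouse cursor onto an HTML element with a valid XPath."""
        self.actions.append(f'movemouse("{xpath}")')


if __name__ == "__main__":
    # Example: fill a search box and submit the query.
    agent = Agent()
    agent.click_xpath('//input[@id="search"]')
    agent.type("neural-symbolic self-training")
    agent.press("enter")
    print("\n".join(agent.actions))
```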
If you find this work helpful, please cite the paper:
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}