*Auto-BG: The Board Game Concept Generator*

1. [Introduction]{.underline}

    a. [Goals Statement - design aid/improved user control over generation]{.underline}

**A Gentle Introduction to Auto-BG & Board Game Data**

**What is Auto-BG?**

How does a board game transform from an idea to physically sitting on your table?

This application attempts to augment one step, early in that journey, when the seeds of an idea combine and sprout into a holistic concept. By interpreting disparate mechanical and descriptive tags to identify a game concept, Auto-BG uses a custom pipeline of GPT-3 and T5 models to create a new description and proposed titles for a game that doesn't exist today. These descriptions offer designers-to-be alternatives to current concepts or seeds for future ones, and offer any user, hopefully, an entertaining thought experiment.

While ChatGPT broadly covers this application case by generating coherent descriptions from sentence-formed prompts, we believe this design niche is better served by a dedicated application leveraging domain-specific transfer learning and granular control through an extensive tag-prompt framework.

Before digging into the process of generating a board game concept, let's ground some key terms:

- *Mechanical* - The gameplay mechanics which, together, create a distinct ruleset for a given game. Represented here by a class of tags each capturing an individual mechanic such as "Worker Placement" or "Set Collection".

- *Cooperative* - Our application, and the dataset, singles out this mechanic as important enough to have its own tag class. As an alternative to the domain default of *Competitive* gaming, *Cooperative* represents both an individual mechanic and an entire design paradigm shift.

- *Descriptive* - In this context, the narrative, production, or family-connective elements of a game. This can include genre, categorical niches, or any relational tag not captured by the class of mechanical tags described above.

    a. [Orientation to the data - understanding how assembled tags define a game item]{.underline}

**Understanding the Data**

To train our models, we utilized a processed dataset of 20,769 ranked board games scraped from the board game database & forum [[BoardGameGeek.com]{.underline}](https://boardgamegeek.com/)^1^. Each game includes its name, its description, and feature tags drawn from a pool of 3,800 tags separated into five classes: Cooperative, Game Type, Category, Mechanic, and Family. When you select tags within Auto-BG, you choose and assign them within these distinct classes. To be ranked, a game must have received a lifetime minimum of thirty user ratings; non-standalone expansion and compilation game items have also been removed to reduce replication in the training data.

As previous studies^2^ identified a disproportionate bias toward English-language descriptions, our approach used a soft pass with langdetect^3^ to remove games identified as having non-English descriptions. We also implemented a check on all remaining titles to remove any that used non-English characters. In total, this purged 1,423 games, or ~6.4% of the data.
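
For illustration, here is a minimal sketch of that language filter; the column and file names are our assumptions, not the project's actual schema, and ASCII-only titles stand in for the non-English character check:

```python
# Hypothetical sketch of the soft language pass; column/file names assumed.
import pandas as pd
from langdetect import detect, LangDetectException

def is_english(text: str) -> bool:
    """Soft pass: keep a description only when langdetect labels it 'en'."""
    try:
        return detect(text) == "en"
    except LangDetectException:
        return False  # empty/non-linguistic strings can't be classified

def is_latin_title(title: str) -> bool:
    """Secondary check: reject titles containing non-English characters
    (approximated here as any non-ASCII character)."""
    return title.isascii()

games = pd.read_csv("bgg_games.csv")  # hypothetical raw scraper output
games = games[games["description"].apply(is_english)
              & games["name"].apply(is_latin_title)]
```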

The transformed data powering Auto-BG was originally scraped via the BGG XML API by @mshepard on GitLab^4^ and customized for The Impact of Crowdfunding on Board Games (2022)^5^ by N. Canu, A. Oldenkamp, and J. Ellis and Deconstructing Game Design (2022)^2^ by N. Canu, K. Chen, and R. Shetty. That work, and Auto-BG, are derivatively licensed under [[Creative Commons Attribution-NonCommercial-ShareAlike]{.underline}](http://creativecommons.org/licenses/by-nc-sa/3.0/). Our GitHub repository includes a Python script collating the processing steps required to turn a raw scraper output file into the final data constructs needed to run an instance of Auto-BG.

    b. [Ethical considerations in domain - inherited biases from both generative model and BGG framework, role of deep learning as supplement/replacement]{.underline}

**Ethical Considerations**

As a chaotician once said, "your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should"^6^ - but that's not what we're going to do. So before we continue, a few notes regarding potential biases and concerns when training or using Auto-BG.

First and foremost, Auto-BG inherits critical biases from the base GPT-3 Curie model. Extensive work exists documenting the model family's biases related to race^7^, gender^7,8^, religion^9^, and abstracted moral frameworks on right and wrong^10^. A key vector for producing biased output from these models is the conditioning context (i.e. user input) which prompts a response^11^; while these inherited biases can't be eliminated, Auto-BG attempts to reduce their impact on output text by strictly controlling the conditioning context, relating unknown input to pre-approved keys.

Additionally, Auto-BG has inherited significant bias in generating gender markers within descriptive text from the BoardGameGeek training data. As seen in figure 1, game descriptions include a disproportionate volume of male-associated gender markers in the source data. This transfers through Auto-BG's pipeline, resulting in output that, as in figure 2, retains a similar distribution. While outside the feasible scope of this project, future iterations of Auto-BG should include coreference cleaning on the training input to minimize inherited bias; this approach would require a robust review of the source data through the framework of coreference resolution to avoid further harms by making inferences that fail to acknowledge the complexities of gender within this data^12^.

Beyond these substantial issues, as with any large language model (LLM) project, we have to ask - are we harming human actors by deploying this tool? In 2023, discussion of the prospective gains and potential dangers of LLMs has broken out of the data science domain and represents a growing global conversation. Academics wrestle with the scope and limitations of GPT models^13^ while crossover publications debate topics such as the merits of ChatGPT as an author^14^ and which professions AI models might replace in the near future^15^.

With these concerns in mind, we designed Auto-BG as a strictly augmentative tool - a board game is more than its conceptual description, and while Auto-BG may suggest rule frameworks based on your input, it can't generate any of the production elements, such as full rulesets or art, needed to take the game from concept to table. In addition, we strongly encourage the use of Auto-BG as a starting point, not a ground truth generation. The model makes mistakes that a human actor may not; while we have implemented catches to discourage it from replicating existing board games, its GPT-3 trained core has an extensive knowledge of gaming in general. We recommend reviewing output text thoroughly before committing to your new board game \[maybe insert pictures of it generating these?\] *Age of Empires II: Age of Kings* or *Everquest^©^*.

    c. [Overall generator framework - birds eye graphic/discussion of pipeline]{.underline}

**What's in a pipeline?**

Auto-BG relies on multiple LLMs to generate a full game concept from your initial input. The entire process, from turning your tags into a prompt to presenting your finished concept, is embedded in a pipeline, or a framework of related code running sequentially, that enables these distinct steps to work in unison. It all starts with an idea...![](media/image1.png){width="0.2760422134733158in" height="0.44166666666666665in"}

1. The most complex participant in Auto-BG's pipeline (that's you) translates different elements of a vague concept for a game into defined tags through the Auto-BG interface.

2. An input parser collates the tags, turning them into a game vector (a binary, or one-hot, representation of a game where all selected tags are valued at 1). This vector activates the internal keys representing each tag selected in step 1 and passes them as a prompt to Auto-BG's GPT-3 model.

3. A generator function calls the model through the OpenAI API, returning a response using selected parameters for the input prompt. This response is decoded and lightly cleaned to remove trailing sentences for readability.

4. The cleaned output is passed downstream to a locally trained T5 sequence-to-sequence model, which uses the output as a prompt to generate candidate titles.

5. Because the GPT-3 model has learned that games should include a title, it may already include an artificial title. The title generator runs once and, if any candidate titles are identified in the text, strips them out for a placeholder before running again.

6. A title selector rejoins the initial and secondary generations, removes duplicates, validates against existing game titles, then scores the cosine similarity of each title with its reference input. It attaches the highest scoring title to the output and fills all placeholders, storing backups of the unchosen titles (a minimal sketch of this selection step follows this list).

7. The Auto-BG user interface returns the generated description with its title for you to give feedback on, review alternate titles, or save for future use.
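
As a rough illustration of step 6, here is a hedged sketch of scoring candidate titles against the generated description with spaCy similarity; the model name, function shape, and the validation set are our assumptions, not Auto-BG's exact implementation:

```python
# Sketch of step 6; requires a spaCy model with word vectors,
# e.g. `python -m spacy download en_core_web_md`.
import spacy

nlp = spacy.load("en_core_web_md")

def select_title(candidates: list[str], description: str,
                 existing_titles: set[str]) -> str:
    """Dedupe candidates, drop known game titles, and return the candidate
    most similar (cosine over averaged vectors) to the description."""
    pool = {c.strip() for c in candidates} - existing_titles
    desc_doc = nlp(description)
    return max(pool, key=lambda title: nlp(title).similarity(desc_doc))

best = select_title(["Castle Builders", "Catan"],
                    "A game of medieval construction and trade.",
                    existing_titles={"Catan"})
```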

At the end of the pipeline, you have a brand-new game concept customized by your selected tags! And if you're not happy with the output, running the generator again will produce a new description given the same tags.

To understand Auto-BG better, let's go through each step in detail, looking at how an example prompt is transformed into its final output.

2. [Pipeline Ride Along - How does "your" input become a description/title pair?]{.underline}

**The Inner Workings of Auto-BG w/Examples**

    a. *The Input Step*

***Interpreting User Input***

    i. [User input as a "game" & input design - Building off 1b, cover design choices like dividing feature classes, uncaptured features in training, bridge to input selection.]{.underline}

*How do individual tags add up to a game profile?*

This key question guided every aspect of Auto-BG's design principles. The complex series of transformations from input to generation required understanding both how a human reader would interact with and interpret these tags and how the GPT-3 LLM at the heart of Auto-BG could best learn them.

Auto-BG inherits tag classes from BoardGameGeek; if you go to any game page, you'll see a collection of type, category, mechanic, and family tags on the sidebar. We decided Auto-BG 1.0 would focus explicitly on these four primary classes as they translated coherently to formatting for a GPT-3 prompt training structure, could be universally applied to an unknown game, and don't rely on post-design information like user recommendations or publisher data.

*What happens when you choose tags in Auto-BG?*

Inside Auto-BG, each tag has a hidden class prefix that denotes it as a member of that class and tracks its affiliation throughout the pipeline. This means that when you select a new tag, even if it's semantically similar to an overlapping tag in a different class, Auto-BG remembers its associations; with this approach, Auto-BG effectively handles unknown inputs by scanning the associated class and matching the unknown tag to its closest existing sibling.

By grouping all of your selected tags, Auto-BG creates an approximate profile of a game to generate. When generating your new concept, Auto-BG tries to infer additional features based on the tags provided. Some features, like player count, age rating, and playtime, may still show up in your concept if Auto-BG thinks they should be included in the text. By changing even a single tag, Auto-BG will try to accommodate that new design feature and generate a different concept!

With this sensitivity to the prompt selection, users must understand how their choices impact downstream generation. Auto-BG needed an intuitive user experience with coherent tutorials throughout to streamline this process, alongside some targeted limits on profile creation.

    ii. [UI design considerations - Making tags easily accessible w/volume of options, input tutorial design and approach to creating min/max tag caps.]{.underline}

### UI Input Discussion

    iii. [Vectorizing user inputs - text key to vector transformation and syncing with the ground truth key list. Approach to handling unknown user input in tag selection.]{.underline}

        1. Converting to final input - Need for a consistent prompt structure and design considerations to improve GPT-3 prompt interpretation, and discuss autonomous generation from tags vs free prompt RE user's end goal. <- joining these together for formatting.

Once you've created your new game profile, Auto-BG translates it into a format that its LLM can understand for text generation. Ultimately, Auto-BG converts your profile tags following OpenAI's recommended best practices for conditional generation prompts^16,17^. This results in a text prompt version of your tag list with each tag period-stopped like so:

### example here

Fine-tune training has taught Auto-BG's model that this specific format of text prompt should translate to a descriptive paragraph of the associated tags. To create it, Auto-BG passes your tags through an input manager that performs several steps in sequence before sending a polished prompt to the generator.

First, Auto-BG establishes a ground truth key dictionary of all existing tags. This format allows for iterative updating from future data: as new tags appear on BoardGameGeek, Auto-BG can update to include them. This Python dictionary includes a key for every tag with the equivalent value set to 0, tracking that it does not appear. Eventually, it will change the values for your selected tags to 1, telling Auto-BG that they appear and should be added to the prompt, but first, it tries to correct for unknown inputs.
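
A minimal sketch of that key dictionary, with made-up tags standing in for the full BoardGameGeek tag list:

```python
# Illustrative only: Auto-BG's real dictionary covers every BGG tag.
known_tags = ["mechanic_worker_placement", "mechanic_set_collection",
              "category_fantasy", "family_animals"]

key_dict = {tag: 0 for tag in known_tags}   # 0 = tag absent from this profile

for tag in ["mechanic_worker_placement", "category_fantasy"]:  # user selection
    key_dict[tag] = 1                        # 1 = include in the prompt

prompt_keys = [tag for tag, present in key_dict.items() if present]
```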

Each tag not recognized as present in the key dictionary moves to a bucket matching its attached prefix. With these marked as unknown, the input manager implements a within-class comparison between the unknown tag and all known tags that share its prefix. Auto-BG estimates semantic relevance between tags through token cosine similarity^18^, comparing the unknown tag to each candidate with SpaCy's token similarity method^19^. Once it evaluates this for all candidates, it adds the highest scoring option for each unknown tag to the remaining input tags.
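
A minimal sketch of this within-class matching, with hypothetical prefixed tag names, and doc-level spaCy similarity standing in for the token-level comparison:

```python
# Assumes a vector-bearing spaCy model; tag names/prefixes are made up.
import spacy

nlp = spacy.load("en_core_web_md")

def closest_sibling(unknown_tag: str, known_tags: list[str]) -> str:
    """Compare an unknown tag against known tags sharing its class prefix
    and return the highest-scoring (most similar) sibling."""
    prefix, _, label = unknown_tag.partition("_")
    siblings = [t for t in known_tags if t.startswith(prefix + "_")]
    unknown_doc = nlp(label.replace("_", " "))
    return max(siblings,
               key=lambda t: nlp(t.partition("_")[2].replace("_", " "))
                             .similarity(unknown_doc))

# e.g. "mechanic_worker_positioning" -> "mechanic_worker_placement"
```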

Auto-BG's underlying model inherits the characteristic sensitivity of LLMs to prompt design; prompt design often requires significant human effort, and many approaches have been suggested to address this challenge. Our design philosophy focused on improving the quality and control of prompts with goals of reliability, creativity, and coherence to input, specifically inspired by concepts of metaprompting^20^ and internal prompt generation within the LLM^21^. Auto-BG leverages the existing human tagging work done within the source data to provide the generator model with a controllable abstracted prompt instead of a potentially complex natural-language sentence-structured prompt.

To pass the now cleaned input to the generator, Auto-BG's input manager converts the pooled tags into a text string as above, with each tag key turned into a period-stopped sentence. The generator sends this prompt to a remote model through the OpenAI API and returns a matching description!
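
A hedged sketch of that last hop, written against the 2023-era `openai` Completions API; the fine-tuned model id, tag strings, and generation parameters are placeholders rather than Auto-BG's tuned values:

```python
# Sketch only: model id and parameters below are assumptions.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def build_prompt(active_tags: list[str]) -> str:
    """Each tag key becomes a period-stopped sentence in the prompt."""
    return " ".join(tag.rstrip(".") + "." for tag in active_tags)

prompt = build_prompt(["Game Type: Strategy Games",
                       "Mechanic: Worker Placement"])
response = openai.Completion.create(
    model="curie:ft-personal-2023-03-01",  # hypothetical fine-tuned Curie
    prompt=prompt,
    max_tokens=300,
    temperature=0.8,
)
description = response["choices"][0]["text"].strip()
```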

    b. *The Generation Step*

***Generating a Description***

    i. Approach to model selection - comparative performance of GPT-3 models vs cost, tradeoff with controllable output and quality.

    ii. Training methodology - literature discussion on optimizing fine-tune LLM training for generation relative to other tasks (classification in particular).

    iii. Initial output generation - limited key parameters in API generation, tuning strategy, refining with validation set.

        1. Challenges w/GPT-3 - Discuss sensitivity to prompt construction and changes in keys & difficulty of low-volume references for keys w/source data. Outside knowledge polluting output (i.e. using video games as names in text). Testing for prompt memorization (increasing chaos in output for learned keysets).

    iv. Output processing - Increased text cleanup for user output tasks, approaches to mitigating potentially subpar generation from GPT-3 including embedded titles and excessive referencing from BGG (incorrect publishers/designers included in text), challenge of removing references again.

    c. *The Title Step*

***A Fitting Title***

    i. Text to title relationship - Domain considerations w/title as a product, distinct relationship to train on. Role of title in text as a key point when running models in sequence.

To complete the game concept, Auto-BG provides a selection of titles chosen to fit the newly generated description. But what defines a good title? It depends on the domain and it depends on the product! A sense of tension may fit a thriller novel^22^ and sticking to positive associations enhances commercial products^23^, but where do board games fit in? To create titles that both fit their descriptions and match domain trends as a whole, we needed a model that could learn the unique relationship between title and description in board gaming.

    ii. Transformation approach - seq2seq advantages for this task & model selection, advantages of extending transfer learning from domain to domain (news headlines to game titles)

While our generator uses the prompt-based GPT paradigm, we decided to take a different approach for title generation inside Auto-BG. T5 models fit within a unified framework for transfer learning known as sequence-to-sequence or text-to-text^24^; at their core, they're encoder-decoder models that treat every NLP operation as a translation task. This means that given text as input, the model will attempt to output a predicted target that best fits the training data.

Transfer learning, in machine learning, takes a model trained on one task or domain and performs additional training to adapt it to a different task. The total corpus of descriptions for BGG, approximately 20,000 descriptions, is inadequate for fresh training of a generative NLP model; instead, Auto-BG leverages two stages of upstream transfer learning through HuggingFace's Transformers library. The final implementation extends a model^25^ already fine-tuned from t5-base on an additional 500,000 news articles.
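
A minimal sketch of a single title pass with that headline model; the `headline:` prefix follows the model card's example, and the generation parameters are illustrative defaults rather than Auto-BG's tuned values:

```python
# Sketch only: generation parameters are assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "Michau/t5-base-en-generate-headline"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def generate_title(description: str) -> str:
    """Seq2seq 'translation': description text in, candidate title out."""
    inputs = tokenizer("headline: " + description, return_tensors="pt",
                       max_length=512, truncation=True)
    output_ids = model.generate(inputs["input_ids"], max_length=24,
                                num_beams=4, early_stopping=True)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```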

        1. Relative model performance - approach to model selection based on above including comparative metrics. Discuss aligning metrics w/goal of tool + need for human review.

To scope the best generator for Auto-BG, we trained a sequence of models on both t5-base^26^ and the headline-trained model. Beyond training baseline models, we looked toward work by Tay et al.^27^ on scaling and fine-tune training for transformers to guide our selection of training parameters; the final round of testing utilized a higher learning rate of 0.001 while introducing weight decay in the optimizer.
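
For concreteness, a hedged sketch of that final configuration expressed as HuggingFace `TrainingArguments`; only the learning rate and the presence of weight decay come from the text above, while the remaining values (and the decay coefficient itself) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./t5-bgg-titles",    # hypothetical output path
    learning_rate=1e-3,              # the higher rate noted above
    weight_decay=0.01,               # decay introduced in the optimizer (value assumed)
    per_device_train_batch_size=8,   # assumed
    num_train_epochs=3,              # assumed
)
```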

With several models trained

    iii. Low-cost iterative generation - two-pass competitive scoring on multi-output, using the title generator to select and replace low-quality titles then scoring the maximum pool of two generations.

        1. Scoring approach - title output cleanup and scoring final title to body text semantic similarity.

    d. *Final Output*

    i. Rerunning in real-time - approaches to adjusting output including updating text references, finding text generator default "profiles" through hyperparameter testing, and implementation of update processes in the UI.

    ii. The value of user feedback - considerations for future performance w/tuning generation settings, content reporting for title cleanup or problematic text content.

3. [Discussion & Conclusion]{.underline}

    a. What's the point? - Practical design tool / Interpreting how "mechanical" changes interact with language descriptions / Extending user interactivity beyond prompts

    b. Discuss methodology - user interactivity through prompt control, did it work & other potential approaches to same application (reinforcement, different prompt structures)

    c. Alternative applications - fine-tune human-written desc (training on extended prompt + more deterministic output), rules generation?

    d. Future extensions/limitations - Expand on language scope, live data + training, discarded feature classes / discussion on approach not including these currently

    e. What next? - Encourage readers to experiment with tool + give feedback in app

4. [Extras]{.underline}

    a. Work Statement + Appendices + Citations

**References**

1. "BoardGameGeek." Accessed March 19, 2023. [[https://boardgamegeek.com/]{.underline}](https://boardgamegeek.com/).

2. Canu, Nicholas, Kuan Chen, and Rhea Shetty. "Deconstructing Game Design." Jupyter Notebook, October 20, 2022. [[https://github.com/canunj/deconstructing_games]{.underline}](https://github.com/canunj/deconstructing_games).

3. Danilák, Michal. "Langdetect." Python, March 15, 2023. [[https://github.com/Mimino666/langdetect]{.underline}](https://github.com/Mimino666/langdetect).

4. GitLab. "Recommend.Games / Board Game Scraper · GitLab," March 10, 2023. [[https://gitlab.com/recommend.games/board-game-scraper]{.underline}](https://gitlab.com/recommend.games/board-game-scraper).

5. Canu, Nicholas, Jonathan Ellis, and Adam Oldenkamp. "The Impact of Crowdfunding on Board Games." Jupyter Notebook, May 20, 2022. [[https://github.com/canunj/BGG_KS_Analysis]{.underline}](https://github.com/canunj/BGG_KS_Analysis/blob/92452d7e7b174bf45763e469b1ed4ce61a84b7ba/The%20Impact%20of%20Crowdfunding%20on%20Board%20Games.pdf).

6. Spielberg, Steven, dir. *Jurassic Park*. 1993; Universal City, CA: Universal Studios, 2022. UHD Blu-ray.

7. Chiu, Ke-Li, Annie Collins, and Rohan Alexander. "Detecting Hate Speech with GPT-3." arXiv, March 24, 2022. [[https://doi.org/10.48550/arXiv.2103.12407]{.underline}](https://doi.org/10.48550/arXiv.2103.12407).

8. Lucy, Li, and David Bamman. "Gender and Representation Bias in GPT-3 Generated Stories." In *Proceedings of the Third Workshop on Narrative Understanding*, 48--55. Virtual: Association for Computational Linguistics, 2021. [[https://doi.org/10.18653/v1/2021.nuse-1.5]{.underline}](https://doi.org/10.18653/v1/2021.nuse-1.5).

9. Abid, Abubakar, Maheen Farooqi, and James Zou. "Persistent Anti-Muslim Bias in Large Language Models." In *Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society*, 298--306. AIES '21. New York, NY, USA: Association for Computing Machinery, 2021. [[https://doi.org/10.1145/3461702.3462624]{.underline}](https://doi.org/10.1145/3461702.3462624).

10. Schramowski, Patrick, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf, and Kristian Kersting. "Large Pre-Trained Language Models Contain Human-like Biases of What Is Right and Wrong to Do." *Nature Machine Intelligence* 4, no. 3 (March 2022): 258--68. [[https://doi.org/10.1038/s42256-022-00458-8]{.underline}](https://doi.org/10.1038/s42256-022-00458-8).

11. Huang, Po-Sen, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. "Reducing Sentiment Bias in Language Models via Counterfactual Evaluation." arXiv, October 8, 2020. [[https://doi.org/10.48550/arXiv.1911.03064]{.underline}](https://doi.org/10.48550/arXiv.1911.03064).

12. Cao, Yang Trista, and Hal Daumé III. "Toward Gender-Inclusive Coreference Resolution." In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, 4568--95, 2020. [[https://doi.org/10.18653/v1/2020.acl-main.418]{.underline}](https://doi.org/10.18653/v1/2020.acl-main.418).

13. Floridi, Luciano, and Massimo Chiriatti. "GPT-3: Its Nature, Scope, Limits, and Consequences." *Minds and Machines* 30, no. 4 (December 1, 2020): 681--94. [[https://doi.org/10.1007/s11023-020-09548-1]{.underline}](https://doi.org/10.1007/s11023-020-09548-1).

14. Thorp, H. Holden. "ChatGPT Is Fun, but Not an Author." *Science* 379, no. 6630 (January 27, 2023): 313. [[https://doi.org/10.1126/science.adg7879]{.underline}](https://doi.org/10.1126/science.adg7879).

15. Mok, Aaron, and Jacob Zinkula. "ChatGPT May Be Coming for Our Jobs. Here Are the 10 Roles That AI Is Most Likely to Replace." Business Insider. Accessed March 19, 2023. [[https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02]{.underline}](https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02).

16. "Fine-Tuning." Accessed March 23, 2023. [[https://platform.openai.com/docs/guides/fine-tuning/conditional-generation]{.underline}](https://platform.openai.com/docs/guides/fine-tuning/conditional-generation).

17. "Best Practices for Prompt Engineering with OpenAI API | OpenAI Help Center." Accessed March 26, 2023. [[https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api]{.underline}](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api).

18. Gunawan, D., C. A. Sembiring, and M. A. Budiman. "The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents." *Journal of Physics: Conference Series* 978, no. 1 (March 2018): 012120. [[https://doi.org/10.1088/1742-6596/978/1/012120]{.underline}](https://doi.org/10.1088/1742-6596/978/1/012120).

19. "Token · SpaCy API Documentation." Accessed March 26, 2023. [[https://spacy.io/api/token#similarity]{.underline}](https://spacy.io/api/token#similarity).

20. Reynolds, Laria, and Kyle McDonell. "Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm." arXiv, February 15, 2021. [[https://doi.org/10.48550/arXiv.2102.07350]{.underline}](https://doi.org/10.48550/arXiv.2102.07350).

21. Zhou, Yongchao, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. "Large Language Models Are Human-Level Prompt Engineers." arXiv, March 10, 2023. [[https://doi.org/10.48550/arXiv.2211.01910]{.underline}](https://doi.org/10.48550/arXiv.2211.01910).

22. MasterClass. "How to Write a Book Title in 7 Tips: Create the Best Book Title - 2023." Accessed March 30, 2023. [[https://www.masterclass.com/articles/how-to-write-a-book-title-in-7-tips-create-the-best-book-title]{.underline}](https://www.masterclass.com/articles/how-to-write-a-book-title-in-7-tips-create-the-best-book-title).

23. Qualtrics. "How to Find the Perfect Product Name in 2023." Accessed March 30, 2023. [[https://www.qualtrics.com/experience-management/product/product-naming/]{.underline}](https://www.qualtrics.com/experience-management/product/product-naming/).

24. Raffel, Colin, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer." arXiv, July 28, 2020. [[http://arxiv.org/abs/1910.10683]{.underline}](http://arxiv.org/abs/1910.10683).

25. "Michau/T5-Base-En-Generate-Headline · Hugging Face." Accessed March 30, 2023. [[https://huggingface.co/Michau/t5-base-en-generate-headline]{.underline}](https://huggingface.co/Michau/t5-base-en-generate-headline).

26. "T5-Base · Hugging Face," November 3, 2022. [[https://huggingface.co/t5-base]{.underline}](https://huggingface.co/t5-base).

27. Tay, Yi, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. "Scale Efficiently: Insights from Pre-Training and Fine-Tuning Transformers." arXiv, January 30, 2022. [[https://doi.org/10.48550/arXiv.2109.10686]{.underline}](https://doi.org/10.48550/arXiv.2109.10686).