jackkuo committed on
Commit
4478090
·
verified ·
1 Parent(s): 76e5e9f

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. -NE5T4oBgHgl3EQfRg5P/vector_store/index.faiss +3 -0
  2. -NE5T4oBgHgl3EQfRg5P/vector_store/index.pkl +3 -0
  3. -dFLT4oBgHgl3EQfvC_J/content/tmp_files/2301.12158v1.pdf.txt +710 -0
  4. -dFLT4oBgHgl3EQfvC_J/content/tmp_files/load_file.txt +0 -0
  5. .gitattributes +75 -0
  6. 09FKT4oBgHgl3EQfOS13/vector_store/index.faiss +3 -0
  7. 0dE0T4oBgHgl3EQfuAFm/content/2301.02599v1.pdf +3 -0
  8. 0dE0T4oBgHgl3EQfuAFm/vector_store/index.faiss +3 -0
  9. 0dE0T4oBgHgl3EQfuAFm/vector_store/index.pkl +3 -0
  10. 0tAzT4oBgHgl3EQftv2j/vector_store/index.faiss +3 -0
  11. 0tE0T4oBgHgl3EQfdQB7/content/tmp_files/2301.02374v1.pdf.txt +0 -0
  12. 0tE0T4oBgHgl3EQfdQB7/content/tmp_files/load_file.txt +0 -0
  13. 1dAzT4oBgHgl3EQfDfo-/content/2301.00976v1.pdf +3 -0
  14. 1dAzT4oBgHgl3EQfDfo-/vector_store/index.pkl +3 -0
  15. 1tAzT4oBgHgl3EQfRfvj/content/tmp_files/2301.01218v1.pdf.txt +1191 -0
  16. 1tAzT4oBgHgl3EQfRfvj/content/tmp_files/load_file.txt +0 -0
  17. 29FKT4oBgHgl3EQfQS0h/content/2301.11766v1.pdf +3 -0
  18. 29FKT4oBgHgl3EQfQS0h/vector_store/index.faiss +3 -0
  19. 29FST4oBgHgl3EQfYTgs/content/2301.13787v1.pdf +3 -0
  20. 29FST4oBgHgl3EQfYTgs/vector_store/index.pkl +3 -0
  21. 2dE2T4oBgHgl3EQf5gh4/content/tmp_files/2301.04191v1.pdf.txt +0 -0
  22. 2dE2T4oBgHgl3EQf5gh4/content/tmp_files/load_file.txt +0 -0
  23. 3NE0T4oBgHgl3EQfuwHF/vector_store/index.faiss +3 -0
  24. 3dFQT4oBgHgl3EQf3TZi/content/2301.13427v1.pdf +3 -0
  25. 3dFQT4oBgHgl3EQf3TZi/vector_store/index.pkl +3 -0
  26. 4tE4T4oBgHgl3EQf1A0X/vector_store/index.pkl +3 -0
  27. 5tAzT4oBgHgl3EQfu_3x/content/2301.01701v1.pdf +3 -0
  28. 5tAzT4oBgHgl3EQfu_3x/vector_store/index.faiss +3 -0
  29. 6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf +3 -0
  30. 6tFKT4oBgHgl3EQfTy2d/vector_store/index.faiss +3 -0
  31. 6tFKT4oBgHgl3EQfTy2d/vector_store/index.pkl +3 -0
  32. 79A0T4oBgHgl3EQfOf-b/content/2301.02162v1.pdf +3 -0
  33. 79A0T4oBgHgl3EQfOf-b/vector_store/index.pkl +3 -0
  34. 7NE3T4oBgHgl3EQfqApm/content/2301.04647v1.pdf +3 -0
  35. 7NE3T4oBgHgl3EQfqApm/vector_store/index.faiss +3 -0
  36. 8tE2T4oBgHgl3EQf8Qgm/vector_store/index.faiss +3 -0
  37. 8tFRT4oBgHgl3EQfpzcC/vector_store/index.pkl +3 -0
  38. 99E1T4oBgHgl3EQf8QUL/content/tmp_files/2301.03542v1.pdf.txt +0 -0
  39. 99E1T4oBgHgl3EQf8QUL/content/tmp_files/load_file.txt +0 -0
  40. A9E1T4oBgHgl3EQfpAX1/content/2301.03328v1.pdf +3 -0
  41. A9E1T4oBgHgl3EQfpAX1/vector_store/index.faiss +3 -0
  42. A9E1T4oBgHgl3EQfpAX1/vector_store/index.pkl +3 -0
  43. ANFIT4oBgHgl3EQf-iyR/content/2301.11411v1.pdf +3 -0
  44. AtE1T4oBgHgl3EQf9Ab6/content/2301.03553v1.pdf +3 -0
  45. AtE1T4oBgHgl3EQf9Ab6/vector_store/index.faiss +3 -0
  46. AtE4T4oBgHgl3EQfEwyK/vector_store/index.faiss +3 -0
  47. AtFIT4oBgHgl3EQf_CxR/content/tmp_files/2301.11413v1.pdf.txt +1250 -0
  48. AtFIT4oBgHgl3EQf_CxR/content/tmp_files/load_file.txt +0 -0
  49. B9E2T4oBgHgl3EQf8wmh/content/tmp_files/2301.04222v1.pdf.txt +0 -0
  50. B9E2T4oBgHgl3EQf8wmh/content/tmp_files/load_file.txt +0 -0
-NE5T4oBgHgl3EQfRg5P/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e119ff0c9f5b4d8df29f3c66ae51e85d31396e300b80f02b31d54153b2ae3fd
3
+ size 7536685
-NE5T4oBgHgl3EQfRg5P/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1123a7705717a927871e5d08f9ed807928b1383a4652ac1c750da5e5034fa177
3
+ size 295665
-dFLT4oBgHgl3EQfvC_J/content/tmp_files/2301.12158v1.pdf.txt ADDED
@@ -0,0 +1,710 @@
1
+ The AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI)
2
+ A system for Human-AI collaboration for Online Customer Support
3
+ Debayan Banerjee*
4
+ Mathis Poser*
5
+ Christina Wiethof*
6
+ Varun Shankar Subramanian
7
+ Richard Paucar
8
+ Eva A. C. Bittner
9
+ Chris Biemann
10
+ Universität Hamburg, Hamburg, Germany
11
+ {debayan.banerjee,mathis.poser,christina.wiethof,eva.bittner,chris.biemann}@uni-
12
+ hamburg.de,{varunshankar55,rfpaucar}@gmail.com
13
+ Abstract
14
+ AI-enabled chat bots have recently been put to use to answer
+ customer service queries; however, users commonly report
+ that bots lack a personal touch and are often unable to
+ understand the real intent of the user's question. To this end,
18
+ it is desirable to have human involvement in the customer
19
+ servicing process. In this work, we present a system where
20
+ a human support agent collaborates in real-time with an AI
21
+ agent to satisfactorily answer customer queries. We describe
22
+ the user interaction elements of the solution, along with the
23
+ machine learning techniques involved in the AI agent.
24
+ Introduction
25
+ In the pursuit of operational efficiency, companies across
26
+ the globe have been deploying automation technology aided
27
+ by Artificial Intelligence (AI) for Online Customer Support
28
+ (OCS) use cases 1. With the explosive growth of social me-
29
+ dia usage, incoming customer queries have grown exponen-
30
+ tially and to handle this growth, the use of proper technology
31
+ is critical. Some estimates say that by the year 2025, 95%
32
+ of all customer interactions will be processed in some form
33
+ by AI 2. However, AI in its present state is not advanced
34
+ enough to completely replace human agents for most cus-
35
+ tomer support scenarios. Additionally, the complete replace-
36
+ ment of human workforce by AI is a topic of active ethical
37
+ and political debate. For these reasons the development of a
38
+ hybrid working environment is required, where both human
39
+ agents and AI agents can co-operate to satisfy OCS require-
40
+ ments.
41
+ In this work we briefly describe a web based user inter-
42
+ face that allows a customer to interact with a human sup-
43
+ port agent, where the human agent receives helpful sugges-
44
+ tions in parallel from an AI agent. In subsequent sections, we
45
+ elaborate further on the machine learning techniques used
46
+ for the AI agent.
47
+ Our present work is a part of a project which aims to find
48
+ ways of integrating AI agents into customer support based
49
+ workflows, with an aim of reducing workload of human
50
+ *These authors contributed equally.
51
+ 1https://www.gartner.com/smarterwithgartner/4-key-tech-
52
+ trends-in-customer-service-to-watch
53
+ 2https://servion.com/blog/what-emerging-technologies-future-
54
+ customer-experience/
55
+ agents. It is one of the primary goals of the project not to
56
+ entirely replace the human agent with AI, and instead find
57
+ productive means of co-existence of the two. As a part of
58
+ this project, an international volunteer-driven organisation,
59
+ which organises internships and projects for students across
60
+ the globe was involved. In this organisation, prospective stu-
61
+ dents participate in text based chat with human agents, and
62
+ typically enquire about available opportunities and how to
63
+ participate in them. The human agents in turn use their do-
64
+ main expertise to provide the necessary information to the
65
+ students.
66
+ All the students and human agents involved were resi-
67
+ dents of Germany and hence the conversations were car-
68
+ ried out in the German language. After collecting the con-
69
+ versations, an annotation phase was undertaken, where rele-
70
+ vant utterances of the conversation were annotated with the
71
+ corresponding FAQ IDs. When the conversations originally
72
+ took place, there was no singular FAQ database in existence.
73
+ For the purpose of this project, such a database was created.
74
+ This made it possible to annotate the utterances with relevant
75
+ FAQ IDs.
76
+ The goal of the dataset is to train an AI agent that can pas-
77
+ sively listen to the ongoing conversation and make relevant
78
+ suggestions visible only to the human agent, not to the stu-
79
+ dent. The human agent may then forward the suggested FAQ
80
+ answer to the student, or decide not to do so if the quality of
81
+ suggestion is poor. The eventual goal is for the human agent
82
+ to spend less time looking for the right answer in a Knowl-
83
+ edge Base, and instead offload this task to the AI agent.
84
+ Later, a web UI was constructed, as described in the Web
85
+ Interface section, that the human agent uses to interact with
86
+ the student. The student is not aware of the UI’s existence
87
+ and is operating on a separate chat platform. The AI agent
88
+ provides timely suggestions in this UI which is visible to the
89
+ human agent.
90
+ Our scenario differs from conventional Conversational
91
+ Question Answering (CQA) or Interactive Information Re-
92
+ trieval (IIR) where the user interacts directly with the AI
93
+ agent, and the AI agent is responsible for a response at each
94
+ turn. In our case, the AI agent is in a passive listening role.
95
+ It observes the ongoing conversation between two humans,
96
+ and makes suggestions that are only visible to the human
97
+ agent. Since the task of the AI agent is not just to suggest
98
+ relevant FAQs but also to remain silent when no relevant
99
+ 1
100
+ arXiv:2301.12158v1 [cs.AI] 28 Jan 2023
101
+
102
+ Figure 1: Screenshot of web based prototype
103
+ FAQ is to be suggested, we evaluate both of these aspects in
104
+ the evaluation section.
105
+ The user interface presented in this work has been pub-
106
+ lished before (Poser et al. 2022). The machine learning tech-
107
+ niques used to train the AI agent are yet to be published, and
108
+ hence a larger focus in this work is on the AI training aspect.
109
+ Web Interface
110
+ The web-based frontend in Figure 1 is labelled with cer-
111
+ tain design features (DF) to be explained shortly. The in-
112
+ terface was implemented with Bootstrap and ReactJS while
113
+ the backend API is hosted as a Python Flask app. The inter-
114
+ face greets human agents with an avatar named Charlie that
115
+ presents a brief usage explanation (DF1). In addition, setting
116
+ options for AI support and learning behavior are provided
117
+ (DF2). The integrated chat window is based on the open-
118
+ source chat framework Rocket Chat. The backend generates
119
+ a ranked list of FAQ suggestions based on ML techniques to
120
+ be described later. In the frontend, two FAQ items - includ-
121
+ ing theme and accuracy in percent - with the highest agree-
122
+ ment are displayed (DF3). The discard buttons can be used
123
+ to sequentially display four additional FAQ suggestions with
124
+ decreasing accuracy. The copy-to-chat buttons insert FAQ
125
+ text into the input field of the chat window. Detailed infor-
126
+ mation about a respective FAQ can be viewed via the get-
127
+ more-info button (DF4). With a counter, points are added
128
+ (copy-to-chat) or subtracted (discard), if buttons are clicked
129
+ (DF5). A feedback field allows entering search terms to se-
130
+ lect and submit a FAQ that matches the interaction (DF6).
131
+ Based on customers’ chat messages, exact keyword-based
132
+ text matching is performed to automatically record interests
133
+ and suggest suitable projects from a database (DF7).
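The backend is described only at a high level (a Python Flask app returning a ranked list of FAQ suggestions to the frontend). The following is a minimal illustrative sketch, not the project's actual code; the route name, the payload fields and the rank_faqs() helper are assumptions made for illustration.

from flask import Flask, request, jsonify

app = Flask(__name__)

def rank_faqs(utterances):
    # Placeholder for the ranking models described later (BM25 / DPR).
    # Returns a list of (faq_id, theme, answer, score) tuples, best first.
    return []

@app.route("/suggestions", methods=["POST"])
def suggestions():
    # The frontend sends the most recent chat messages (hypothetical payload field).
    utterances = request.json.get("utterances", [])
    ranked = rank_faqs(utterances)
    # DF3 displays the two best suggestions with an "accuracy" in percent;
    # the discard button reveals further candidates, so a few are returned.
    top = [{"faq_id": f, "theme": t, "answer": a, "accuracy": round(100 * s, 1)}
           for f, t, a, s in ranked[:6]]
    return jsonify({"suggestions": top})

if __name__ == "__main__":
    app.run(port=5000)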
134
+ Related Work
135
+ The earliest dialogue systems, or chat-bots, were rule based
136
+ (Weizenbaum 1966; Colby et al. 1972) and subsequently
137
+ corpus based chat-bots were developed (Serban et al. 2015).
+ In recent times neural chat-bots are frequently encountered
139
+ in day to day customer support scenarios (Ni et al. 2021).
140
+ Recently, an interplay of human and AI collaboration in
141
+ the process has been explored (Liu et al. 2021). However
142
+ current research in this area is focused on the AI bot be-
143
+ ing the first line of service, and only in the case of failures
144
+ of the bot, a handover is initiated to a human agent, who
145
+ plays a secondary role in the process. In contrast, our sce-
146
+ nario makes the human agent the first line of support with
147
+ the AI agent assisting in parallel.
148
+ To train chat-bots, conversational QA datasets such as the
149
+ Ubuntu corpus (Lowe et al. 2015), CoQA (Reddy, Chen,
150
+ and Manning 2019), DoQA (Campos et al. 2020) and QuAC
151
+ (Choi et al. 2018) have made progress in providing the
152
+ community with rich grounds for conversational research.
153
+ While CoQA relies on passages from broad domains
154
+ such as children’s stories and science to retrieve answers,
155
+ QuAC relies on Wikipedia articles to create conversations
156
+ and answers. DoQA on the other hand, focuses on three
157
+ specific domains of cooking, travel and movies from stack-
158
+ exchange.com. In scope of how our dataset is modelled,
159
+ it is most similar to DoQA, which is a domain specific
160
+ conversational dataset which also requires retrieval of the
161
+ correct FAQ from a database. CoQA, DoQA and QuAC
162
+ datasets are crowd-sourced and collected by the Wizard of
163
+ 2
164
+
165
+ IntelligentSupportAgent
166
+
167
+ E
168
+ Hi there, I'am Charlie your personal assistant!
169
+ Your Personal Settings?
170
+ information and knowledge. You can control my settings anytime, To elevate
171
+ Do you want my assistance?
172
+ on
173
+ to learn more about my features.
174
+ May I learn from your conversations and interaction with me?
175
+ DF2
176
+ On
177
+ Happy to work with you!
178
+ DF1
179
+ Knowledge
180
+ DF3
181
+ Charlie's Suggestions?
182
+ Feedback?
183
+ FAQ Theme:
184
+ Find here the right question, and then press send button.
185
+ Copy to chat
186
+ Dscardo
187
+ DF6
188
+ Get mareinfo
189
+ FAQ Theme:
190
+ DF4
191
+ Copy ochat
192
+ DiacardO
193
+ Get moreinfoO
194
+ Charlie's Explanations (Get more info)
195
+ Ccopyfo chat
196
+ Points for Charlie
197
+ DF5
198
+ EP's Interest
199
+ Projects
200
+ Where
201
+ DF7
202
+ + Charlie's Suggestions
203
+ Indicate Location (country)
204
+ Small text
205
+ Copy to chat
206
+ When
207
+ Discarda
208
+ Indicate month
209
+ Small teat
210
+ Message
211
+ What
212
+ Copy to chat
213
+ Small text
214
+ DiscardQ
215
+ Indicate what type of projectFigure 2: A sample conversation from the dataset with relevant corresponding FAQ annotation. The text in red is English
216
+ translation of the conversation for the purpose of this paper, and not a part of the dataset.
217
+ Oz method. On the contrary, our dataset consists of genuine
218
+ conversations between two humans whose sole purpose is
219
+ to find the best internship possible for the student. During
220
+ the conversations, neither of the parties were aware of the
221
+ need to form an annotated dataset. Hence, our dataset has
222
+ no artificial aspects in the flow of conversation.
223
+ The Dortmunder Chat Korpus (Beißwenger et al. 2013) and
224
+ The Verbmobil (Wahlster 1993) project provide German
225
+ conversational corpus but they do not address the Question
226
+ Answering or Information Retrieval domains.
227
+ Recently, the GermanQuAD and GermanDPR (Möller,
228
+ Risch, and Pietsch 2021) projects from DeepSet have
229
+ enabled access to Transformer based models trained on
230
+ the German text, which we make use of in our evaluation
231
+ section, however the dataset they are based on is in the form
232
+ of Questions and Answers, and not conversational in nature.
233
+ Dataset Creation
234
+ To train the AI agent, a conversational dataset had to be con-
235
+ structed. For this purpose, the conversations were carried out
236
+ on the popular mobile application WhatsApp 3, where both
237
+ the human agent and the student were on Whatsapp. The
238
+ Web Interface described in the previous section was not in-
239
+ cluded in this process. The conversations centered around
240
+ topics such as how to register for a project, which projects
241
+ are available in a given location, and whether there will be
242
+ certifications available at the end etc. The chats were ex-
243
+ tracted using the export functionality of WhatsApp. The
244
+ 3https://play.google.com/store/apps/details?id=com.whatsapp
245
+ conversations have been collected over a period of two years,
246
+ between 2018 and 2020. In some cases, an individual con-
247
+ versation may also span over a duration of several months,
248
+ where the student and the human agent re-established con-
249
+ tact after a gap of more than a few days. Such information
250
+ is visible through the inclusion of the timestamp field in the
251
+ dataset for each message that is exchanged.
252
+ Relevant consent for releasing their conversations was col-
253
+ lected from the participating students and agents. More-
254
+ over, the identities of the participants and the organisation
255
+ are pseudo-anonymised. Instead of the names of the partici-
256
+ pants, they are given a numerical name such as KundeSech-
257
+ sundzwanzig, which stands for Customer 26 in German. The
258
+ human agent is represented by the term Mitarbeiter which
259
+ stands for employee.
260
+ A single human agent handled all the 26 conversations
261
+ on WhatsApp over a period of time. When the conversa-
262
+ tions were carried out between 2018-2020, no single FAQ
263
+ database existed at the organisation. The human agent in-
264
+ stead used relevant domain expertise and experience within
265
+ the organisation, and referred to a set of disjoint sources of
266
+ information when the chats took place. Later in 2021, the hu-
267
+ man agent and a fellow domain expert colleague compiled a
268
+ single FAQ database that covers most of the issues discussed
269
+ in the conversations. Specific turns of the conversations were
270
+ manually annotated with relevant FAQs by the human agent
271
+ and then verified by the domain expert colleague.
272
+ Dataset Analysis
273
+ Chats and FAQs. As depicted in Figure 4 the 26 collected
274
+ conversations vary in length ranging from 22 utterances
275
+ to 607 utterances, with an average of 239 utterances per
+ conversation. [Figure 2: sample annotated conversation (image not reproduced in this text extraction).]
+ The entire set of conversations consists of
312
+ 6,219 utterances. 20.9 % of the utterances are annotated
313
+ with the relevant FAQ ID. A significant portion of the
314
+ dataset consists of chit-chat or other non-specific topics
315
+ where no suggestion is supposed to be made by the AI agent
316
+ to the human agent.
317
+ Since certain topics in the chat are discussed more often
318
+ than others, as seen in Figure 3, the distribution of relevant
319
+ annotated FAQ IDs is also imbalanced, with FAQ ID 71
320
+ being the most frequent. FAQ 71 pertains to the procedure
321
+ of registering online for projects.
322
+ We have split the dataset into train, dev and test splits in
323
+ roughly 70:10:20 ratios. The train, dev and test splits have
324
+ 17, 3 and 6 conversations, respectively, consisting of 3,693,
325
+ 891 and 1,635 utterances.
326
+ Experimental Setting
327
+ Task Definition
328
+ We define the task with the following inputs: current utter-
329
+ ance uk, the set of FAQs F, and the history of utterances so
330
+ far {u1, u2, ...., uk−1}. The task for the model is to rank the
331
+ correct FAQ item from F to the top. If for a given utterance
332
+ no FAQ is appropriate, the model must produce as the top-
333
+ ranked output a special class that denotes absence of FAQ
334
+ suggestion. We hereby call this class no-suggestion.
335
+ Models
336
+ As baselines we use the following settings:
337
+ dumb In this setting, the system produces 10 suggestions,
338
+ with class no-suggestion at the top and FAQ IDs 1 to 9
339
+ as the subsequently ranked suggestions as output.
340
+ random In this setting, the system produces at random 10
341
+ classes as output without repetition. The output may contain
342
+ one of the FAQ IDs or the no-suggestion class.
343
+ Additionally, we employed BM25 (Robertson and
+ Zaragoza 2009) based text search ranking as a baseline
350
+ method. In this method we searched the input query string
351
+ against the FAQ database and used the ranked list of results.
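A minimal sketch of such a BM25 baseline. The paper does not name an implementation, so the open-source rank_bm25 package, whitespace tokenization and the example FAQ entries below are our assumptions.

from rank_bm25 import BM25Okapi

# Illustrative FAQ entries only (question text concatenated with answer text).
faq_db = {
    1: "wann kann ich ein projekt machen? projekte sind jederzeit moglich",
    55: "was mache ich nun nachdem ich mich beworben habe? prozess: bewerben ...",
}

faq_ids = list(faq_db.keys())
corpus = [faq_db[i].lower().split() for i in faq_ids]   # naive whitespace tokens
bm25 = BM25Okapi(corpus)

def bm25_rank(query: str, k: int = 10):
    # Score the query against every FAQ and return the top-k (faq_id, score) pairs.
    scores = bm25.get_scores(query.lower().split())
    order = sorted(range(len(faq_ids)), key=lambda j: scores[j], reverse=True)
    return [(faq_ids[j], scores[j]) for j in order[:k]]

print(bm25_rank("wann kann ich ein projekt starten?"))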
352
+ To produce strong performance, we employ Dense Pas-
353
+ sage Retrieval (Karpukhin et al. 2020) techniques. As a
354
+ baseline, we use fb-multiset-english, which is a set of en-
355
+ coders 4 that were pre-trained on English Natural Questions
356
+ (Kwiatkowski et al. 2019), TriviaQA (Joshi et al. 2017), We-
357
+ bQuestions (Berant et al. 2013), and CuratedTREC (Baudiš
+ and Šedivý 2015).
359
+ Finally, we use pre-trained context and query encoders
360
+ for the German language provided by DeepSet
361
+ 5 and
362
+ fine-tune them on our dataset for 100 epochs with a learning
363
+ rate of 1e-05 with the Adam optimizer. We use random
364
+ sampling for choosing negative examples during training.
365
+ We choose the best performing model based on mrr@10
366
+ on the dev split. We used deepset-german encoders, which
+ come from DeepSet and are trained on the GermanQuAD
+ (Möller, Risch, and Pietsch 2021) dataset.
+ Footnote 4: facebook/dpr-ctx_encoder-multiset-base
+ Footnote 5: https://www.deepset.ai/germanquad
+ Figure 3: Distribution of conversation topics in the dataset.
+ Figure 4: The length of each conversation.
373
+ For query, we concatenate 4 consecutive utterances of
374
+ conversation and consider it the input to the model. For con-
375
+ text, we concatenate the question and answer for each FAQ
376
+ and make the DPR model consider these as the passages
377
+ database from which it has to rank the best possible FAQ.
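A sketch of this retrieval step under the stated setup: the query is the concatenation of four consecutive utterances, each passage is an FAQ question concatenated with its answer, and candidates are ranked by the dot product of the query and passage embeddings. The encoder identifiers below are placeholders (the authors fine-tune DeepSet's German DPR encoders, which is not reproduced here), and CLS pooling is an assumption.

import torch
from transformers import AutoTokenizer, AutoModel

QUERY_ENC = "some-org/german-dpr-question-encoder"   # placeholder identifier
CTX_ENC = "some-org/german-dpr-context-encoder"      # placeholder identifier

q_tok, q_model = AutoTokenizer.from_pretrained(QUERY_ENC), AutoModel.from_pretrained(QUERY_ENC)
c_tok, c_model = AutoTokenizer.from_pretrained(CTX_ENC), AutoModel.from_pretrained(CTX_ENC)

def embed(texts, tok, model):
    # Encode a batch of texts and take the [CLS] vector as the embedding.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[:, 0]

def rank_faqs(last_four_utterances, faqs):
    # faqs: list of (faq_id, question, answer) tuples forming the passage database.
    query = " ".join(last_four_utterances)
    q = embed([query], q_tok, q_model)                                   # (1, d)
    p = embed([f"{ques} {ans}" for _, ques, ans in faqs], c_tok, c_model)  # (n, d)
    scores = (q @ p.T).squeeze(0)                                        # dot-product similarity
    order = torch.argsort(scores, descending=True)
    return [(faqs[i][0], scores[i].item()) for i in order]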
378
+ Evaluation Metrics
379
+ As our metric, we choose the Mean Reciprocal Rank
380
+ (MRR). For each query candidate, the model produces an
381
+ MRR, which is the reciprocal of the position of the correct
382
+ FAQ in the ranked list. We consider only the top 10 candi-
383
+ dates, and hence, if the correct candidate is not in the top 10,
384
+ we consider the MRR as 0. We compute the eventual MRR
385
+ by taking a mean of the MRR of each query sample in the
386
+ test set.
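A short sketch of MRR@10 as defined above (reciprocal rank of the gold candidate within the top 10, zero otherwise, averaged over queries); the function and variable names are ours.

def mrr_at_10(ranked_lists, gold_labels):
    # ranked_lists: one ranked candidate list per query; gold_labels: the annotated class
    # (an FAQ ID or the no-suggestion class) for each query.
    total = 0.0
    for ranked, gold in zip(ranked_lists, gold_labels):
        rr = 0.0
        for pos, candidate in enumerate(ranked[:10], start=1):
            if candidate == gold:
                rr = 1.0 / pos
                break
        total += rr
    return total / len(gold_labels)

# Example: gold ranked 2nd in one query, absent from the top 10 in the other -> (0.5 + 0) / 2
print(mrr_at_10([["faq71", "faq1"], ["faq5"]], ["faq1", "no-suggestion"]))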
+ [Figures 3 and 4: plots of the topic distribution and conversation lengths (images not reproduced in this text extraction).]
+ We evaluate separate MRRs for those utterances which have
420
+ empty FAQ suggestions as gold annotation, and the ones
421
+ which have non-empty FAQ gold suggestions. As explained
422
+ before, the task of the AI agent is not just to recommend the
423
+ right FAQ when needed, but it must also remain silent when
424
+ no FAQ is suitable. We measure the ability of AI agent on
425
+ both these tasks in Table 1.
426
+ Experimental Setup
427
+ Since a large percentage of the utterances (79.1%) belongs
428
+ to the no-suggestion class we experiment with differ-
429
+ ent mixtures of faq classes and the no-suggestion class.
430
+ During preparation of train and dev sets to be fed to the
431
+ model, we calibrate the ratio of no-suggestion utter-
432
+ ances differently as follows:
433
+ mean In this setting, we compute the mean of the frequency
434
+ of the faq classes and include these many samples of ran-
435
+ domly chosen no-suggestion utterances as input.
436
+ highest-freq In this setting, we find the most frequent faq
437
+ class and include the same number of no-suggestion
438
+ class samples.
439
+ sum In this setting, the number of samples of the utterances
440
+ in no-suggestion class is equal to the sum of the num-
441
+ ber of utterances in all the faq classes combined.
442
+ original In this setting we consider all utterances as in-
443
+ put which leads to roughly 80:20 class imbalance of
444
+ no-suggestion class and the faq classes.
445
+ It must be noted that in all the above settings, we
446
+ always include every faq class utterance. For input
447
+ to the model we concatenate 4 consecutive utterances
448
+ {uk−3, uk−2, uk−1, uk} for each utterance uk. When con-
449
+ catenating the utterances, we also append the sender name
450
+ to the beginning of each utterance.
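A sketch of this input-preparation step. The utterance record format and helper name are assumptions; the four calibration settings follow the definitions above, and each training example concatenates up to four consecutive utterances with the sender name prepended.

import random
from collections import Counter

def build_examples(utterances, setting="sum", window=4):
    # utterances: list of dicts {"sender": str, "text": str, "faq_id": str or None}
    faq_counts = Counter(u["faq_id"] for u in utterances if u["faq_id"])
    faq_idx = [i for i, u in enumerate(utterances) if u["faq_id"]]
    none_idx = [i for i, u in enumerate(utterances) if not u["faq_id"]]

    if setting == "mean":            # mean frequency of the faq classes
        n_none = int(sum(faq_counts.values()) / max(1, len(faq_counts)))
    elif setting == "highest-freq":  # count of the most frequent faq class
        n_none = max(faq_counts.values(), default=0)
    elif setting == "sum":           # as many as all faq-labelled utterances combined
        n_none = sum(faq_counts.values())
    else:                            # "original": keep every no-suggestion utterance
        n_none = len(none_idx)

    keep = set(faq_idx) | set(random.sample(none_idx, min(n_none, len(none_idx))))
    examples = []
    for i in sorted(keep):
        ctx = utterances[max(0, i - window + 1): i + 1]
        text = " ".join(f'{u["sender"]}: {u["text"]}' for u in ctx)
        label = utterances[i]["faq_id"] or "no-suggestion"
        examples.append((text, label))
    return examples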
+ Model/Setting                        no-suggestion    faq
+ dumb                                 1.0              0.02
+ random                               0.04             0.06
+ BM25                                 0                0.27
+ fb-multiset-english  mean            0.12             0.40
+ fb-multiset-english  highest-freq    0.35             0.48
+ fb-multiset-english  sum             0.81             0.44
+ fb-multiset-english  original        0.96             0.33
+ deepset-german       mean            0.12             0.58
+ deepset-german       highest-freq    0.42             0.57
+ deepset-german       sum             0.84             0.50
+ deepset-german       original        0.95             0.38
+ Table 1: MRR@10 values for different models and settings on the test split of the dataset.
491
+ Results
492
+ We first analyse the baseline results from Table 1: the
+ dumb setting achieves perfect MRR in the
+ no-suggestion category since in this setting the AI
503
+ agent chooses ’silence’ as the top ranked candidate for all
504
+ turns. However it produces extremely poor results for turns
505
+ that do require suggestions, since there is no intelligence or
506
+ logic built into this setting when fetching FAQ items. This
507
+ also highlights why we need to evaluate our system on two
508
+ different classes. If we had computed a singular MRR score
509
+ for all turns, a model which remains silent all the time would
510
+ score high accuracy. The random setting achieves poor per-
511
+ formance in both categories. The BM25 setting produces 0
512
+ MRR in no-suggestion class because there is no way
513
+ to ask a text search method to not return any results. It al-
514
+ ways fetches some set of results, and in effect, is unable to
515
+ produce silence as output.
516
+ The Dense Passage Retrieval approaches using the
517
+ deepset-germandpr set of models perform the best,
518
+ which comes as no surprise since these encoders were pre-
519
+ trained on German QA datasets, and further fine-tuned on
520
+ our dataset. In comparison fb-multiset-english per-
521
+ forms worse since the encoders are not aware of the German
522
+ language. We find that among the different settings of vary-
523
+ ing proportions of the inclusion of no-suggestion class
524
+ in the input, the sum setting produces a balanced perfor-
525
+ mance in the two categories of no-suggestion and faq.
526
+ Another notable point in the table is the performance of the
527
+ dumb model which always produces no-suggestion as
528
+ output hence achieving perfect MRR@10 of 1.0 in the rel-
529
+ evant samples, but it produces the worst results in the faq
530
+ classes, hence rendering it of little use to human agent. We
531
+ observe that as no-suggestion class performance im-
532
+ proves, faq class performance drops. This brings forth in-
533
+ teresting questions on how to calibrate the performance of
534
+ the model to reach a sweet spot for the human agent. An
535
+ MRR of 0.5 or greater for the faq classes means that the
536
+ right FAQ is generally either in the first or in the second
537
+ position, which is a positive contribution to lessen the hu-
538
+ man agent’s workload, since most user interface implemen-
539
+ tations for our scenario would display the top 3 FAQs to hu-
540
+ man agent together. It is, however, more important for the
541
+ no-suggestion MRR to be closer to 1.0, since the si-
542
+ lence class being ranked second still produces suggestions
543
+ that the human agent has to process, increasing noise for the
544
+ human agent.
545
+ Human Evaluation
546
+ To evaluate the usability aspects of the prototype and its in-
547
+ fluence on the task, we conducted interviews with 18 human
548
+ agents after usage. Additionally, we inspected their usage
549
+ behavior via screen recordings to supplement the qualita-
550
+ tive results. Overall, human agents indicated that they would
551
+ continue to use the prototype and highlighted that it is partic-
552
+ ularly helpful for agents who do not have much experience
553
+ in handling customers. During customer interactions, agents
+ sent on average 16 (SD: 5; Median: 14) messages.
+ 17 agents used the FAQ answer sug-
556
+ gestions via the copy-to-chat-button at least three times. On
557
+ average, agents edited two (SD: 2; Median: 2) of the sug-
558
+ gested responses in the input field before sending them.
559
+ Overall, an average of six (SD: 2.5; Median: 7) sugges-
560
+ tions were used, whereby the detailed version via get-more-
561
+ info button (Mean: 3.7; SD: 2.6; Median: 4.5) was used more
562
+ 5
563
+
564
+ frequently than the short version (Mean: 2.6; SD: 2.4; Me-
565
+ dian: 2). To receive alternative FAQ answer suggestions, the
566
+ discard-button was clicked on average 15 times (SD: 10.8;
567
+ Median: 15). The display of two suggestions and the op-
568
+ tion for additional explanatory information via the get-more-
569
+ info-button were perceived as helpful “so that you can think
570
+ in which direction you might go” (agent1). Agents experi-
571
+ enced relief through displayed suggestions and the majority
572
+ saved time making decisions, especially by using the copy-
573
+ to-chat-button: “ I just had to copy them, which affected the
574
+ speed” (agent14). 16 agents utilized the feedback function
575
+ on average four times, while nine people successfully pro-
576
+ vided feedback. However, agents expressed the need for an
577
+ adaptation of the feedback function, as it was unclear. Con-
578
+ cerning the recommendation of projects, the pressure to re-
579
+ call knowledge or search in parallel to the customer inter-
580
+ action was reduced as relevant information was presented.
581
+ Thereby, it “took out the uncomfortable part of working with
582
+ such a consultation, which is looking up stuff” (agent16).
583
+ Limitations
584
+ The current solution suffers from the following limitations:
585
+ 1) The web interface was developed for internal evaluation
586
+ purposes and is not available for general public use. 2) The
587
+ collection of the dataset suffers from class imbalance and
588
+ bias issues, since only a single person was involved in col-
589
+ lecting the conversations. 3) The feedback function of the UI
590
+ did not work as expected by the human agents. The human
591
+ agents expected the feedback regarding wrong suggestions
592
+ to be immediately learnt by the system, however during the
593
+ evaluation phase we did not re-train our models, or perform
594
+ on-line learning from the provided feedback.
595
+ Conclusion and Future Work
596
+ In this work we present a web interface for demonstrating
597
+ a hybrid human-AI collaborative system that can handle cus-
598
+ tomer support queries. We show through machine based and
599
+ human based evaluations, that with the limited and imbal-
600
+ anced data we collected, we found appropriate methods to
601
+ train an AI agent that is able to provide appropriate assis-
602
+ tance to its human counterpart, which is the goal of our re-
603
+ search.
604
+ For future work, we wish to implement active on-line
605
+ learning from the human agent’s usage of the feedback fea-
606
+ ture in the UI. We would also like to collect a larger and
607
+ more balanced dataset for future iterations of the AI agent.
608
+ Acknowledgements
609
+ The research was financed with funding provided by the
610
+ German Federal Ministry of Education and Research and the
611
+ European Social Fund under the ”Future of work” program
612
+ (INSTANT, 02L18A111).
613
+ References
614
+ Baudiš, P.; and Šedivý, J. 2015. Modeling of the Question
615
+ Answering Task in the YodaQA System. 222–228. ISBN
616
+ 978-3-319-24026-8.
617
+ Beißwenger, M.; Herold, A.; Lüngen, H.; and Störrer, A.
+ 2013. Das Dortmunder Chat-Korpus. Zeitschrift für ger-
+ manistische Linguistik, 41(1): 161–164.
620
+ Berant, J.; Chou, A.; Frostig, R.; and Liang, P. 2013. Se-
621
+ mantic Parsing on Freebase from Question-Answer Pairs. In
622
+ Proceedings of the 2013 Conference on Empirical Methods
623
+ in Natural Language Processing, 1533–1544. Seattle, Wash-
624
+ ington, USA: Association for Computational Linguistics.
625
+ Campos, J. A.; Otegi, A.; Soroa, A.; Deriu, J.; Cieliebak, M.;
626
+ and Agirre, E. 2020. DoQA - Accessing Domain-Specific
627
+ FAQs via Conversational QA. In Proceedings of the 58th
628
+ Annual Meeting of the Association for Computational Lin-
629
+ guistics, 7302–7314. Online: Association for Computational
630
+ Linguistics.
631
+ Choi, E.; He, H.; Iyyer, M.; Yatskar, M.; Yih, W.-t.; Choi,
632
+ Y.; Liang, P.; and Zettlemoyer, L. 2018. QuAC: Question
633
+ Answering in Context. In Proceedings of the 2018 Confer-
634
+ ence on Empirical Methods in Natural Language Process-
635
+ ing, 2174–2184. Brussels, Belgium: Association for Com-
636
+ putational Linguistics.
637
+ Colby, K. M.; Hilf, F. D.; Weber, S.; and Kraemer, H. C.
638
+ 1972. Turing-like indistinguishability tests for the validation
639
+ of a computer simulation of paranoid processes. Artificial
640
+ Intelligence, 3: 199–221.
641
+ Joshi, M.; Choi, E.; Weld, D.; and Zettlemoyer, L. 2017.
642
+ TriviaQA: A Large Scale Distantly Supervised Challenge
643
+ Dataset for Reading Comprehension. In Proceedings of the
644
+ 55th Annual Meeting of the Association for Computational
645
+ Linguistics (Volume 1: Long Papers), 1601–1611. Vancou-
646
+ ver, Canada: Association for Computational Linguistics.
647
+ Karpukhin, V.; Oguz, B.; Min, S.; Lewis, P.; Wu, L.; Edunov,
648
+ S.; Chen, D.; and Yih, W.-t. 2020.
649
+ Dense Passage Re-
650
+ trieval for Open-Domain Question Answering. In Proceed-
651
+ ings of the 2020 Conference on Empirical Methods in Nat-
652
+ ural Language Processing, 6769–6781. Online: Association
653
+ for Computational Linguistics.
654
+ Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.;
655
+ Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Kelcey,
656
+ M.; Devlin, J.; Lee, K.; Toutanova, K. N.; Jones, L.; Chang,
657
+ M.-W.; Dai, A.; Uszkoreit, J.; Le, Q.; and Petrov, S. 2019.
658
+ Natural Questions: a Benchmark for Question Answering
659
+ Research. Transactions of the Association of Computational
660
+ Linguistics.
661
+ Liu, J.; Song, K.; Kang, Y.; He, G.; Jiang, Z.; Sun, C.; Lu,
662
+ W.; and Liu, X. 2021. A Role-Selected Sharing Network for
663
+ Joint Machine-Human Chatting Handoff and Service Satis-
664
+ faction Analysis. In Proceedings of the 2021 Conference on
665
+ Empirical Methods in Natural Language Processing, 9731–
666
+ 9741. Online and Punta Cana, Dominican Republic: Associ-
667
+ ation for Computational Linguistics.
668
+ Lowe, R.; Pow, N.; Serban, I.; and Pineau, J. 2015.
669
+ The
670
+ Ubuntu Dialogue Corpus: A Large Dataset for Research in
671
+ Unstructured Multi-Turn Dialogue Systems. In Proceedings
672
+ of the 16th Annual Meeting of the Special Interest Group on
673
+ Discourse and Dialogue, 285–294. Prague, Czech Republic:
674
+ Association for Computational Linguistics.
675
+ 6
676
+
677
+ Möller, T.; Risch, J.; and Pietsch, M. 2021. GermanQuAD
678
+ and GermanDPR: Improving Non-English Question An-
679
+ swering and Passage Retrieval. arXiv pre-print 2104.12741.
680
+ Ni, J.; Young, T.; Pandelea, V.; Xue, F.; and Cambria, E.
681
+ 2021. Recent Advances in Deep Learning Based Dialogue
682
+ Systems: A Systematic Survey.
683
+ Poser, M.; Wiethof, C.; Banerjee, D.; Shankar Subramanian,
684
+ V.; Paucar, R.; and Bittner, E. A. C. 2022. Let’s Team Up
685
+ with AI! Toward a Hybrid Intelligence System for Online
686
+ Customer Service. In Drechsler, A.; Gerber, A.; and Hevner,
687
+ A., eds., The Transdisciplinary Reach of Design Science Re-
688
+ search, 142–153. Cham: Springer International Publishing.
689
+ ISBN 978-3-031-06516-3.
690
+ Reddy, S.; Chen, D.; and Manning, C. D. 2019.
691
+ CoQA:
692
+ A Conversational Question Answering Challenge. Trans-
693
+ actions of the Association for Computational Linguistics, 7:
694
+ 249–266.
695
+ Robertson, S.; and Zaragoza, H. 2009.
696
+ The Probabilistic
697
+ Relevance Framework: BM25 and Beyond.
698
+ Foundations
699
+ and Trends® in Information Retrieval, 3(4): 333–389.
700
+ Serban, I. V.; Lowe, R.; Henderson, P.; Charlin, L.; and
701
+ Pineau, J. 2015. A Survey of Available Corpora for Building
702
+ Data-Driven Dialogue Systems.
703
+ Wahlster, W. 1993. Verbmobil: Translation of Face-To-Face
704
+ Dialogs. In Proceedings of Machine Translation Summit IV,
705
+ 127–136. Kobe, Japan.
706
+ Weizenbaum, J. 1966. ELIZA—a Computer Program for the
707
+ Study of Natural Language Communication between Man
708
+ and Machine. Commun. ACM, 9(1): 36–45.
709
+ 7
710
+
-dFLT4oBgHgl3EQfvC_J/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -7055,3 +7055,78 @@ m9E1T4oBgHgl3EQfhQSd/content/2301.03239v1.pdf filter=lfs diff=lfs merge=lfs -tex
7055
  UdE1T4oBgHgl3EQfawRv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7056
  ytAzT4oBgHgl3EQftP2f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7057
  29E2T4oBgHgl3EQf5whX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7058
+ XNAzT4oBgHgl3EQfKfsY/content/2301.01096v1.pdf filter=lfs diff=lfs merge=lfs -text
7059
+ o9AzT4oBgHgl3EQfOfuK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7060
+ ANFIT4oBgHgl3EQf-iyR/content/2301.11411v1.pdf filter=lfs diff=lfs merge=lfs -text
7061
+ _9FAT4oBgHgl3EQfrB3j/content/2301.08651v1.pdf filter=lfs diff=lfs merge=lfs -text
7062
+ XdE1T4oBgHgl3EQfvwV4/content/2301.03403v1.pdf filter=lfs diff=lfs merge=lfs -text
7063
+ 1dAzT4oBgHgl3EQfDfo-/content/2301.00976v1.pdf filter=lfs diff=lfs merge=lfs -text
7064
+ g9FMT4oBgHgl3EQf3DHu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7065
+ AtE1T4oBgHgl3EQf9Ab6/content/2301.03553v1.pdf filter=lfs diff=lfs merge=lfs -text
7066
+ x9FIT4oBgHgl3EQf0ism/content/2301.11369v1.pdf filter=lfs diff=lfs merge=lfs -text
7067
+ vtFPT4oBgHgl3EQf-TW6/content/2301.13215v1.pdf filter=lfs diff=lfs merge=lfs -text
7068
+ _9E2T4oBgHgl3EQfRAZd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7069
+ 5tAzT4oBgHgl3EQfu_3x/content/2301.01701v1.pdf filter=lfs diff=lfs merge=lfs -text
7070
+ 0dE0T4oBgHgl3EQfuAFm/content/2301.02599v1.pdf filter=lfs diff=lfs merge=lfs -text
7071
+ 6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf filter=lfs diff=lfs merge=lfs -text
7072
+ AtE4T4oBgHgl3EQfEwyK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7073
+ xtFKT4oBgHgl3EQfLS1R/content/2301.11745v1.pdf filter=lfs diff=lfs merge=lfs -text
7074
+ 7NE3T4oBgHgl3EQfqApm/content/2301.04647v1.pdf filter=lfs diff=lfs merge=lfs -text
7075
+ 0dE0T4oBgHgl3EQfuAFm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7076
+ odE0T4oBgHgl3EQfZwBs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7077
+ GNFIT4oBgHgl3EQfWStC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7078
+ vtFPT4oBgHgl3EQf-TW6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7079
+ 5tAzT4oBgHgl3EQfu_3x/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7080
+ odAyT4oBgHgl3EQflvgJ/content/2301.00456v1.pdf filter=lfs diff=lfs merge=lfs -text
7081
+ qNAzT4oBgHgl3EQfOvuh/content/2301.01172v1.pdf filter=lfs diff=lfs merge=lfs -text
7082
+ 29FKT4oBgHgl3EQfQS0h/content/2301.11766v1.pdf filter=lfs diff=lfs merge=lfs -text
7083
+ XNAzT4oBgHgl3EQfKfsY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7084
+ _9FAT4oBgHgl3EQfrB3j/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7085
+ YtE1T4oBgHgl3EQfcQR-/content/2301.03182v1.pdf filter=lfs diff=lfs merge=lfs -text
7086
+ lNE_T4oBgHgl3EQf6By0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7087
+ wdE0T4oBgHgl3EQfcACj/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7088
+ T9E1T4oBgHgl3EQfugV-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7089
+ T9E1T4oBgHgl3EQfugV-/content/2301.03389v1.pdf filter=lfs diff=lfs merge=lfs -text
7090
+ ntFIT4oBgHgl3EQfuSvs/content/2301.11343v1.pdf filter=lfs diff=lfs merge=lfs -text
7091
+ QdAzT4oBgHgl3EQfz_5Y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7092
+ 0tAzT4oBgHgl3EQftv2j/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7093
+ ntFIT4oBgHgl3EQfuSvs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7094
+ A9E1T4oBgHgl3EQfpAX1/content/2301.03328v1.pdf filter=lfs diff=lfs merge=lfs -text
7095
+ x9FIT4oBgHgl3EQf0ism/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7096
+ dNE_T4oBgHgl3EQf0hzM/content/2301.08330v1.pdf filter=lfs diff=lfs merge=lfs -text
7097
+ xtFKT4oBgHgl3EQfLS1R/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7098
+ xtE2T4oBgHgl3EQf3gi9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7099
+ WdE1T4oBgHgl3EQfvgUI/content/2301.03399v1.pdf filter=lfs diff=lfs merge=lfs -text
7100
+ 29FST4oBgHgl3EQfYTgs/content/2301.13787v1.pdf filter=lfs diff=lfs merge=lfs -text
7101
+ 6tFKT4oBgHgl3EQfTy2d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7102
+ 8tE2T4oBgHgl3EQf8Qgm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7103
+ cdAyT4oBgHgl3EQfwfli/content/2301.00649v1.pdf filter=lfs diff=lfs merge=lfs -text
7104
+ 7NE3T4oBgHgl3EQfqApm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7105
+ 3NE0T4oBgHgl3EQfuwHF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7106
+ q9E0T4oBgHgl3EQfrAGu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7107
+ WtE3T4oBgHgl3EQfbgpl/content/2301.04516v1.pdf filter=lfs diff=lfs merge=lfs -text
7108
+ A9E1T4oBgHgl3EQfpAX1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7109
+ tNFKT4oBgHgl3EQfKi0w/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7110
+ qNAzT4oBgHgl3EQfOvuh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7111
+ 3dFQT4oBgHgl3EQf3TZi/content/2301.13427v1.pdf filter=lfs diff=lfs merge=lfs -text
7112
+ mdFPT4oBgHgl3EQf4TUa/content/2301.13193v1.pdf filter=lfs diff=lfs merge=lfs -text
7113
+ WdE1T4oBgHgl3EQfvgUI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7114
+ -NE5T4oBgHgl3EQfRg5P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7115
+ q9E0T4oBgHgl3EQfrAGu/content/2301.02561v1.pdf filter=lfs diff=lfs merge=lfs -text
7116
+ dNE_T4oBgHgl3EQf0hzM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7117
+ tNFKT4oBgHgl3EQfKi0w/content/2301.11742v1.pdf filter=lfs diff=lfs merge=lfs -text
7118
+ yNFJT4oBgHgl3EQfhiyb/content/2301.11566v1.pdf filter=lfs diff=lfs merge=lfs -text
7119
+ AtE1T4oBgHgl3EQf9Ab6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7120
+ JNFIT4oBgHgl3EQfZCuh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7121
+ MdE1T4oBgHgl3EQftQUl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7122
+ 29FKT4oBgHgl3EQfQS0h/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7123
+ YdE4T4oBgHgl3EQfNwz6/content/2301.04960v1.pdf filter=lfs diff=lfs merge=lfs -text
7124
+ M9E2T4oBgHgl3EQfBgYX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7125
+ QtE2T4oBgHgl3EQfsAh_/content/2301.04055v1.pdf filter=lfs diff=lfs merge=lfs -text
7126
+ 09FKT4oBgHgl3EQfOS13/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7127
+ 79A0T4oBgHgl3EQfOf-b/content/2301.02162v1.pdf filter=lfs diff=lfs merge=lfs -text
7128
+ q9E3T4oBgHgl3EQf8gt0/content/2301.04808v1.pdf filter=lfs diff=lfs merge=lfs -text
7129
+ edE1T4oBgHgl3EQfLgP4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7130
+ ctE5T4oBgHgl3EQfEw7I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7131
+ b9FIT4oBgHgl3EQfmyvP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7132
+ FdAyT4oBgHgl3EQfe_hY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
09FKT4oBgHgl3EQfOS13/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:560b95c219b0e40dccde1a20c12f177950c055664b1f4cab3f87d29203004429
3
+ size 4653101
0dE0T4oBgHgl3EQfuAFm/content/2301.02599v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d8b3fe0917b32b4168b1583648572da597f364646ed4f9d065e7d43b8fed103
3
+ size 148892
0dE0T4oBgHgl3EQfuAFm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b528e24646e51d720be2596560679175e3bb21101f708d49e7f59d1760bb269
3
+ size 1441837
0dE0T4oBgHgl3EQfuAFm/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e6ac82544bd3c120ed879e5b335be598cb622104edec12a2167328d3594da8aa
3
+ size 57545
0tAzT4oBgHgl3EQftv2j/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:94f11fb85dd07be9d1b9dab477d2e6c383d14841d71a4674b79cbd05eb58991b
3
+ size 4784173
0tE0T4oBgHgl3EQfdQB7/content/tmp_files/2301.02374v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
0tE0T4oBgHgl3EQfdQB7/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1dAzT4oBgHgl3EQfDfo-/content/2301.00976v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b2e8e609bfcb013397873fe4a33840e0251b7c695ea5a42fd8aa1ba01d8167e
3
+ size 245499
1dAzT4oBgHgl3EQfDfo-/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca5225521e2337cf792a10909251c83bbcbbf66e5d72dc2d46f78ae5e90716a0
3
+ size 120708
1tAzT4oBgHgl3EQfRfvj/content/tmp_files/2301.01218v1.pdf.txt ADDED
@@ -0,0 +1,1191 @@
1
+ Tracing the Origin of Adversarial Attack for Forensic Investigation and
2
+ Deterrence
3
+ Han Fang1, Jiyi Zhang 1, Yupeng Qiu 1, Ke Xu 2, Chengfang Fang 2, Ee-Chien Chang 1*
4
+ 1 National University of Singapore
5
+ 2 Huawei International
6
7
+ Abstract
8
+ Deep neural networks are vulnerable to adversarial attacks.
9
+ In this paper, we take the role of investigators who want to
10
+ trace the attack and identify the source, that is, the particular
11
+ model which the adversarial examples are generated from.
12
+ Techniques derived would aid forensic investigation of at-
13
+ tack incidents and serve as deterrence to potential attacks. We
14
+ consider the buyers-seller setting where a machine learning
15
+ model is to be distributed to various buyers and each buyer
16
+ receives a slightly different copy with same functionality. A
17
+ malicious buyer generates adversarial examples from a par-
18
+ ticular copy Mi and uses them to attack other copies. From
19
+ these adversarial examples, the investigator wants to iden-
20
+ tify the source Mi. To address this problem, we propose a
21
+ two-stage separate-and-trace framework. The model separa-
22
+ tion stage generates multiple copies of a model for a same
23
+ classification task. This process injects unique characteristics
24
+ into each copy so that adversarial examples generated have
25
+ distinct and traceable features. We give a parallel structure
26
+ which embeds a “tracer” in each copy, and a noise-sensitive
27
+ training loss to achieve this goal. The tracing stage takes in
28
+ adversarial examples and a few candidate models, and iden-
29
+ tifies the likely source. Based on the unique features induced
30
+ by the noise-sensitive loss function, we could effectively trace
31
+ the potential adversarial copy by considering the output logits
32
+ from each tracer. Empirical results show that it is possible to
33
+ trace the origin of the adversarial example and the mechanism
34
+ can be applied to a wide range of architectures and datasets.
35
+ 1
36
+ Introduction
37
+ Deep learning models are vulnerable to adversarial attacks.
38
+ By introducing specific perturbations on input samples, the
39
+ network model could be misled to give wrong predictions
40
+ even when the perturbed sample looks visually close to
41
+ the clean image (Szegedy et al. 2014; Goodfellow, Shlens,
42
+ and Szegedy 2014; Moosavi-Dezfooli, Fawzi, and Frossard
43
+ 2016; Carlini and Wagner 2017). There are many existing
44
+ works on defending against such attacks (Kurakin, Good-
45
+ fellow, and Bengio 2016; Meng and Chen 2017; Gu and
46
+ Rigazio 2014; Hinton, Vinyals, and Dean 2015). Unfortu-
47
+ nately, although current defenses could mitigate the attack
48
+ to some extent, the threat is still far from being completely
49
+ eliminated. In this paper, we look into the forensic aspect:
50
+ from the adversarial examples, can we determine which
51
+ *Corresponding Authors.
52
+ Figure 1: Buyers-seller setting. The seller has multiple mod-
53
+ els Mi, i ∈ [1, m] that are to be distributed to different
54
+ buyers. A malicious buyer batt attempts to attack the vic-
55
+ tim buyer bvic by generating the adversarial examples with
56
+ his own model Matt.
57
+ model the adversarial examples were derived from? Tech-
58
+ niques derived could aid forensic investigation of attack in-
59
+ cidents and provide deterrence to future attacks.
60
+ We consider a buyers-seller setting (Zhang, Tann, and
61
+ Chang 2021), which is similar to the buyers-seller setting
62
+ in digital rights protection (Memon and Wong 2001).
63
+ Buyers-seller Setting.
64
+ Under this setting, the seller S dis-
65
+ tributes m classification models Mi, i ∈ [1, m] to different
66
+ buyers bi’s as shown in Fig. 1. These models are trained for
67
+ a same classification task using a same training dataset. The
68
+ models are made accessible to the buyer as black boxes, for
69
+ instance, the models could be embedded in hardware such as
70
+ FPGA and ASIC, or are provided in a Machine Learning as a
71
+ Service (MLaaS) platform. Hence, the buyer only has black-
72
+ box access, which means that he can only query the model
73
+ for the hard label. In addition, we assume that the buyers do
74
+ not know the training datasets. The seller has full knowledge
75
+ and thus has white-box access to all the distributed models.
76
+ Attack and Traceability.
77
+ A malicious buyer wants to at-
78
+ tack other victim buyers. The malicious buyer does not have
79
+ direct access to other models and thus generates the exam-
80
+ ples from its own model and then deploys the found exam-
81
+ ples. For example, the malicious buyer might generate an ad-
82
+ versarial example of a road sign using its self-driving vehi-
83
+ cle, and then physically deface the road sign to trick passing
84
+ vehicles. Now, as forensic investigators who have obtained
85
+ the defaced road sign, we want to understand how the ad-
86
+ versarial example is generated and trace the models used in
87
+ generating the example.
88
+ arXiv:2301.01218v1 [cs.CR] 31 Dec 2022
89
+
90
+ [Figure 1 diagram labels omitted.]
+ Proposed Framework.
100
+ There are two stages in our solu-
101
+ tion: model separation and origin tracing. During the model
102
+ separation stage, given a classification task, we want to gen-
103
+ erate multiple models that have high accuracy on the clas-
104
+ sification task and yet are sufficiently different for tracing.
105
+ In other words, we want to proactively enhance differences
106
+ among the models in order to facilitate tracing. To achieve
107
+ that, we propose a parallel network structure that pairs a
108
+ unique tracer with the original classification model. The role
109
+ of the tracer is to modify the output, so as to induce the at-
110
+ tacker to adversarial examples with unique features. We give
111
+ a noise-sensitive training loss for the tracer.
112
+ During the tracing stage, given m different classification
113
+ models Mi, i ∈ [1, m] and the found adversarial example,
114
+ we want to determine which model is most likely used in
115
+ generating the adversarial examples. This is achieved by ex-
116
+ ploiting the different tracers that are earlier embedded into
117
+ the parallel models. Our proposed method compares the out-
118
+ put logits (the output of the network before softmax) of those
119
+ tracers to identify the source.
120
+ In a certain sense, traceability is similar to neural network
121
+ watermarking and can be viewed as a stronger form of water-
122
+ marking. Neural network watermarking schemes (Boenisch
123
+ 2020) attempt to generate multiple models so that an investi-
124
+ gator can trace the source of a modified copy. In traceability,
125
+ the investigator can trace the source based on the generated
126
+ adversarial examples.
127
+ Contributions.
128
+ 1. We point out a new aspect in defending against adver-
129
+ sarial attacks, that is, tracing the origin of adversarial
130
+ samples among multiple classifiers. Techniques derived
131
+ would aid forensic investigation of attack incidents and
132
+ provide deterrence to future attacks.
133
+ 2. We propose a framework to achieve traceability in the
134
+ buyers-seller setting. The framework consists of two
135
+ stages: a model separation stage, and a tracing stage.
136
+ The model separation stage generates multiple “well-
137
+ separated” models and this is achieved by a parallel
138
+ network structure that pairs a tracer with the classifier.
139
+ The tracing mechanism exploits the characteristics of the
140
+ paired tracers to decide the origin of the given adversarial
141
+ examples.
142
+ 3. We investigate the effectiveness of the separation and the
143
+ subsequent tracing. Experimental studies show that the
144
+ proposed mechanism can effectively trace to the source.
145
+ For example, the tracing accuracy achieves more than
146
+ 97% when applied to the “ResNet18-CIFAR10” task. We
147
+ also observe a clear separation of the source tracer’s log-
148
+ its distribution, from the non-source’s logits distribution
149
+ (e.g. Fig. 5a-5c).
150
+ 2
151
+ Related Work
152
+ In this paper, we adopt black-box settings where the adver-
153
+ sary can only query the model and get the hard label (final
154
+ decision) of the output. Many existing attacks assume white-
155
+ box settings. Attack such as FGSM (Goodfellow, Shlens,
156
+ and Szegedy 2014), PGD (Kurakin, Goodfellow, and Bengio
157
+ 2016), JSMA (Papernot et al. 2016), DeepFool (Moosavi-
158
+ Dezfooli, Fawzi, and Frossard 2016), CW (Carlini and Wag-
159
+ ner 2017) and EAD (Chen et al. 2018) usually directly rely
160
+ on the gradient information provided by the victim model.
161
+ As the detailed information of the model is hidden in black-
162
+ box settings, black-box attacks are often considered more
163
+ difficult and there are fewer works. Chen et al. introduced
164
+ a black-box attack called Zeroth Order Optimization (ZOO)
165
+ (Chen et al. 2017). ZOO can approximate the gradients of
166
+ the objective function with finite-difference numerical esti-
167
+ mates by only querying the network model. Thus the ap-
168
+ proximated gradient is utilized to generate the adversarial
169
+ examples. Guo et al. proposed a simple black-box adver-
170
+ sarial attack called “SimBA” (Guo et al. 2019) to generate
171
+ adversarial examples with a set of orthogonal vectors. By
172
+ testing the output logits with the added chosen vector, the
173
+ optimization direction can be effectively found. Brendel et
+ al. developed a decision-based adversarial attack known as the
+ “Boundary attack” (Brendel, Rauber, and Bethge
+ 2018). It works by iteratively perturbing an initial im-
+ age that belongs to a different label toward the decision
178
+ boundaries between the original label and the adjacent la-
179
+ bel. By querying the model with enough perturbed images,
180
+ the boundary as well as the perturbation can be found thus
181
+ generating the adversarial examples. Chen et al. proposed
+ another decision-based attack named the hop-skip-jump attack
183
+ (HSJA) (Chen, Jordan, and Wainwright 2020) recently. By
184
+ only utilizing the binary information at the decision bound-
185
+ ary and the Monte-Carlo estimation, the gradient direction of
186
+ the network can be found so as to realize the adversarial ex-
187
+ amples generation. Based on (Chen, Jordan, and Wainwright
188
+ 2020), Li et al. (Li et al. 2020) proposed a query-efficient
+ boundary-based black-box attack named QEBA which es-
+ timates the gradient of the boundary in several transformed
+ spaces and effectively reduces the number of queries in gener-
192
+ ating the adversarial examples. Maho et al. (Maho, Furon,
+ and Le Merrer 2021) proposed a surrogate-free black-box
+ attack which does not estimate the gradient but instead searches the
+ boundary based on polar coordinates; compared with (Chen,
+ Jordan, and Wainwright 2020) and (Li et al. 2020), (Maho,
+ Furon, and Le Merrer 2021) achieves less distortion with
+ fewer queries.
199
+ 3
200
+ Proposed Framework
201
+ 3.1
202
+ Main Idea
203
+ We design a framework that contains two stages: model sep-
204
+ aration and origin tracing.
205
+ During the model separation stage, we want to generate
206
+ multiple models which are sufficiently different under ad-
207
+ versarial attack while remaining highly accurate on the clas-
208
+ sification task. Our main idea is a parallel network structure
209
+ which pairs a unique tracer with the original classifier. The
210
+ specific structure will be illustrated in Section 3.2.
211
+ As for origin tracing, we exploit unique characteristics of
212
+ different tracers in the parallel structure, which can be ob-
213
+ served in the tracers’ logits. Hence, our tracing process is
214
+ conducted by feeding the adversarial examples into the trac-
215
+ ers and analyzing their output.
216
+
217
+ Figure 2: The framework of the proposed method. The left part of the framework indicates the separation process of the seller’s
218
+ distributed models Mi, i ∈ [1, m]. The right part of the framework illustrates the origin tracing process.
219
+ The whole framework of the proposed scheme is shown
220
+ in Fig. 2. As illustrated in Fig.2, each distributed model Mi
221
+ consists of a tracer Ti and the original classification model
222
+ C, and the tracer is trained with a proposed noise-sensitive
223
+ loss LNS. During the tracing stage, the adversarial examples
224
+ are fed into each Ti and the outputs are analyzed to identify
225
+ the origin.
226
+ 3.2
227
+ Model Separation
228
+ We design a parallel network structure to generate the dis-
229
+ tributed models Mi, i ∈ [1, m], which contains a tracer
230
+ model Ti and a main model C, as shown in Fig. 3a. Ti is
231
+ used for injecting unique features and setting traps for the
232
+ attacker. C is the network trained for the original task. The
233
+ final results are determined by both C and Ti with a weight
234
+ parameter α. In each distributed model, C is fixed and only
235
+ Ti is different.
236
+ The specific structure of Ti is shown in Fig. 3b, it is
237
+ linearly cascaded with one “SingleConv” block (Conv-BN-
238
+ ReLU), two “Res-block” (He et al. 2016), one “Conv” block,
239
+ one full connection block and one “Tanh” activation layer.
240
+ The training process of Ti can be described as:
241
+ 1) Given the training dataset1 and tracer Ti, we first ini-
242
+ tialize Ti with random parameters.
243
+ 2) For each training epoch, we add random noise No 2 on
244
+ the input image x to generate the noised image xNo.
245
+ 3) Then we feed both x and xNo into Ti and get the out-
246
+ puts Ox and OxNo. We attempt to make Ti sensitive to noise,
247
+ so Ox and OxNo should be as different as possible. The loss
248
+ function of Ti can be written as:
249
+ LNS = |Ox ◦ OxNo| / (∥Ox∥2 ∥OxNo∥2) = |Ti(θTi, x) ◦ Ti(θTi, xNo)| / (∥Ti(θTi, x)∥2 ∥Ti(θTi, xNo)∥2)    (1)
256
+ 1The training dataset for Ti only contains 1000 random sampled
257
+ images from the dataset of the original classification task
258
+ 2No follows a uniform distribution over [0, 0.03)
259
+ where ◦ represents the Hadamard product. θTi indicates the
260
+ parameters of Ti.
261
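+ As a concrete illustration (not the authors' released code), the noise-sensitive
+ loss of Eq. 1 could be sketched in PyTorch roughly as follows; the 0.03 noise
+ scale follows footnote 2, while the function and variable names are our own.
+ import torch
+ def noise_sensitive_loss(tracer, x, noise_scale=0.03, eps=1e-8):
+     """Sketch of L_NS (Eq. 1): push the tracer's outputs on a clean image
+     and on a noised copy to be as different (orthogonal) as possible."""
+     noise = torch.rand_like(x) * noise_scale   # uniform noise over [0, 0.03)
+     o_x = tracer(x)                            # logits on the clean image
+     o_xno = tracer(x + noise)                  # logits on the noised image
+     # One plausible reading of Eq. 1: absolute cosine similarity between
+     # the two output vectors, averaged over the batch.
+     num = (o_x * o_xno).sum(dim=1).abs()
+     den = o_x.norm(dim=1) * o_xno.norm(dim=1) + eps
+     return (num / den).mean()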
+ Each distributed Ti for different buyers is generated by
262
+ random initialization followed by training. We believe this ran-
+ domness in initialization is enough to guarantee the differ-
+ ence between different Ti. It should be noted that when pro-
265
+ ducing a new distributed copy, we only have to train one
266
+ new tracer without setting more constraints on former trac-
267
+ ers. So such a separation method can be applied to multiple
268
+ distributed models independently.
269
+ As for C, it is trained in a normal way which utilizes the
270
+ whole training dataset and cross-entropy loss. For the main
271
+ classification task, C only has to be trained once. Besides, the
272
+ training of C is independent of the training of Ti. After train-
273
+ ing C, we could get a high accuracy classification model.
274
+ The final distributed model Mi is the parallel combination of C
275
+ and Ti. The specific workflow of Mi can be described as:
276
+ For input image x, Ti and C both receive the same x and
277
+ output two different vectors OTi and OC respectively. OTi
278
+ and OC have the same size and will be further added in a
279
+ weighted way to generate the final outputs OF , as shown in
280
+ Eq. 2.
281
+ OF = OC + α × OTi    (2)
283
+ where α is the weight parameter. It is worth noting that for
284
+ the output of C, we use the normalization form of it, which
285
+ can be formulated as:
286
+ OC = (C(x) − min(C(x))) / (max(C(x)) − min(C(x)))    (3)
290
+ where x indicates the input image, max and min indicate
291
+ the maximum value and minimum value respectively.
292
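+ A minimal sketch of the parallel combination in Eqs. 2-3, with the classifier
+ output min-max normalized per sample before the weighted tracer output is
+ added; the class name and structure are illustrative assumptions, not the
+ authors' implementation.
+ import torch.nn as nn
+ class ParallelModel(nn.Module):
+     """Distributed model M_i: main classifier C paired with tracer T_i."""
+     def __init__(self, classifier, tracer, alpha=0.15):
+         super().__init__()
+         self.classifier, self.tracer, self.alpha = classifier, tracer, alpha
+     def forward(self, x):
+         c_out = self.classifier(x)
+         c_min = c_out.min(dim=1, keepdim=True).values
+         c_max = c_out.max(dim=1, keepdim=True).values
+         o_c = (c_out - c_min) / (c_max - c_min + 1e-8)    # Eq. 3
+         return o_c + self.alpha * self.tracer(x)          # Eq. 2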
+ By utilizing the aforementioned model separation method, two properties are well satisfied: (I) The attack
299
+ could be tricked to focus more on Ti than C. Since after the
300
+ training, Ti will be sensitive to random noise. Therefore, the
301
+ output of Ti is easy to be changed by adding noise. Com-
302
+ pared with C, the boundary of Ti is more likely to be esti-
303
+ mated and Ti is more likely to be attacked. Thus, the attacker
304
+
305
+ [Figure 2 diagram: the model separation stage (tracer model training with the noise-sensitive loss, then model combination) and the origin tracing stage (feeding the adversarial example to each tracer and applying the tracing mechanism); diagram labels omitted.]
+ (a) Parallel network structure.
375
+ (b) The architecture of tracer.
376
+ (c) Differences in logits.
377
+ Figure 3: The specific network design in model separation.
378
+ will fall into the trap of Ti and the generated adversarial per-
379
+ turbations will bring the feature of the source Ti. (II) Based
380
+ on random initialization, each distributed Ti will correspond
381
+ to different adversarial perturbations. This property helps us
382
+ in tracing, since the source Ts which generates adversarial
383
+ examples will output unique responses compared with other
384
+ Ti, i ̸= s when feeding the generated adversarial examples,
385
+ as shown in Fig. 3c.
386
+ 3.3
387
+ Tracing the Origin
388
+ The tracing process is conducted by two related compo-
389
+ nents:
390
+ • The first component keeps white-box copies for each of
391
+ the m distributed copies 3. This component allows us to
392
+ obtain the output logits of each tracer on an input x.
393
+ • The second component is an output logits-based mech-
394
+ anism. It gives a decision on which copy i is the most
395
+ likely one to generate the adversarial example.
396
+ The specific tracing process can be described as follows:
397
+ 1) Given an appeared adversarial examples denoted as
398
+ xatt, we feed the adversarial example into all Ti, i ∈ [1, m]
399
+ and obtain the output logits of them, noted as OTi, i ∈
400
+ [1, m].
401
+ 2) Then we extract the two values corresponding to
+ the attacked label and the true label in each OTi, denoted as
+ OTi_att and OTi_true respectively. 4
406
+ 3) The source model can be determined by:
407
+ s = arg max_{i ∈ [1, m]} (OTi_att − OTi_true)    (4)
413
+ To simplify the description, we denote the difference of out-
414
+ put logits (OTi_att − OTi_true) as DOL. The tracer corresponding to
417
+ the largest DOL is regarded as the source model. The reason
418
+ is as follows:
419
+ Since the perturbation is highly related to Ti, when feed-
420
+ ing the same adversarial example, the outputs of Ti and Tj
421
+ 3This setting is reasonable because when an adversarial attack
422
+ appears, the model seller, who has all the details of the distributed
+ network, is responsible for tracing the attacker.
424
+ 4Attacked label can be easily determined by the output logits
425
+ and the true label can be tagged by the model owner. If this sample
426
+ cannot be accurately tagged by the owner, then this sample is not
427
+ regarded as an adversarial example.
428
+ (i ≠ j) will certainly be different. For the source model Ts
429
+ where the adversarial examples are generated from, OTs is
430
+ likely to render a large value on the adversarial label and a
431
+ small value on the ground-truth label. Since the weight of
432
+ OTs in the final OFs is small, in order to achieve the ad-
433
+ versarial attack, OTs will be modified as much as possible.
434
+ Thus DOL of Ts should be large. But for victim model Tv,
435
+ the DOL will be small. Therefore, according to the value of
436
+ DOL, we can trace the origin of the adversarial example.
437
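+ Under the assumption that the seller holds white-box copies of all tracers,
+ the tracing rule of Eq. 4 reduces to a few lines; the helper below is a sketch
+ with hypothetical names, not code from the paper.
+ import torch
+ @torch.no_grad()
+ def trace_origin(tracers, x_adv, attacked_label, true_label):
+     """Return the index of the tracer with the largest DOL (Eq. 4)."""
+     dols = []
+     for tracer in tracers:
+         logits = tracer(x_adv.unsqueeze(0)).squeeze(0)
+         dols.append((logits[attacked_label] - logits[true_label]).item())
+     return max(range(len(dols)), key=lambda i: dols[i])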
+ 4
438
+ Experimental Results
439
+ 4.1
440
+ Implementation Details
441
+ In order to show the effectiveness of the proposed frame-
442
+ work, we perform the experiments on two network architec-
443
+ ture (ResNet18 (He et al. 2016) and VGG16 (Simonyan and
444
+ Zisserman 2014)) with two small image datasets (CIFAR10
445
+ (Krizhevsky, Hinton et al. 2009) of 10 classes and GTSRB
446
+ (Houben et al. 2013) of 43 classes) and two deeper network
447
+ architecture (ResNet50 and VGG19) with one big image
448
+ dataset (mini-ImageNet (Ravi and Larochelle 2016) of 100
449
+ classes). The main classifier C in experiments is trained for
450
+ 200 epochs. All the model training is implemented by Py-
451
+ Torch and executed on NVIDIA RTX 2080ti. For gradient
452
+ descent, Adam (Kingma and Ba 2015) with learning rate of
453
+ 1e-4 is applied as the optimization method.
454
+ 4.2
455
+ The Classification Accuracy of The Proposed
456
+ Architecture
457
+ The parameter with the greatest influence on the classification accu-
458
+ racy is the weight parameter α. α determines the partici-
459
+ pation ratio of Ti in final outputs. To investigate the influ-
460
+ ence of α, we change the value of α from 0 (baseline) to 0.2
461
+ and record the corresponding classification accuracy of each
462
+ task, the results are shown in Table 1.
463
+ It can be seen from Table 1 that for CIFAR10 and GTSRB,
464
+ increasing α barely decreases the accuracy of the
465
+ classification task. Compared with the baseline (α = 0), the
466
+ small value of α will keep the accuracy at the same level as
467
+ the baseline. But for mini-ImageNet, the accuracy decreases
468
+ more as α increases, which we believe is due to the complexity
+ of the classification task. Even so, the decrease rate
+ is still within 3% when α is no larger than 0.15.
471
+
472
483
+
484
+ α        CIFAR10                 GTSRB                   Mini-ImageNet
+          ResNet18    VGG16       ResNet18    VGG16       ResNet50    VGG19
+ 0        94.30%      93.68%      96.19%      97.59%      73.12%      75.79%
+ 0.05     94.24%      93.64%      96.14%      97.52%      72.32%      75.04%
+ 0.1      94.24%      93.63%      96.07%      97.36%      71.88%      74.96%
+ 0.15     94.07%      93.63%      95.72%      96.84%      70.50%      73.75%
+ 0.2      93.95%      93.57%      95.09%      95.52%      68.14%      71.75%
+ Table 1: The classification accuracy with different α.
529
+ 4.3
530
+ Traceability under different black-box attacks
531
+ It should be noted that the change of α will not only influ-
532
+ ence the accuracy but also affect the process of black-box
533
+ adversarial attack. Therefore, in order to explore the influ-
534
+ ence of α, the following experiments will be conducted with
535
+ α = 0.05, 0.1 and 0.15.
536
+ Setup and Code. To verify the traceability of the pro-
537
+ posed mechanism, we conduct experiments on two dis-
538
+ tributed models. We set one model as the source model Ms
539
+ to perform the adversarial attack and set the other model
540
+ as the victim model Mv. The goal is to test whether the
541
+ proposed scheme can effectively trace the source model
542
+ from the generated adversarial examples. The black-box at-
543
+ tack we choose is Boundary (Brendel, Rauber, and Bethge
544
+ 2018), HSJA (Chen, Jordan, and Wainwright 2020), QEBA
545
+ (Li et al. 2020) and SurFree (Maho, Furon, and Le Merrer
546
+ 2021). For Boundary (Brendel, Rauber, and Bethge 2018)
547
+ and HSJA (Chen, Jordan, and Wainwright 2020), we use Ad-
548
+ versarial Robustness Toolbox (ART) (Nicolae et al. 2018)
549
+ platform to conduct the experiments. For QEBA (Li et al.
550
+ 2020) and SurFree (Maho, Furon, and Le Merrer 2021), we
551
+ pull implementations from their respective GitHub reposito-
552
+ ries 5 6 with default parameters. For each α, each network
553
+ architecture, each dataset and each attack, we generate 1000
554
+ successful attacked adversarial examples of Ms and con-
555
+ duct the tracing experiment.
556
+ Evaluation Metrics. Traceability is evaluated by tracing
557
+ accuracy, which is calculated by:
558
+ Acc = Ncorrect / NAll    (5)
561
+ where Ncorrect indicates the number of correct-tracing sam-
562
+ ples and NAll indicates the total number of samples, which
563
+ is set as 1000 in the experiments.
564
+ The tracing performance of different attacks with different
565
+ settings is shown in Table 2. It can be seen that when apply-
566
+ ing ResNet-based architecture as the backbone of C, the trac-
567
+ ing accuracy is higher than 90%. Especially for α = 0.15,
568
+ most of the tracing accuracy is higher than 96%, which indi-
569
+ cates the effectiveness of the proposed mechanism. Besides,
570
+ for a different level of classification task and different attack-
571
+ ing methods, the tracing accuracy can stay at a high level,
572
+ which shows the great adaptability of the proposed scheme.
573
+ The influence of α. We can see from Table 2 that the trac-
574
+ ing accuracy increases with the increase of α. We conclude
575
+ 5QEBA:https://github.com/AI-secure/QEBA
576
+ 6SurFree:https://github.com/t-maho/SurFree
577
+ the reason as follows: α determines the participation rate of tracer
+ Ti in the final output logits, so a larger α makes the final de-
+ cision boundary rely more on T. Therefore, when α gets
580
+ larger, making DOL of T larger would be a better choice to
581
+ realize the adversarial attack. The bigger DOL of T will cer-
582
+ tainly lead to better tracing performance. To verify the cor-
583
+ rectness of the explanation, we show the distribution of DOL
584
+ for task “ResNet18-CIFAR10” with different attacks in Fig.
585
+ 4. We first generate 1000 adversarial examples of model Mi
586
+ for each α (0.05,0.1,0.15) with Boundary, HSJA, QEBA and
587
+ SurFree attack, then we record the DOLs of Ti. The distri-
588
+ bution of DOLs are shown in Fig. 4.
589
+ (a) The results of Boundary.
590
+ (b) The results of HSJA.
591
+ (c) The results of QEBA.
592
+ (d) The results of SurFree.
593
+ Figure 4: The distributions of output differences with differ-
594
+ ent black-box attacks.
595
+ It can be seen that compared with α = 0.05 and α = 0.1,
596
+ the DOL of α = 0.15 concentrates more on larger values,
+ which indicates that a larger α results in a larger DOL.
598
+ The influence of network architecture. The tracing re-
599
+ sults vary with different networks and different datasets.
600
+ With the same dataset, the tracing accuracy of ResNet18 will
601
+ be higher than that of VGG16. We attribute the reason to
602
+ the complexity of the model architecture. According to (Su
603
+ et al. 2018), compared with ResNet, the structure of VGG is
604
+ less robust, so a VGG-based C might be easier to attack adversari-
+ ally. Therefore, once C is attacked, there is a certain
606
+ probability that Ti is not attacked as we expected, so DOL of
607
+ Ti will not produce the expected features for tracing. Fortu-
608
+ nately, the network architecture can be designed by us, so in
609
+ practice, choosing a robust architecture would be better for
610
+ tracing.
611
+ The influence of classification task. In our experiments,
612
+ we test the classification task with different classes. It can
613
+ be seen that with the increase of classification task complex-
614
+ ity, traceability performance decreases slightly. But in most
615
+ cases, when α = 0.15, the traceability ability can still reach
616
+ more than 90%.
617
+ The influence of black-box attack. The mechanism of
618
+ the black-box attack greatly influences the tracing perfor-
619
+ mance. For Boundary attack(Brendel, Rauber, and Bethge
620
+
621
+ [Figure 4 histogram panels (a)-(d): numbers of samples vs output differences (DOL) for α = 0.05, 0.1 and 0.15; tick values omitted.]
+ Attack                 Boundary                  HSJA                      QEBA                      SurFree
+ α                      0.05    0.1     0.15      0.05    0.1     0.15      0.05    0.1     0.15      0.05    0.1     0.15
+ CIFAR10   ResNet18     98.1%   98.9%   99.2%     98.2%   99.1%   99.3%     99.6%   99.7%   99.7%     94.5%   95.7%   97.9%
+           VGG16        92.1%   95.6%   98.2%     92.3%   96.4%   97.9%     92.6%   96.6%   99.2%     64.2%   82.1%   87.8%
+ GTSRB     ResNet18     97.6%   97.6%   98.9%     97.6%   97.7%   98.7%     97.6%   97.7%   99.6%     89.8%   95.7%   96.8%
+           VGG16        94.1%   96.8%   97.6%     95.5%   97.3%   98.3%     86.3%   92.6%   95.0%     89.7%   95.7%   96.8%
+ mini-ImageNet ResNet50 96.2%   96.4%   98.7%     94.5%   95.5%   97.5%     91.7%   93.8%   95.4%     82.1%   87.3%   90.5%
+           VGG19        89.4%   94.7%   98.2%     93.4%   95.1%   95.4%     89.5%   90.4%   90.8%     75.7%   88.7%   88.8%
+ Table 2: The trace accuracy of different attacks.
769
+ 2018), HSJA(Chen, Jordan, and Wainwright 2020) and
770
+ QEBA(Li et al. 2020), the tracing accuracy shows similar
771
+ results, but for SurFree (Maho, Furon, and Le Merrer 2021),
772
+ the tracing accuracy will be worse than that of the other
773
+ attacks. The reason is that Boundary attack, HSJA(Chen,
774
+ Jordan, and Wainwright 2020), QEBA(Li et al. 2020) are
775
+ gradient-estimation-based attacks, which tries to use random
776
+ noise to estimate the gradient of the network and further
777
+ attack along the gradient. Since the gradient is highly re-
778
+ lated to Ti, such attacks are more likely to be trapped by
779
+ Ti. But SurFree(Maho, Furon, and Le Merrer 2021) is at-
780
+ tacking based on geometric characteristics of the boundary,
781
+ which may ignore the trap of Ti especially when α is small.
782
+ So compared with Boundary attack(Brendel, Rauber, and
783
+ Bethge 2018), HSJA(Chen, Jordan, and Wainwright 2020)
784
+ and QEBA(Li et al. 2020), the proposed mechanism may
785
+ get worse performance when facing SurFree(Maho, Furon,
786
+ and Le Merrer 2021) attack.
787
+ 4.4
788
+ The influence of distributed copy numbers
789
+ In this section, we will discuss the traceability of the algo-
790
+ rithm in multiple distributed copies. When training tracer Ti,
791
+ the parameter is randomly initialized and each Ti is trained
792
+ independently. So the distribution of DOL corresponding to
793
+ any two branches should be independent and identically
+ distributed. Therefore, the traceability results of multiple
795
+ copies could be calculated from the results of two copies.
796
+ In order to verify the correctness, we perform the following
797
+ experiments.
798
+ For experiment verification, we trained 10 different Ti
799
+ first, then we randomly choose one Ms as the source model
800
+ to generate the adversarial examples. We record the tracing
801
+ performance on the n, n ∈ [2, 10] models.
802
+ To estimate the tracing results for n, n ∈ [2, 10] models,
803
+ we utilize the Monte-Carlo sampling method in the distri-
804
+ bution of two models’ DOL. The specific procedure is de-
805
+ scribed as:
806
+ 1). We randomly choose one source model Ms and one
807
+ other victim model Mv as the fundamental models, then we
808
+ perform the black-box attack on Ms with 1000 different im-
809
+ ages and record the DOL of Ts and Tv.
810
+ 2). We draw the distribution of DOL corresponding to Ts
811
+ and Tv as the basic distribution, denoted as Ds and Dv, as
812
+ shown in Fig. 5a- 5c.
813
+ 3). For the tracing results of n, n ∈ [2, 10] models, we
814
+ conduct the sampling process (take one sample Ss from Ds
815
+ and n − 1 samples S_v^(n−1) from Dv) 10000 times.
+ 4) For each sampling, if Ss > max(S_v^(n−1)), we consider
821
+ it as a correct tracing sample. We record the total number
822
+ of correct tracings N_C^n over the 10000 samplings. The final tracing
+ accuracy for n models is then calculated as N_C^n / 10000.
826
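+ The Monte-Carlo estimate described above can be sketched as follows, assuming
+ the recorded DOL values of the source and victim tracers are available as
+ arrays; numpy is used here for brevity and the function name is our own.
+ import numpy as np
+ def estimate_tracing_accuracy(dol_source, dol_victim, n, trials=10000, seed=0):
+     """Estimate tracing accuracy for n copies from two empirical DOL samples."""
+     rng = np.random.default_rng(seed)
+     correct = 0
+     for _ in range(trials):
+         s = rng.choice(dol_source)                # one draw from D_s
+         v = rng.choice(dol_victim, size=n - 1)    # n - 1 draws from D_v
+         if s > v.max():                           # correct if S_s > max(S_v^(n-1))
+             correct += 1
+     return correct / trials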
+ The results are shown in Fig. 5d-5f. The attack we choose
827
+ is HSJA(Chen, Jordan, and Wainwright 2020), and α is fixed
828
+ as 0.15. It can be seen that with the increasing number of
829
+ distributed copies, the tracing accuracy gradually decreases.
830
+ But with 10 branches, it can still maintain more than 90%
831
+ accuracy for CIFAR10 and GTSRB. Besides, the estimated
832
+ tracing performance is almost the same as the actual experi-
833
+ ment results, which indicates the correctness of our analysis.
834
+ 5
835
+ Discussion
836
+ 5.1
837
+ The importance of noise-sensitive loss
838
+ In the proposed mechanism, making Ti easier to be attacked
839
+ is the key for tracing. We design the noise-sensitive loss to
840
+ meet the requirement. In this section, experiments will be
841
+ conducted to show the importance of noise-sensitive loss.
842
+ We use two randomly initialized tracers as the comparison
843
+ to conduct the tracing experiment on 1000 adversarial im-
844
+ ages. The adversarial attack is set as HSJA(Chen, Jordan,
845
+ and Wainwright 2020), α is fixed as 0.15. The experimental
846
+ results are shown in Table 3.
847
+ Attack      CIFAR10                 GTSRB                   mini-ImageNet
+             ResNet18    VGG16       ResNet18    VGG16       ResNet50    VGG19
+ Random      57.9%       62.4%       53.9%       57.0%       56.2%       59.8%
+ Proposed    99.3%       97.9%       98.7%       98.3%       97.5%       95.4%
+ Table 3: The trace accuracy of HSJA attack with different T.
872
+ It can be seen that without noise-sensitive loss, the trac-
873
+ ing accuracy of the randomly initialized tracer only reaches about
+ 60%, which is much lower than that of the proposed noise-sensitive
+ tracer. This indicates that the noise-sensitive loss is very impor-
+ tant for accurate tracing: merely giving each tracer different pa-
+ rameters is not enough to trap the attack into producing
+ specific features.
879
+ 5.2
880
+ Non-transferability and traceability
881
+ The concept of traceability is related but not equivalent
882
+ to non-transferability. A non-transferable adversarial exam-
883
+ ple works only on the victim model it is generated from.
884
+ Therefore, tracing such non-transferable example may be
885
+ a straightforward task. On the other hand, a transferable
886
+ sample may be generic enough to work on many copies/-
887
+ models. The task of tracing becomes more meaningful in
888
+
889
+ (a) The distribution of CIFAR10.
890
+ (b) The distribution of GTSRB.
891
+ (c) The distribution of mini-ImageNet.
892
+ (d) The tracing results of CIFAR10.
893
+ (e) The tracing results of GTSRB.
894
+ (f) The tracing results of mini-ImageNet.
895
+ Figure 5: The distribution of DOL with HSJA and ResNet backbone and tracing performance of multiple branches.
896
+ this scenario. Our ability to trace a non-transferable exam-
897
+ ple demonstrates that the process of adversarial attack intro-
898
+ duces distinct traceable features which are unique to each
899
+ victim model. In this sense, traceability can serve as a fail-
900
+ safe property in defending adversarial attacks. There are
901
+ many defense methods that can achieve non-transferability, but
902
+ once the defense fails, the model will not be effectively pro-
903
+ tected. But our experimental results show that for the pro-
904
+ posed method, even if the defense fails, we still have a cer-
905
+ tain probability to trace the attacked model, as shown in
906
+ Table 4. We use the data of “ResNet-CIFAR10” task with
907
+ HSJA (Chen, Jordan, and Wainwright 2020) and QEBA (Li
908
+ et al. 2020) as examples to show the specific tracing results.
909
+ Attack   α       NTr     NTr(+)   Tr      Tr(+)   Tr Rate    Total Rate
+ HSJA     0.05    672     672      328     313     95.43%     98.50%
+          0.1     973     973      27      19      70.37%     99.20%
+          0.15    993     993      7       0       0%         99.30%
+ QEBA     0.05    840     840      160     156     97.50%     99.60%
+          0.1     879     879      121     118     97.52%     99.70%
+          0.15    859     859      141     138     97.87%     99.7%
+ Table 4: The trace accuracy of different attacks.
962
+ In Table 4, NTr and Tr indicate the number of non-
963
+ transferable samples and transferable samples respectively.
+ NTr(+) and Tr(+) indicate the number of successfully traced
+ samples. We can see that for QEBA with α = 0.05, 0.1, and
+ 0.15, the traceability of transferable samples stays at
+ a high level, greater than 97%. As for HSJA, when
968
+ α = 0.05, 328 samples can be transferred, and the trace-
969
+ ability of transferable examples reaches 95.43%. When
+ α = 0.15, although the traceability of transferable exam-
+ ples decreases to 0%, only 7 samples are transferable. So
972
+ the total tracing rate is still at a high level. In general, the pro-
973
+ posed method either guarantees the high non-transferability
974
+ or the high tracing accuracy for transferred samples.
975
+ 5.3
976
+ Limitations and adaptive attacks
977
+ Although the proposed system maintains certain traceability
978
+ in the buyers-seller setting, there are still some limitations
979
+ that need to be addressed. For example, once the attacker
980
+ finds a way to attack C and bypass Ti, the tracing perfor-
981
+ mance may degrade. But we found that attacking such sys-
982
+ tem could be a challenging topic itself (in our setting) as
983
+ the attackers do not have access to all other copies and thus
984
+ are unable to avoid the differences that our tracer exploits.
985
+ Besides, it seems a more adaptive attack also comes with
986
+ “cost”. For instance, the approach of attacking C and by-
987
+ passing Ti would degrade the visual quality of the attack.
988
+ So future work may focus on how to counter such adaptive attacks by
+ exploiting this “cost”.
990
+ 6
991
+ Conclusion
992
+ This paper researches a new aspect of defending against ad-
993
+ versarial attacks, namely the traceability of adversarial attacks.
994
+ The techniques derived could aid forensic investigation of
995
+ known attacks, and provide deterrence to future attacks in
996
+ the buyers-seller setting. As for the mechanism, we de-
997
+ sign a framework which contains two related components
998
+ (model separation and origin tracing) to realize traceabil-
999
+ ity. For model separation, we propose a parallel network
1000
+ structure which pairs a unique tracer with the original classi-
1001
+ fier and a noise-sensitive training loss. The tracer model injects
1002
+ the unique features and ensures the differences between dis-
1003
+ tributed models. As for origin tracing, we design an output-
1004
+ logits-based tracing mechanism. Based on this, the traceabil-
1005
+ ity of the attacked models can be realized when obtaining
1006
+
1007
+ [Figure 5 panels: DOL distributions for source vs non-source tracers (CIFAR10, GTSRB, mini-ImageNet) and tracing accuracy (%) vs number of distributed models for ResNet- and VGG-based backbones; plot data omitted.]
+ the adversarial examples. The experiment of multi-dataset
1083
+ and multi-network model shows that it is possible to achieve
1084
+ traceability through the adversarial examples.
1085
+ References
1086
+ Boenisch, F. 2020. A survey on model watermarking neural
1087
+ networks. arXiv preprint arXiv:2009.12153.
1088
+ Brendel, W.; Rauber, J.; and Bethge, M. 2018. Decision-
1089
+ Based Adversarial Attacks: Reliable Attacks Against Black-
1090
+ Box Machine Learning Models. In International Conference
1091
+ on Learning Representations, ICLR 2018.
1092
+ Carlini, N.; and Wagner, D. 2017. Towards evaluating the
1093
+ robustness of neural networks. In 2017 IEEE Symposium on
1094
+ Security and Privacy (S&P), 39–57. IEEE.
1095
+ Chen, J.; Jordan, M. I.; and Wainwright, M. J. 2020. Hop-
1096
+ skipjumpattack: A query-efficient decision-based attack. In
1097
+ 2020 IEEE Symposium on Security and Privacy (S&P),
1098
+ 1277–1294. IEEE.
1099
+ Chen, P.-Y.; Sharma, Y.; Zhang, H.; Yi, J.; and Hsieh, C.-J.
1100
+ 2018. Ead: elastic-net attacks to deep neural networks via
1101
+ adversarial examples. In Thirty-second AAAI conference on
1102
+ artificial intelligence.
1103
+ Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; and Hsieh, C.-
1104
+ J. 2017. Zoo: Zeroth order optimization based black-box
1105
+ attacks to deep neural networks without training substitute
1106
+ models. In Proceedings of the 10th ACM workshop on Arti-
1107
+ ficial Intelligence and Security, 15–26.
1108
+ Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explain-
1109
+ ing and harnessing adversarial examples.
1110
+ arXiv preprint
1111
+ arXiv:1412.6572.
1112
+ Gu, S.; and Rigazio, L. 2014. Towards deep neural network
1113
+ architectures robust to adversarial examples. arXiv preprint
1114
+ arXiv:1412.5068.
1115
+ Guo, C.; Gardner, J.; You, Y.; Wilson, A. G.; and Wein-
1116
+ berger, K. 2019. Simple black-box adversarial attacks. In In-
1117
+ ternational Conference on Machine Learning, 2484–2493.
1118
+ PMLR.
1119
+ He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual
1120
+ learning for image recognition. In Proceedings of the IEEE
1121
+ Conference on Computer Vision and Pattern Recognition,
1122
+ 770–778.
1123
+ Hinton, G.; Vinyals, O.; and Dean, J. 2015.
1124
+ Distill-
1125
+ ing the knowledge in a neural network.
1126
+ arXiv preprint
1127
+ arXiv:1503.02531.
1128
+ Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; and
1129
+ Igel, C. 2013. Detection of traffic signs in real-world im-
1130
+ ages: The German Traffic Sign Detection Benchmark. In
1131
+ The 2013 International Joint Conference on Neural Net-
1132
+ works (IJCNN), 1–8. Ieee.
1133
+ Kingma, D. P.; and Ba, J. 2015.
1134
+ Adam: A Method for
1135
+ Stochastic Optimization. In 3rd International Conference
1136
+ on Learning Representations, ICLR 2015.
1137
+ Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple
1138
+ layers of features from tiny images.
1139
+ Kurakin, A.; Goodfellow, I.; and Bengio, S. 2016.
1140
+ Ad-
1141
+ versarial machine learning at scale.
1142
+ arXiv preprint
1143
+ arXiv:1611.01236.
1144
+ Li, H.; Xu, X.; Zhang, X.; Yang, S.; and Li, B. 2020. Qeba:
1145
+ Query-efficient boundary-based blackbox attack.
1146
+ In Pro-
1147
+ ceedings of the IEEE/CVF Conference on Computer Vision
1148
+ and Pattern Recognition, 1221–1230.
1149
+ Maho, T.; Furon, T.; and Le Merrer, E. 2021.
1150
+ SurFree:
1151
+ a fast surrogate-free black-box attack.
1152
+ In Proceedings of
1153
+ the IEEE/CVF Conference on Computer Vision and Pattern
1154
+ Recognition, 10430–10439.
1155
+ Memon, N.; and Wong, P. W. 2001. A buyer-seller water-
1156
+ marking protocol. IEEE Transactions on image processing,
1157
+ 10(4): 643–649.
1158
+ Meng, D.; and Chen, H. 2017. Magnet: a two-pronged de-
1159
+ fense against adversarial examples. In Proceedings of the
1160
+ 2017 ACM SIGSAC Conference on Computer and Commu-
1161
+ nications Security, 135–147.
1162
+ Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016.
1163
+ Deepfool: a simple and accurate method to fool deep neural
1164
+ networks. In Proceedings of the IEEE Conference on Com-
1165
+ puter Vision and Pattern Recognition, 2574–2582.
1166
+ Nicolae, M.-I.; Sinn, M.; Tran, M. N.; Buesser, B.; Rawat,
1167
+ A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.;
1168
+ Ludwig, H.; et al. 2018. Adversarial Robustness Toolbox
1169
+ v1. 0.0. arXiv preprint arXiv:1807.01069.
1170
+ Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik,
1171
+ Z. B.; and Swami, A. 2016. The limitations of deep learning
1172
+ in adversarial settings. In 2016 IEEE European Symposium
1173
+ on Security and Privacy (EuroS&P), 372–387. IEEE.
1174
+ Ravi, S.; and Larochelle, H. 2016. Optimization as a model
1175
+ for few-shot learning.
1176
+ Simonyan, K.; and Zisserman, A. 2014. Very deep convo-
1177
+ lutional networks for large-scale image recognition. arXiv
1178
+ preprint arXiv:1409.1556.
1179
+ Su, D.; Zhang, H.; Chen, H.; Yi, J.; Chen, P.-Y.; and Gao, Y.
1180
+ 2018. Is Robustness the Cost of Accuracy?–A Comprehen-
1181
+ sive Study on the Robustness of 18 Deep Image Classifica-
1182
+ tion Models. In Proceedings of the European Conference on
1183
+ Computer Vision (ECCV), 631–648.
1184
+ Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan,
1185
+ D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing proper-
1186
+ ties of neural networks. In 2nd International Conference on
1187
+ Learning Representations, ICLR 2014.
1188
+ Zhang, J.; Tann, W. J.-W.; and Chang, E.-C. 2021. Mitigat-
1189
+ ing Adversarial Attacks by Distributing Different Copies to
1190
+ Different Users. arXiv preprint arXiv:2111.15160.
1191
+
1tAzT4oBgHgl3EQfRfvj/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
29FKT4oBgHgl3EQfQS0h/content/2301.11766v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5663ad1a3e5f1d7895eb65700d50c7c281d53c232e8000b2a8f8df46b53b0b9
3
+ size 4536833
29FKT4oBgHgl3EQfQS0h/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22e7b62f2f678eff3d25745072ceb15bc1a19991a724d93bc1e83e646bd7e6dc
3
+ size 5111853
29FST4oBgHgl3EQfYTgs/content/2301.13787v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:788aac1edc0676c4a4e027bd4be9959e11f8af7fef7ce6c38514d680afbcc98d
3
+ size 1886140
29FST4oBgHgl3EQfYTgs/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e601af831f5a61fe599a5646571b689d57a76da7bf60cab9a5c8fe8ab7932f5
3
+ size 163503
2dE2T4oBgHgl3EQf5gh4/content/tmp_files/2301.04191v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
2dE2T4oBgHgl3EQf5gh4/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
3NE0T4oBgHgl3EQfuwHF/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d3c9f861210349e6372cd2223f7152f974ec7defb0ded916dab1e093a7d98fa
3
+ size 2555949
3dFQT4oBgHgl3EQf3TZi/content/2301.13427v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85723d8fa63c90c7421d2871b278f9f2a57ea26e01d2d3843e50a46482e056ee
3
+ size 331822
3dFQT4oBgHgl3EQf3TZi/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f91a32407385f0691f1f4f5c35b4d9fe20f4445b28c62d39aa15a58d4f772780
3
+ size 232675
4tE4T4oBgHgl3EQf1A0X/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f19c14b0469548d244521f232623bd74ae34cea89a60666a1f9dfd63cba1653a
3
+ size 113067
5tAzT4oBgHgl3EQfu_3x/content/2301.01701v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2b1bf4ceb07993c3dd5838e9a6310756563e4040dc067b682c6a6fa4bf3734ec
3
+ size 840595
5tAzT4oBgHgl3EQfu_3x/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efc57992563b00c5fb1b27ff69baaccc0521a645dc7d5b425ef0c9a4e6a5d78c
3
+ size 4194349
6tFKT4oBgHgl3EQfTy2d/content/2301.11781v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9ed3b7b893c2b2aa6830d5168ecd27a5445b901ed01186356f7b4eb0b562946
3
+ size 2606381
6tFKT4oBgHgl3EQfTy2d/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa4056c57f85f8ffc546cf1860abd6ce75222b1c88a262bd80db422becdc4631
3
+ size 5242925
6tFKT4oBgHgl3EQfTy2d/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b82aa717b4eb725ffce6e80f6d86f80831aee086aa70f7c457e253ede8ac889
3
+ size 186648
79A0T4oBgHgl3EQfOf-b/content/2301.02162v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc019fdfdf2b79d7ce2c2a2cbe0da4343ccad200850ab4911abc8d888fa9870b
3
+ size 257110
79A0T4oBgHgl3EQfOf-b/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1696af756c4d6857a0966385cd4a56ca13a1cd22288e3cc94a06c74db9784f16
3
+ size 125508
7NE3T4oBgHgl3EQfqApm/content/2301.04647v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f65f589fc6db54b9095778d3257d543a9c4db9061a3f4f8dac297d23607b595
3
+ size 22951452
7NE3T4oBgHgl3EQfqApm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:784b62e271a055e41088ec7c6db120d7a6918ccc4ced2277ad61cb2f2bcb3b9b
3
+ size 6094893
8tE2T4oBgHgl3EQf8Qgm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba0e01baf85bce7f6e39b55fa84a6ba5bc6d75acd22bff9ecd1d6c3ad1f887b5
3
+ size 3670061
8tFRT4oBgHgl3EQfpzcC/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8ddc28c6b5c76942aabf313989c932da3118213b661426df13b54fe636b2845
3
+ size 98782
99E1T4oBgHgl3EQf8QUL/content/tmp_files/2301.03542v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
99E1T4oBgHgl3EQf8QUL/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9E1T4oBgHgl3EQfpAX1/content/2301.03328v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:565b0f71d8fe8c09073a97277847b6757666618cdf4c18fb427ef33529c4b377
3
+ size 648515
A9E1T4oBgHgl3EQfpAX1/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49c8df9211a6dcb70e26a305e2e8378382c47c41ab46db8b416be7ef43713d55
3
+ size 3342381
A9E1T4oBgHgl3EQfpAX1/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5124fd5f6e51dc43feab048332a4037ab7de88c91d204405a30f76e4bf92b8be
3
+ size 114432
ANFIT4oBgHgl3EQf-iyR/content/2301.11411v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1635141ad0f31e68177423da379a1988104a383464d5f0205217c77ee515326d
3
+ size 4577337
AtE1T4oBgHgl3EQf9Ab6/content/2301.03553v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7d232508c4c17d108227a8b0f01ea84fe6d13e3acaf167ded97d0abe210e733
3
+ size 2575394
AtE1T4oBgHgl3EQf9Ab6/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0bc4fe7e1d4ed4a24f767ae4fda7b2db269411f18945c8ca443eeb9384404f25
3
+ size 3866669
AtE4T4oBgHgl3EQfEwyK/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:12fb36b0f496af213e5bd0829919ea050d041b1040ca95a4a539d7d14a6bf0eb
3
+ size 1441837
AtFIT4oBgHgl3EQf_CxR/content/tmp_files/2301.11413v1.pdf.txt ADDED
@@ -0,0 +1,1250 @@
 
 
 
 
1
+ Springer Nature 2021 LATEX template
2
+ A massive quiescent galaxy at redshift 4.658
3
+ Adam C. Carnall1*, Ross J. McLure1, James S. Dunlop1, Derek J.
4
+ McLeod1, Vivienne Wild2, Fergus Cullen1, Dan Magee3, Ryan
5
+ Begley1, Andrea Cimatti4,5, Callum T. Donnan1, Massissilia L.
6
+ Hamadouche1, Sophie M. Jewell1 and Sam Walker1
7
+ 1Institute for Astronomy, School of Physics & Astronomy, University of
8
+ Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ, UK.
9
+ 2School of Physics & Astronomy, University of St Andrews, North
10
+ Haugh, St Andrews, KY16 9SS, UK.
11
+ 3Department of Astronomy and Astrophysics, UCO/Lick Observatory,
12
+ University of California, Santa Cruz, CA 95064, USA.
13
+ 4Department of Physics and Astronomy (DIFA), University of Bologna,
14
+ Via Gobetti 93/2, I-40129, Bologna, Italy.
15
+ 5INAF, Osservatorio di Astrofisica e Scienza dello Spazio, Via Piero
16
+ Gobetti 93/3, I-40129, Bologna, Italy.
17
+ *Corresponding author email: [email protected]
18
+ Abstract
19
+ We report the spectroscopic confirmation of a massive quiescent galaxy,
20
+ GS-9209 at a new redshift record of z = 4.658, just 1.25 Gyr after
21
+ the Big Bang, using new deep continuum observations from JWST NIR-
22
+ Spec. From our full-spectral-fitting analysis, we find that this galaxy
23
+ formed its stellar population over a ≃ 200 Myr period, approximately
24
+ 600 − 800 Myr after the Big Bang (zform = 7.3 ± 0.2), before quench-
25
+ ing at zquench = 6.7 ± 0.3. GS-9209 demonstrates unambiguously that
26
+ massive galaxy formation was already well underway within the first bil-
27
+ lion years of cosmic history, with this object having reached a stellar
28
+ mass of log10(M∗/M⊙) > 10.3 by z = 7. This galaxy also clearly
31
+ demonstrates that the earliest onset of galaxy quenching was no later
32
+ than ≃ 800 Myr after the Big Bang. We estimate the iron abundance
33
+ and α-enhancement of GS-9209, finding [Fe/H] = −0.97 (+0.06/−0.07) and
+ [α/Fe] = 0.67 (+0.25/−0.15), suggesting the stellar mass vs iron abundance rela-
37
+ tion at z ≃ 7, when this object formed most of its stars, was ≃ 0.4 dex
38
+ lower than at z ≃ 3.5. Whilst its spectrum is dominated by stellar emis-
39
+ sion, GS-9209 also exhibits broad Hα emission, indicating that it hosts
40
+ an active galactic nucleus (AGN), for which we measure a black-hole
41
+ 1
42
+ arXiv:2301.11413v1 [astro-ph.GA] 26 Jan 2023
43
+
44
47
+ mass of log10(M•/M⊙) = 8.7 ± 0.1. Although large-scale star forma-
48
+ tion in GS-9209 has been quenched for almost half a billion years, the
49
+ significant integrated quantity of accretion implied by this large black-
50
+ hole mass suggests AGN feedback plausibly played a significant role
51
+ in quenching star formation in this galaxy. GS-9209 is also extremely
52
+ compact, with an effective radius of just 215 ± 20 parsecs. This intrigu-
53
+ ing object offers perhaps our deepest insight yet into massive galaxy
54
+ formation and quenching during the first billion years of cosmic history.
55
+ 1 Summary
56
+ The discovery of massive galaxies with old stellar populations at early cosmic
57
+ epochs has historically acted as a key constraint on models for both galaxy for-
58
+ mation physics and cosmology [1–4]. Today, the extremely rapid assembly of
59
+ the earliest galaxies during the first billion years of cosmic history continues to
60
+ challenge our understanding of galaxy formation physics [5, 6]. The advent of
61
+ the James Webb Space Telescope (JWST) has exacerbated this issue by con-
62
+ firming the existence of galaxies in significant numbers as early as the first few
63
+ hundred million years [7–9]. Perhaps even more surprisingly, in some galaxies,
64
+ this initial highly efficient star formation rapidly shuts down, or quenches, giv-
65
+ ing rise to massive quiescent galaxies as little as ∼ 1.5 billion years after the
66
+ Big Bang, at redshifts up to z ≃ 4 [4, 10]. Due to their faintness and red colour,
67
+ it has proven extremely challenging to learn about these extreme quiescent
68
+ galaxies, or to confirm whether any exist at earlier times. Here, we report the
69
+ spectroscopic confirmation of a quiescent galaxy, GS-9209, at a new redshift
70
+ record of 4.658, just 1.25 billion years after the Big Bang, using the NIRSpec
71
+ instrument on JWST. The transformative power of JWST allows us to char-
72
+ acterise the physical properties of this early massive galaxy in unprecedented
73
+ detail. GS-9209 has a stellar mass of M∗ = 4.1 ± 0.2 × 1010 M⊙, and quenched
74
+ star formation at z = 6.7 ± 0.3, when the Universe was ≃ 800 million years
75
+ old. This intriguing object offers perhaps our deepest insight yet into massive
76
+ galaxy formation and quenching during the first billion years of cosmic history.
77
+ 2 Results
78
+ GS-9209 was first highlighted in the early 2000s as an object with red optical
79
+ to near-infrared colours and a photometric redshift of z ≃ 4.5 [11]. An optical
80
+ spectrum was taken in the mid-2010s as part of the VIMOS Ultra Deep Sur-
81
+ vey (VUDS) [12], showing tentative evidence for a Lyman break at λ ≃ 7000˚A,
82
+ but no Lyman α emission. During the past 5 years, several studies have iden-
83
+ tified GS-9209 as a candidate high-redshift massive quiescent galaxy [13, 14],
84
+ based on its blue colours at wavelengths λ = 2 − 8µm and non-detection at
85
+ millimetre wavelengths [15]. GS-9209 is also not detected in X-rays [16], at
86
+ radio wavelengths [17], or at λ = 24µm [18]. The faint, red nature of the source
87
+ (with magnitudes HAB = 24.7 and KAB = 23.6) means that near-infrared
88
+ spectroscopy with ground-based instrumentation is prohibitively expensive.
89
+ [Figure 1 plot: fλ / 10−19 erg s−1 cm−2 ˚A−1 versus observed wavelength (2.0 − 5.0 µm; rest-frame 0.4 − 0.8 µm), showing the F170LP + G235M and F290LP + G395M spectra, with an inset of the 2.0 − 2.6 µm region marking the Ca k, Ca h and [O ii] features and the fitted model.]
136
+ Fig. 1 JWST NIRSpec observations of GS-9209. Data were taken using the G235M and
137
+ G395M gratings (R = 1000), providing wavelength coverage from λ = 1.7 − 5.1µm. The
138
+ galaxy is at z = 4.658, and exhibits extremely deep Balmer absorption lines, similar to lower
139
+ redshift post-starburst galaxies, clearly indicating this galaxy experienced a significant, rapid
140
+ drop in star-formation rate (SFR) within the past few hundred million years. The spectral
141
+ region from λ = 2.6 − 4.0µm, containing Hβ and Hα, is shown at a larger scale in Fig. 2.
142
+ 2.1 Spectroscopic data
143
+ On 16th November 2022, we obtained medium-resolution spectroscopy (R =
144
+ λ/∆λ = 1000) through the JWST NIRSpec fixed slit, integrating for 3 hours
145
+ with the G235M grating and 2 hours with the G395M grating, providing con-
146
+ tinuous wavelength coverage from λ = 1.7 − 5.1µm. These data, shown in
147
+ Fig. 1, reveal a full suite of extremely deep Balmer absorption features, from
148
+ which we measure a spectroscopic redshift of 4.6582 ± 0.0002, consistent with
149
+ previous photometric data and the VUDS spectrum. The spectrum strongly
150
+ resembles that of an A-type star, and is reminiscent of lower-redshift post-
151
+ starburst galaxies [19–21], with a Hδ equivalent width (EW), as measured by
152
+ the HδA Lick index, of 7.9 ± 0.3˚A, comparable to the most extreme values
153
+ observed in the local Universe [22]. These spectral features strongly indicate
154
+ this galaxy has undergone a sharp decline in star-formation rate (SFR) during
155
+ the preceding few hundred Myr.
156
+ The observed continuum is relatively smooth, as is the case for A-type
157
+ stars, with only two clearly detected metal absorption features: the Ca k line
158
+ at 3934˚A and the Na d feature at 5895˚A. The Ca h line at 3969˚A is blended
159
+ with the much stronger Hϵ Balmer line. The spectrum exhibits only the merest
160
+ suspicion of [O ii] 3727˚A and [O iii] 4959˚A, 5007˚A emission, and no apparent
161
+ infilling of Hβ or any of the higher-order Balmer absorption lines. However,
162
+ as can be seen in Fig. 2, both Hα and [N ii] 6584˚A are clearly, albeit weakly,
163
+ detected in emission, with Hα also exhibiting an obvious broad component.
164
+ This broad component, along with the relative strength of [N ii] compared
165
+ with the narrow Hα line indicate the presence of an accreting supermassive
166
+ [Figure 2 plot: fλ / 10−19 erg s−1 cm−2 ˚A−1 versus observed wavelength (2.6 − 4.0 µm; rest-frame 0.50 − 0.70 µm), showing the observed fluxes and flux errors with the Bagpipes full fitted model, the Bagpipes AGN component and the narrow-line model overlaid; the Mg i, Na d, [N ii], Fe i and [O iii] features are marked.]
205
+ Fig. 2 JWST NIRSpec observations of GS-9209: zoom in on Hβ and Hα. Data are shown
206
+ in blue, with their associated uncertainties visible at the bottom in purple. The full Bagpipes
207
+ fitted model is shown in black, with the AGN component shown in red. The narrow Hα and
208
+ [N ii] lines were masked during the Bagpipes fitting process, and subsequently fitted with
209
+ Gaussian functions, shown in green. Key emission and absorption features are also marked.
210
+ black hole: an active galactic nucleus (AGN). However, the extreme EWs of
211
+ the observed Balmer absorption features indicate that the continuum emission
212
+ must be strongly dominated by the stellar component. Nevertheless, the AGN
213
+ contribution to GS-9209 must be carefully modelled when fitting the spectrum
214
+ of this source to extract reliable stellar population properties (see Section 4.3).
215
+ 2.2 Full spectral fitting
216
+ To measure the stellar population properties of GS-9209, we perform full spec-
217
+ trophotometric fitting using the Bagpipes code. Full details of the methodology
218
+ we employ are given in Section 4.3. Briefly, we combine our spectroscopic
219
+ data with previously available CANDELS photometry, as well as new JWST
220
+ NIRCam medium-band imaging in 5 filters from the Ultra Deep Field
221
+ Medium-Band Survey (Programme ID: 1963; PI: Williams). We first mask the
222
+ wavelengths corresponding to [O ii], [O iii], narrow Hα and [N ii], due to likely
223
+ AGN contributions. We discuss the properties of these lines and their likely
224
+ origin in Section 2.5. We then fit a 22-parameter model for the stellar, dust,
225
+ nebular and AGN components, as well as spectrophotometric calibration.
226
+ The resulting posterior median model is shown in black in Figs 1 and 2. We
227
+ obtain a stellar mass of log10(M∗/M⊙) = 10.61±0.02, under the assumption of
228
+ a Kroupa initial mass function (IMF) [23]. We additionally recover a very low
229
+ level of dust attenuation, with AV = 0.04 +0.05/−0.03. The SFR we measure averaged
231
+ over the past 100 Myr is consistent with zero, with a very stringent upper
232
+ bound, though this is largely a result of our chosen star-formation history
233
+ (SFH) parameterisation [24]. We report a more-realistic upper bound on the
234
+ SFR in Section 2.5 based on the narrow Hα line.
235
+ [Figure 3 plot: two panels showing the SFR and stellar mass (both on logarithmic scales) versus the age of the Universe (0 − 1.2 Gyr; redshift 30 − 5), annotated with SFRpeak = 530 +840/−310 M⊙ yr−1 and tform = 0.71 Gyr, and with the Labbe et al. (2022) candidates shown in the right panel.]
285
+ Fig. 3 The star-formation history of GS-9209. The SFR as a function of time is shown in
286
+ the left panel, with the stellar mass as a function of time shown in the right panel. The blue
287
+ lines show the posterior medians, with the darker and lighter shaded regions showing the 1σ
288
+ and 2σ confidence intervals respectively. We find a formation redshift, zform = 7.3 ± 0.2 and
289
+ a quenching redshift, zquench = 6.7 ± 0.3. The sample of massive z ≃ 8 galaxy candidates
290
+ from JWST CEERS reported by [7] is also shown in the right panel, demonstrating that
291
+ these candidates are plausible progenitors for GS-9209.
292
+ 2.3 Star-formation history
293
+ The star-formation history (SFH) we recover is shown in Fig. 3. We find that
294
+ GS-9209 formed its stellar population largely during a ≃ 200 Myr period, from
295
+ around 600 − 800 Myr after the Big Bang (z ≃ 7 − 8). We recover a mass-
296
+ weighted mean formation time, tform = 0.71 +0.03/−0.02 Gyr after the Big Bang,
298
+ corresponding to a formation redshift, zform = 7.3 ± 0.2. This is the redshift
299
+ at which GS-9209 would have had half its current stellar mass, approximately
300
+ log10(M∗/M⊙) = 10.3. We find that GS-9209 quenched (which we define as
301
+ the time at which its sSFR fell below 0.2 divided by the Hubble time, e.g.,
302
+ [25]) at time tquench = 0.79 +0.06/−0.04 Gyr after the Big Bang, corresponding to a
304
+ quenching redshift, zquench = 6.7 ± 0.3.
305
+ Our model predicts that the peak historical SFR for GS-9209 (at approx-
306
+ imately zform) was within the range SFRpeak = 530 +840/−310 M⊙ yr−1. This is
308
+ similar to the SFRs of bright submillimetre galaxies (SMGs). The number den-
309
+ sity of SMGs with SFR > 300 M⊙ yr−1 at 5 < z < 6 has been estimated to
310
+ be ≃ 3×10−6 Mpc−3 [26]. Extrapolation then suggests that the SMG number
311
+ density at z ≃ 7 is ≃ 1 × 10−6 Mpc−3, which equates to ≃ 1 SMG at z ≃ 7
312
+ over the ≃ 400 square arcmin area from which GS-9209 and one other z > 4
313
+ quiescent galaxy were selected [14]. This broadly consistent number density
314
+ suggests it is entirely plausible that GS-9209 went through a SMG phase at
315
+ z ≃ 7, shortly before quenching.
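+ This number-density argument can be checked with a short calculation. The
+ sketch below assumes an illustrative flat ΛCDM cosmology (H0 = 70 km s−1 Mpc−1,
+ Ωm = 0.3) and an assumed redshift shell of 6.5 < z < 7.5; it is intended only to
+ show that ≃ 1 × 10−6 Mpc−3 over ≃ 400 square arcmin corresponds to of order
+ one object, and is not the exact calculation performed here.
+ # Minimal sketch: expected number of SFR > 300 Msun/yr SMGs at z ~ 7 in a
+ # 400 square-arcminute field, for an extrapolated number density of
+ # ~1e-6 Mpc^-3 (values quoted in the text; cosmology and shell width assumed).
+ import numpy as np
+ import astropy.units as u
+ from astropy.cosmology import FlatLambdaCDM
+
+ cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
+
+ area = 400 * u.arcmin**2                      # survey area quoted in the text
+ sky_fraction = float(area.to(u.sr) / (4 * np.pi * u.sr))
+
+ # Comoving volume of an assumed z = 6.5-7.5 shell within the survey area
+ shell = cosmo.comoving_volume(7.5) - cosmo.comoving_volume(6.5)
+ volume = (shell * sky_fraction).to(u.Mpc**3)
+
+ n_smg = 1e-6 / u.Mpc**3                       # extrapolated SMG number density
+ n_expected = (n_smg * volume).to(u.dimensionless_unscaled)
+ print(volume, n_expected)                     # ~1e6 Mpc^3, i.e. ~1 object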
316
+ In the right panel of Fig. 3, we show the positions of the massive, high-
317
+ redshift galaxies recently reported by [7] in the first imaging release from the
318
+ JWST CEERS survey. It can be seen that the positions of these galaxies are
319
+ broadly consistent with the SFH of GS-9209 at z ≃ 8. It should however be
324
+ noted that, as previously discussed, GS-9209 was selected as one of only two
325
+ robustly identified z > 4 massive quiescent galaxies in an area roughly 10 times
326
+ the size of the initial CEERS imaging area [14]. It therefore seems unlikely
327
+ that a large fraction of the objects reported by [7] will evolve in a similar way
328
+ to GS-9209 over the redshift interval from z ≃ 5 − 8.
329
+ 2.4 Stellar metallicity
330
+ We obtain a relatively low stellar metallicity for GS-9209 of log10(Z∗/Z⊙) = −0.97 +0.06/−0.07
+ (where we adopt a value of Z⊙ = 0.0142 [27]). By re-running our
333
+ fitting procedure at a range of fixed metallicity values, we find that metallicity
334
+ is constrained mainly by the shape of the stellar continuum emission above
335
+ the Balmer break (the λ = 2.0 − 2.6µm region shown in the inset panel of
336
+ Fig. 1), which is strongly incompatible with models at higher metallicities.
337
+ This UV continuum shape is mostly sensitive to the Fe abundance [28, 29],
338
+ and we therefore associate our measured Z∗ value with the Fe abundance,
339
+ [Fe/H] = −0.97+0.06
340
+ −0.07. This is ≃ 0.4 dex below the mean z ≃ 3.5 stellar mass vs
341
+ iron abundance relationship for star-forming galaxies [30]. Given that GS-9209
342
+ formed its stellar population at z ≃ 7, our result suggests that the stellar mass
343
+ vs iron abundance relation continues to trend downwards over the redshift
344
+ interval from z ≃ 3.5−7, as is observed between the local Universe and z ≃ 3.5.
345
+ As can be seen from Figs 1 and 2, we do not obtain a good fit to either the
346
+ Ca k or Na d absorption features, with our model significantly under-predicting
347
+ the depths of both. Stellar populations that form and quench rapidly are known
348
+ to be α-enhanced [31], whereas the stellar population models we fit assume a
349
+ fixed scaled-Solar abundance pattern (see Section 4.3). We therefore provision-
350
+ ally attribute the failure of our model to reproduce these α-element absorption
351
+ features to significant α-enhancement in GS-9209. It should be noted however
352
+ that both of these features (in particular Na d) can also arise from interstellar
353
+ medium (ISM) absorption, though the low dust attenuation we infer from our
354
+ spectral fit might be taken to suggest this effect should be small.
355
+ Unfortunately, reliable empirical α-enhanced models are not currently
356
+ available for stellar populations with ages less than 1 Gyr. Therefore, to test
357
+ this α-enhancement hypothesis, we first measure the EWs of these two fea-
358
+ tures from our data (see Section 4), obtaining a Ca k EW of 2.15 ± 0.25˚A,
359
+ and a Na d EW of 2.09 ± 0.46˚A. For comparison, our posterior median model
360
+ predicts values of 1.12˚A and 0.41˚A respectively. We then scale up the metallic-
361
+ ity of our model, keeping all other parameters fixed, until the predicted EWs
362
+ match our data. By this process, we obtain [Ca/Fe] = 0.67 +0.25/−0.15. We are how-
364
+ ever unable to reproduce the observed depth of Na d via this process, which
365
+ we attribute to the known strong ISM component of this absorption feature
366
+ [29, 32]. The Ca abundance we calculate is however fully consistent with both
367
+ theoretical predictions [33] and observational evidence [34] for α-enhancement
368
+ in extreme stellar populations. In particular, [3] report a consistent value of
369
+ [Ca/Fe] = 0.59 ± 0.07 for an extreme massive quiescent galaxy at z = 2.1.
370
+ We therefore adopt our measured Ca abundance as our best estimate of the
375
+ α-enhancement of GS-9209, [α/Fe] = 0.67 +0.25/−0.15. This extreme α-enhancement
377
+ supports our finding of an extremely short, ≲ 200 Myr formation timescale
378
+ [31], as shown in Fig. 3. We caution however that this value could be artificially
379
+ boosted by an ISM contribution to the Ca k absorption line.
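+ For reference, absorption-feature strengths of this kind are typically quantified
+ as the integral of 1 − fλ/fcont over a feature bandpass, with the continuum
+ estimated from flanking side-bands. The sketch below illustrates this; the
+ bandpasses shown are placeholders, not the exact windows used for the
+ measurements quoted above.
+ # Minimal sketch of a rest-frame equivalent-width (EW) measurement:
+ # EW = integral of (1 - f_lambda / f_continuum) over the feature bandpass,
+ # with the continuum defined by a straight line through two side-bands.
+ import numpy as np
+
+ def equivalent_width(wave, flux, feature, blue_band, red_band):
+     """wave in rest-frame Angstroms; flux in arbitrary f_lambda units."""
+     def band_mean(band):
+         m = (wave >= band[0]) & (wave <= band[1])
+         return wave[m].mean(), flux[m].mean()
+
+     (wb, fb), (wr, fr) = band_mean(blue_band), band_mean(red_band)
+     continuum = fb + (fr - fb) / (wr - wb) * (wave - wb)   # linear pseudo-continuum
+
+     m = (wave >= feature[0]) & (wave <= feature[1])
+     return np.trapz(1.0 - flux[m] / continuum[m], wave[m])  # EW in Angstroms
+
+ # Example call with placeholder bandpasses around Ca k (3934 Angstroms):
+ # ew_cak = equivalent_width(wave_rest, flux, (3916, 3952), (3890, 3910), (3958, 3978))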
380
+ 2.5 Evidence for AGN activity
381
+ From our Bagpipes full spectral fit, we measure an observed broad Hα flux of
382
+ fHα, broad = (1.26 ± 0.08) × 10−17 erg s−1 cm−2 and full width at half maximum
383
+ (FWHM) of 10800±600 km s−1 in the rest frame. This line width, whilst very
384
+ broad, is consistent with rest-frame UV broad line widths measured for some
385
+ z = 6 quasars (e.g., [35, 36]).
386
+ We also recover an observed AGN continuum flux at rest-frame wave-
387
+ length, λrest = 5100˚A, of f5100 = (0.040 ± 0.004) × 10−19 erg s−1 cm−2 ˚A−1.
388
+ This is approximately 5 per cent of the total observed flux from GS-9209 at
389
+ λ = 2.9µm. We measure a power-law index for the AGN continuum emission
390
+ of αλ = −1.36±0.08 at λrest < 5000˚A, and αλ = 0.69±0.14 at λrest > 5000˚A.
391
+ These indices are broadly consistent with the average values observed for local
392
+ quasars [37]. In combination with the non-detection of GS-9209 at longer wave-
393
+ lengths (see Section 2), this suggests the AGN component in GS-9209 is not
394
+ significantly reddened. The AGN contribution to the continuum flux from GS-
395
+ 9209 rises to ≃ 15 per cent at the blue end of our spectrum (λ = 1.7µm),
396
+ and ≃ 20 per cent at the red end (λ = 5µm). Just above the Lyman break at
397
+ λ ≃ 7000˚A, the AGN contribution is ≃ 35 per cent of the observed flux.
398
+ Given our measured fHα, broad, which is more direct than our AGN con-
399
+ tinuum measurement, the average relation for local AGN presented by [38]
400
+ predicts f5100 to be ≃ 0.4 dex brighter than we measure. However, given the
401
+ intrinsic scatter of 0.2 dex they report, our measured f5100 is only 2σ below
402
+ the mean relation. The extreme equivalent widths of the observed Balmer
403
+ absorption features firmly disfavour stronger AGN continuum emission.
404
+ We fit the narrow Hα and [N ii] lines in our spectrum as follows. We first
405
+ subtract from our observed spectrum the posterior median Bagpipes model
406
+ from our full spectral fitting, described in Section 2.2. We then simultaneously
407
+ fit Gaussian components to both lines, assuming the same velocity width for
408
+ both, which is allowed to vary. This process is visualised in Fig. 2. We also
409
+ show the broad Hβ line in our AGN model, for which we assume the same
410
+ width as broad Hα, as well as Case B recombination. It can be seen that the
411
+ broad Hβ line peaks at around the noise level in our spectrum, and is hence
412
+ too weak to be clearly observed in our data.
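+ A minimal sketch of this type of two-Gaussian fit, using astropy's modelling
+ framework, is given below; the data arrays, initial guesses and approximate rest
+ wavelengths are placeholders, and the published fit also propagates the spectral
+ uncertainties.
+ # Minimal sketch: fit narrow H-alpha and [N II] 6584 with two Gaussians that
+ # share a single width, after subtracting the posterior-median continuum model.
+ # wave_obs (observed-frame Angstroms) and resid_flux are placeholder arrays.
+ import numpy as np
+ from astropy.modeling import models, fitting
+
+ z = 4.6582
+ ha = models.Gaussian1D(amplitude=2e-20, mean=6563.0 * (1 + z), stddev=30.0)
+ n2 = models.Gaussian1D(amplitude=2e-20, mean=6584.0 * (1 + z), stddev=30.0)
+
+ double = ha + n2
+ # Tie the [N II] width to the H-alpha width so both lines share one value
+ double.stddev_1.tied = lambda m: m.stddev_0
+
+ fitter = fitting.LevMarLSQFitter()
+ # best = fitter(double, wave_obs, resid_flux)                      # placeholder data
+ # flux_ha = np.sqrt(2 * np.pi) * best.amplitude_0 * best.stddev_0  # line flux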
413
+ We obtain a Hα narrow-line flux of 1.58 ± 0.10 × 10−18 erg s−1 cm−2
414
+ and a [N ii] flux of 1.56 ± 0.10 × 10−18 erg s−1 cm−2, giving a line ratio of
415
+ log10([N ii]/Hα) = −0.01 ± 0.04. This line ratio is significantly higher than
416
+ would be expected as a result of ongoing star formation, and is consistent
417
+ with excitation due to an AGN or shocks resulting from galactic outflows [39].
418
+ Such outflows are commonly observed in post-starburst galaxies at z ≳ 1 [40]
419
+ Fig. 4 JWST NIRCam imaging of GS-9209. Each cutout image shows an area of 1.5′′×1.5′′.
424
+ The RGB image in the first (leftmost) panel is constructed with F430M as red, F210M as
425
+ green and F182M as blue. The second panel shows the F210M image, with our posterior
426
+ median PetroFit model shown in the third panel. The residuals between model and data are
427
+ shown in the right panel, on the same colour scale as the middle two panels.
428
+ without corresponding AGN signatures, suggesting either that these outflows
429
+ are driven by stellar feedback, or that the AGN activity responsible for the
430
+ outflow has since shut down.
431
+ Even if we assume all the narrow Hα emission is driven by ongoing
432
+ star formation, we obtain SFR = 1.9 ± 0.1 M⊙ yr−1 [41], corresponding to
433
+ log10(sSFR/yr−1) = −10.3±0.1. This is under the assumption that dust atten-
434
+ uation is negligible, based on our finding of a very low AV from full spectral
435
+ fitting in Section 2.2. This is well below the commonly applied sSFR threshold
436
+ for defining quiescent galaxies at this redshift [25], log10(sSFRthreshold/yr−1) =
437
+ 0.2/tH = −9.8, where tH is the age of the Universe. Given the multiple lines
438
+ of evidence we uncover for a significant non-stellar component to this line, it
439
+ is likely that the SFR of GS-9209 is considerably lower than this estimate.
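+ This estimate follows directly from the quoted narrow-line flux. The sketch below
+ reproduces it using the Kennicutt & Evans (2012) Hα calibration and an assumed
+ flat ΛCDM cosmology (H0 = 70 km s−1 Mpc−1, Ωm = 0.3); the exact cosmological
+ parameters adopted for the published values are not restated here.
+ # Minimal sketch: SFR from the narrow H-alpha flux via the Kennicutt & Evans
+ # (2012) calibration, log10(SFR) = log10(L_Halpha) - 41.27, assuming negligible
+ # dust attenuation and an illustrative flat LCDM cosmology.
+ import numpy as np
+ import astropy.units as u
+ from astropy.cosmology import FlatLambdaCDM
+
+ cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
+ z = 4.658
+
+ f_ha_narrow = 1.58e-18 * u.erg / u.s / u.cm**2        # narrow-line flux (text)
+ d_l = cosmo.luminosity_distance(z).to(u.cm)
+ l_ha = (4 * np.pi * d_l**2 * f_ha_narrow).to(u.erg / u.s)
+
+ sfr = 10 ** (np.log10(l_ha.value) - 41.27)            # Msun per yr
+ ssfr = sfr / 4.1e10                                   # M* quoted in the text
+ print(f"SFR ~ {sfr:.1f} Msun/yr, log10(sSFR/yr) ~ {np.log10(ssfr):.1f}")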
440
+ We estimate the black-hole mass for GS-9209, M•, from our combined
441
+ Hα flux and broad-line width, using the relation presented in Equation 6
442
+ of [38], obtaining log10(M•/M⊙) = 8.7 ± 0.1. From our Bagpipes full spec-
443
+ tral fit, we infer a stellar velocity dispersion, σ = 247 ± 16 km s−1 for
444
+ GS-9209, after correcting for the intrinsic dispersion of our template set,
445
+ as well as instrumental dispersion. Given this measurement, the relationship
446
+ between velocity dispersion and black-hole mass presented by [42] predicts
447
+ log10(M•/M⊙) = 8.9 ± 0.1.
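+ For reference, the broad-line estimate can be reproduced with the short sketch
+ below, which evaluates the Greene & Ho (2005) Hα relation for the quoted broad-
+ line flux and FWHM under the same style of assumed flat ΛCDM cosmology as
+ above; the coefficients are taken from the published relation and should be
+ checked against [38].
+ # Minimal sketch: black-hole mass from the broad H-alpha line, using
+ #   M_BH ~ 2.0e6 (L_Halpha / 1e42 erg/s)^0.55 (FWHM / 1e3 km/s)^2.06 Msun
+ # (Greene & Ho 2005), with the flux and FWHM quoted in the text and an
+ # assumed flat LCDM cosmology (H0 = 70, Om0 = 0.3).
+ import numpy as np
+ import astropy.units as u
+ from astropy.cosmology import FlatLambdaCDM
+
+ cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
+ z = 4.658
+
+ f_ha_broad = 1.26e-17 * u.erg / u.s / u.cm**2
+ fwhm = 10800.0                                        # km/s, rest frame
+ d_l = cosmo.luminosity_distance(z).to(u.cm)
+ l_ha = (4 * np.pi * d_l**2 * f_ha_broad).to(u.erg / u.s).value
+
+ m_bh = 2.0e6 * (l_ha / 1e42) ** 0.55 * (fwhm / 1e3) ** 2.06
+ print(f"log10(M_BH / Msun) ~ {np.log10(m_bh):.1f}")   # ~8.7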
448
+ Given the broad agreement between these estimators, it seems reasonable
449
+ to conclude that GS-9209 contains a supermassive black hole with a mass of
450
+ approximately half a billion to a billion Solar masses. It is interesting to note
451
+ that this is ≃ 4 − 5 times the black-hole mass that would be expected given
452
+ the stellar mass of the galaxy, assuming this is equivalent to the bulge mass.
453
+ This is consistent with the observed increase in the average black-hole to bulge
454
+ mass ratio for massive galaxies from 0 < z < 2 [43]. This large amount of
455
+ historical AGN accretion relative to star formation strongly implies that AGN
456
+ feedback may be responsible for quenching this galaxy.
457
+ 2.6 Size measurement and dynamical mass
458
+ GS-9209 is an extremely compact source, which is only marginally resolved in
459
+ the highest-resolution available imaging data. The CANDELS/3DHST team
460
+ [44] measured an effective radius, re = 0.029 ± 0.002′′ for GS-9209 in the HST
469
+ F125W filter via Sérsic fitting, along with a Sérsic index, n = 6.0 ± 0.8. At
470
+ z = 4.658, this corresponds to re = 189 ± 13 parsecs.
471
+ We update this size measurement using the newly available JWST NIR-
472
+ Cam F210M-band imaging, which has a FWHM of ≃ 0.07′′ (see Section 4.4).
473
+ Accounting for the AGN point-source contribution, we measure an effective
474
+ radius, re = 0.033 ± 0.003′′ for the stellar component of GS-9209, along with
475
+ a Sérsic index, n = 2.3 ± 0.3. At z = 4.658, this corresponds to re = 215 ± 20
476
+ parsecs. This is consistent with the CANDELS/3DHST measurement, and is
477
+ ≃ 0.7 dex below the mean relationship between re and stellar mass for qui-
478
+ escent galaxies at z ≃ 1 [44, 45]. This is interesting given that post-starburst
479
+ galaxies at z ≃ 1 are known to be more compact than is typical for the wider
480
+ quiescent population [46]. We calculate a stellar-mass surface density within
481
+ re of log10(Σeff/M⊙ kpc−2) = 11.15 ± 0.08, consistent with the densest stel-
482
+ lar systems in the Universe [47]. We show the F210M data for GS-9209, along
483
+ with our posterior-median model in Fig. 4.
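+ The angular-to-physical conversion and the surface density quoted above follow
+ directly from the measured quantities; a minimal sketch, assuming an illustrative
+ flat ΛCDM cosmology, is given below.
+ # Minimal sketch: convert the angular effective radius to parsecs and compute
+ # the stellar-mass surface density within r_e, Sigma_eff = 0.5 M* / (pi r_e^2),
+ # using the values quoted in the text and an assumed flat LCDM cosmology.
+ import numpy as np
+ import astropy.units as u
+ from astropy.cosmology import FlatLambdaCDM
+
+ cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
+ z = 4.658
+
+ re_ang = 0.033 * u.arcsec
+ d_a = cosmo.angular_diameter_distance(z)
+ r_e = (re_ang.to(u.rad).value * d_a).to(u.pc)         # ~215 pc
+
+ m_star = 10 ** 10.61                                  # Msun, from the text
+ sigma_eff = 0.5 * m_star / (np.pi * r_e.to(u.kpc).value ** 2)
+ print(r_e, np.log10(sigma_eff))                       # ~11.1-11.2 dex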
484
+ We estimate the dynamical mass using our size and velocity dispersion
485
+ measurements (e.g., [40]), obtaining a value of log10(Mdyn/M⊙) = 10.3 ± 0.1.
486
+ This is ≃ 0.3 dex lower than the stellar mass we measure. As GS-9209 is only
487
+ marginally resolved, even in JWST imaging data, and due to the presence
488
+ of the AGN component, it is plausible that our measured re may be subject
489
+ to systematic uncertainties. Deeper imaging data in the F200W or F277W
490
+ bands (e.g., from the JWST Advanced Deep Extragalactic Survey; JADES)
491
+ will provide a useful check on this, particularly given the lower AGN fraction
492
+ in the F277W band. Furthermore, since the pixel scale of NIRSpec is 0.1′′,
493
+ our velocity dispersion measurement may not accurately represent the central
494
+ velocity dispersion of GS-9209, leading to an underestimated dynamical mass.
495
+ It should also be noted that the stellar mass we measure is strongly dependent
496
+ on our assumed IMF.
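+ One common convention for such an estimate, sketched below, is
+ Mdyn = β(n) σ^2 re / G with the Sérsic-dependent virial coefficient of Cappellari
+ et al. (2006); this is an assumed prescription for illustration, not necessarily
+ the exact formula used for the value quoted above.
+ # Minimal sketch of a virial dynamical-mass estimate,
+ #   M_dyn = beta(n) * sigma^2 * r_e / G,
+ # with beta(n) = 8.87 - 0.831 n + 0.0241 n^2 (an assumed convention) and the
+ # sigma, r_e and n values quoted in the text.
+ import numpy as np
+ import astropy.units as u
+ from astropy.constants import G
+
+ sigma = 247 * u.km / u.s
+ r_e = 215 * u.pc
+ n = 2.3
+
+ beta = 8.87 - 0.831 * n + 0.0241 * n**2
+ m_dyn = (beta * sigma**2 * r_e / G).to(u.Msun)
+ print(np.log10(m_dyn.value))                          # ~10.3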
497
+ A final, intriguing possibility would be a high level of rotational support in
498
+ GS-9209, as has been observed for quiescent galaxies at 2 < z < 3 [48]. Unfor-
499
+ tunately, the extremely compact nature of the source makes any attempt at
500
+ resolved studies extremely challenging, even with the JWST NIRSpec integral
501
+ field unit. Resolved kinematics for this galaxy would be a clear use case for the
502
+ High Angular Resolution Monolithic Optical and Near-infrared Integral field
503
+ spectrograph (HARMONI) planned for the Extremely Large Telescope (ELT).
504
+ 3 Conclusion
505
+ We report the spectroscopic confirmation of a massive quiescent galaxy, GS-
506
+ 9209, at a new redshift record of z = 4.6582 ± 0.0002, with a stellar mass
507
+ of log10(M∗/M⊙) = 10.61 ± 0.02. This galaxy formed its stellar popula-
508
+ tion over a ≃ 200 Myr period, approximately 600 − 800 Myr after the Big
509
+ Bang (zform = 7.3 ± 0.2), before quenching at zquench = 6.7 ± 0.3. GS-9209
510
+ demonstrates unambiguously that massive galaxy formation was already well
511
+ underway within the first billion years of cosmic history, with this object having
516
+ reached log10(M∗/M⊙) > 10.3 by z = 7. This galaxy also clearly demonstrates
517
+ that the earliest onset of galaxy quenching was no later than ≃ 800 Myr after
518
+ the Big Bang.
519
+ We estimate the iron abundance and α-enhancement of GS-9209, finding
520
+ [Fe/H] = −0.97 +0.06/−0.07 and [α/Fe] = 0.67 +0.25/−0.15, suggesting the stellar mass vs
523
+ iron abundance relation at z ≃ 7, when this object formed most of its stars,
524
+ was ≃ 0.4 dex lower than at z ≃ 3.5 [30]. Whilst its spectrum is dominated by
525
+ stellar emission, GS-9209 also hosts an AGN, for which we measure a black-hole
526
+ mass of log10(M•/M⊙) = 8.7 ± 0.1 from the observed broad and narrow Hα
527
+ emission [38]. We also predict a consistent value of log10(M•/M⊙) = 8.9 ± 0.1
528
+ based on the stellar velocity dispersion of GS-9209 [42]. Whilst large-scale star
529
+ formation in GS-9209 has been quenched for almost half a billion years, the
530
+ significant integrated quantity of AGN accretion implied by this large black-
531
+ hole mass (≃ 4 − 5 times what would be expected given the stellar mass of
532
+ this galaxy) suggests that AGN activity plausibly played a significant role in
533
+ quenching star formation in this galaxy.
534
+ Based on the properties we measure, GS-9209 seems likely to be associated
535
+ with the most extreme galaxy populations currently known at z > 5, such as
536
+ the highest-redshift submillimetre galaxies and quasars (e.g., [36, 49, 50]). GS-
537
+ 9209 is also plausibly descended from an object similar to the z ≃ 8 massive
538
+ galaxy candidates recently reported in the first data from the JWST CEERS
539
+ programme [7], though the number density of these candidates is significantly
540
+ higher than that of z > 4 quiescent galaxies. GS-9209 and similar objects (e.g.,
541
+ [9]) are also likely progenitors for the dense, ancient cores of the most massive
542
+ galaxies in the local Universe.
543
+ This study, which makes use of just 5 hours of on-source integration time,
544
+ demonstrates the huge potential of JWST for revolutionising our understand-
545
+ ing of the high-redshift Universe. It seems clear that this work will be followed
546
+ rapidly by the confirmation and detailed spectroscopic exploration of large
547
+ samples of z > 4 quiescent galaxies, to build up a detailed understanding of
548
+ massive galaxy formation and quenching during the first billion years.
549
+ 4 Methods
550
+ 4.1 Spectroscopic data reduction
551
+ We reduce our NIRSpec data using the JWST Science Calibration Pipeline
552
+ v1.8.4, using version 1017 of the JWST calibration reference data. To improve
553
+ the spectrophotometric calibration of our data, we also reduce observations
554
+ of the A-type standard star 2MASS J18083474+6927286 [51], taken as part
555
+ of JWST commissioning programme 1128 (PI: Lützgendorf) [52] using the
556
+ same instrument modes. We compare the resulting stellar spectrum against
557
+ a spectral model for this star from the CALSPEC library [53] to construct a
558
+ calibration function, which we then apply to our observations of GS-9209.
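+ A minimal sketch of this calibration step is given below: the correction is the
+ smoothed ratio of the CALSPEC model spectrum to the pipeline-reduced standard-
+ star spectrum, interpolated onto the science wavelength grid. The array names
+ and smoothing scale are illustrative placeholders.
+ # Minimal sketch: build a spectrophotometric calibration function from a
+ # standard star and apply it to the science spectrum. wave_std / flux_std
+ # (reduced standard star), wave_model / flux_model (CALSPEC model) and
+ # wave_sci / flux_sci (science target) are placeholder arrays.
+ import numpy as np
+ from scipy.ndimage import median_filter
+
+ def calibration_function(wave_std, flux_std, wave_model, flux_model, smooth=51):
+     """Smoothed ratio of the model to the observed standard-star spectrum."""
+     model_on_std = np.interp(wave_std, wave_model, flux_model)
+     ratio = model_on_std / flux_std
+     return wave_std, median_filter(ratio, size=smooth)   # suppress noise and lines
+
+ # wave_cal, cal = calibration_function(wave_std, flux_std, wave_model, flux_model)
+ # flux_sci_calibrated = flux_sci * np.interp(wave_sci, wave_cal, cal)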
559
+ 4.2 Photometric data reduction
564
+ The majority of our photometric data are taken directly from the CANDELS
565
+ GOODS South catalogue [54]. We supplement this with new JWST NIRCam
566
+ photometric data taken as part of the Ultra Deep Field Medium-Band Survey
567
+ [55] (Programme ID: 1963; PI: Williams). Data are available in the F182M,
568
+ F210M, F430M, F460M and F480M bands. We reduce these data using the
569
+ PRIMER Enhanced NIRCam Image-processing Library (PENCIL, e.g., [8]), a
570
+ custom version of the JWST Science Calibration Pipeline (v1.8.0), and using
571
+ version 1011 of the JWST calibration reference data. We measure photometric
572
+ fluxes for GS-9209 in large, 1′′-diameter apertures to ensure we measure the
573
+ total flux in each band (the object is isolated, with no other sources within
574
+ this radius, see Fig. 4). We measure uncertainties as the standard deviation of
575
+ flux values in the nearest 100 blank-sky apertures, masking out nearby objects
576
+ (e.g., [56]).
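+ A minimal sketch of this style of aperture photometry, using photutils, is shown
+ below; the image, WCS, source position and blank-sky positions are placeholders.
+ # Minimal sketch: 1 arcsec diameter aperture photometry, with the uncertainty
+ # taken as the scatter of fluxes in nearby blank-sky apertures.
+ import numpy as np
+ import astropy.units as u
+ from astropy.coordinates import SkyCoord
+ from photutils.aperture import SkyCircularAperture, aperture_photometry
+
+ def aperture_flux(image, wcs, position, radius_arcsec=0.5):
+     aper = SkyCircularAperture(position, r=radius_arcsec * u.arcsec)
+     return float(aperture_photometry(image, aper, wcs=wcs)["aperture_sum"][0])
+
+ # flux = aperture_flux(image, wcs, SkyCoord(ra, dec, unit="deg"))
+ # sky_fluxes = [aperture_flux(image, wcs, p) for p in blank_sky_positions]
+ # flux_err = np.std(sky_fluxes)   # scatter of the nearest blank-sky apertures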
577
+ 4.3 Bagpipes full spectral fitting
578
+ We fit the available photometry in parallel with our new spectroscopic data
579
+ using the Bagpipes code [57]. Our model has a total of 22 free parameters,
580
+ describing the stellar, dust, nebular and AGN components of the spectrum.
581
+ A full list of these parameters, along with their associated priors, is given in
582
+ Table 1. We fit our model to the data using the MultiNest nested sampling
583
+ algorithm [58–60].
584
+ We use the 2016 updated version of the BC03 [61, 62] stellar population
585
+ models, using the MILES stellar spectral library [63] and updated stellar evolu-
586
+ tionary tracks [64, 65]. We assume a double-power-law star-formation-history
587
+ model (e.g., [24, 57]). We allow the logarithm of the stellar metallicity, Z∗ to
588
+ vary freely from log10(Z∗/Z⊙) = −2.45 to 0.55. These are the limits of the
589
+ range spanned by the BC03 model grid relative to our adopted Solar metallicity
590
+ value (Z⊙ = 0.0142 [27]).
591
+ We mask out the narrow emission lines in our spectrum during our Bag-
592
+ pipes fitting due to likely AGN contributions, as Bagpipes is only capable
593
+ of modelling emission lines from star-forming regions. We do however still
594
+ include a nebular model in our Bagpipes fit to allow for the possibility of
595
+ nebular continuum emission from star-forming regions. We assume a stellar-
596
+ birth-cloud lifetime of 10 Myr, and vary the logarithm of the ionization
597
+ parameter, U, from log10(U) = −4 to −2. We also allow the logarithm of the
598
+ gas-phase metallicity, Zg, to vary freely from log10(Zg/Z⊙) = −2.45 to 0.55.
599
+ Because our eventual fitted model only includes an extremely small amount
600
+ of star formation within the last 10 Myr for GS-9209, this nebular component
601
+ makes a negligible contribution to the fitted model spectrum.
602
+ We model attenuation of the above components by dust using the model
603
+ of [66, 67], which is parameterised as a power-law deviation from the Calzetti
604
+ dust attenuation law [68], and also includes a Drude profile to model the 2175˚A
605
+ bump. We allow the V −band attenuation, AV to vary from 0 − 4 magnitudes.
606
+ Table 1 The 22 free parameters of the Bagpipes model we fit to our spectroscopic and photometric data (see Sections 2.2 and 4.3), along with their
+ associated prior distributions. The upper limit on τ, tobs, is the age of the Universe as a function of redshift. Logarithmic priors are all applied in
+ base ten. For parameters with Gaussian priors, the mean is µ and the standard deviation is σ.
+ Component   | Parameter                          | Symbol / Unit              | Range             | Prior       | Hyper-parameters
+ General     | Redshift                           | z                          | (4.6, 4.7)        | Gaussian    | µ = 4.66, σ = 0.01
+ General     | Stellar velocity dispersion        | σ / km s−1                 | (50, 500)         | Logarithmic |
+ SFH         | Total stellar mass formed          | M∗ / M⊙                    | (1, 10^13)        | Logarithmic |
+ SFH         | Stellar metallicity                | Z∗ / Z⊙                    | (0.00355, 3.55)   | Logarithmic |
+ SFH         | Double-power-law falling slope     | α                          | (0.01, 1000)      | Logarithmic |
+ SFH         | Double-power-law rising slope      | β                          | (0.01, 1000)      | Logarithmic |
+ SFH         | Double-power-law turnover time     | τ / Gyr                    | (0.1, tobs)       | Uniform     |
+ Dust        | V-band attenuation                 | AV / mag                   | (0, 4)            | Uniform     |
+ Dust        | Deviation from Calzetti slope      | δ                          | (−0.3, 0.3)       | Gaussian    | µ = 0, σ = 0.1
+ Dust        | Strength of 2175˚A bump            | B                          | (0, 5)            | Uniform     |
+ Dust        | Attenuation ratio for birth clouds | ϵ                          | (1, 5)            | Uniform     |
+ AGN         | Power law slope (λ < 5000˚A)       | αλ<5000˚A                  | (−2.5, −0.5)      | Gaussian    | µ = −1.5, σ = 0.1
+ AGN         | Power law slope (λ > 5000˚A)       | αλ>5000˚A                  | (−0.5, 1.5)       | Gaussian    | µ = 0.5, σ = 0.2
+ AGN         | Hα broad-line flux                 | fHα, broad / erg s−1 cm−2  | (0, 2.5 × 10^−17) | Uniform     |
+ AGN         | Hα broad-line velocity dispersion  | σHα, broad / km s−1        | (1000, 5000)      | Logarithmic |
+ AGN         | Continuum flux at λ = 5100˚A       | f5100 / erg s−1 cm−2 ˚A−1  | (0, 10^−19)       | Uniform     |
+ Nebular     | Ionization parameter               | U                          | (10^−4, 10^−2)    | Logarithmic |
+ Nebular     | Gas-phase metallicity              | Zg / Z⊙                    | (0.00355, 3.55)   | Logarithmic |
+ Calibration | Zero order                         | P0                         | (0.75, 1.25)      | Gaussian    | µ = 1, σ = 0.1
+ Calibration | First order                        | P1                         | (−0.25, 0.25)     | Gaussian    | µ = 0, σ = 0.1
+ Calibration | Second order                       | P2                         | (−0.25, 0.25)     | Gaussian    | µ = 0, σ = 0.1
+ Noise       | White noise scaling                | a                          | (0.1, 10)         | Logarithmic |
724
+ We further assume that attenuation is multiplied by an additional factor for
729
+ all stars with ages below 10 Myr, and for the resulting nebular emission. This factor
730
+ is commonly assumed to be 2, however we allow this to vary from 1 to 5.
731
+ We allow redshift to vary, using a narrow Gaussian prior with a mean of 4.66
732
+ and standard deviation of 0.01. We additionally convolve the spectral model
733
+ with a Gaussian kernel in velocity space, to account for velocity dispersion in
734
+ our target galaxy. The width of this kernel is allowed to vary with a logarithmic
735
+ prior across a range from 50 − 500 km s−1.
736
+ Separately from the above components, we also include a model for AGN
737
+ continuum, broad Hα and Hβ emission. Following [37], we model AGN contin-
738
+ uum emission with a broken power law, with two spectral indices and a break
739
+ at λrest = 5000˚A in the rest frame. We vary the spectral index at λrest < 5000˚A
740
+ using a Gaussian prior with a mean value of αλ = −1.5 (αν = −0.5) and stan-
741
+ dard deviation of 0.1. We also vary the spectral index at λrest > 5000˚A using
742
+ a Gaussian prior with a mean value of αλ = 0.5 (αν = −2.5) and standard
743
+ deviation of 0.2. We parameterise the normalisation of the AGN continuum
744
+ component using f5100, the flux at rest-frame 5100˚A, which we allow to vary
745
+ with a linear prior from 0 to 10−19 erg s−1 cm−2 ˚A−1.
746
+ We model broad Hα with a Gaussian component, varying the normalisation
747
+ from 0 to 2.5 × 10−17 erg s−1 cm−2 using a linear prior, and the velocity
748
+ dispersion from 1000 − 5000 km s−1 in the rest frame using a logarithmic
749
+ prior. We also include a broad Hβ component in the model, which has the
750
+ same parameters as the broad Hα line, but with normalisation divided by the
751
+ standard 2.86 ratio from Case B recombination theory. However, as shown in
752
+ Fig. 2, this Hβ model peaks at around the noise level in our spectrum, and
753
+ it is therefore unsurprising that the line is not clearly detected in the observed
754
+ spectrum.
755
+ We include intergalactic medium (IGM) absorption using the model of
756
+ [69]. To allow for imperfect spectrophotometric calibration of our spectroscopic
757
+ data, we also include a second-order Chebyshev polynomial (e.g., [70, 71]),
758
+ which the above components of our combined model are all divided by before
759
+ being compared with our spectroscopic data. We finally fit an additional white
760
+ noise term, which multiplies the spectroscopic uncertainties from the JWST
761
+ pipeline by a factor, a, which we vary with a logarithmic prior from 1 − 10.
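+ To make the structure of this model more concrete, the sketch below assembles a
+ simplified Bagpipes fit_instructions dictionary broadly mirroring the stellar,
+ dust, nebular, velocity-dispersion and redshift priors of Table 1. The keyword
+ names are illustrative and should be checked against the Bagpipes documentation;
+ the AGN, calibration-polynomial and noise components described above are omitted
+ from this sketch for brevity.
+ # Minimal sketch of a Bagpipes fit with a double-power-law SFH (keyword names
+ # assumed from the public Bagpipes documentation; AGN / calibration / noise
+ # components used for the published fit are not included here).
+ import bagpipes as pipes
+
+ dblplaw = {
+     "massformed": (0.0, 13.0),         # log10 of total stellar mass formed / Msun
+     "metallicity": (0.00355, 3.55),    # Z* / Zsun
+     "metallicity_prior": "log_10",
+     "alpha": (0.01, 1000.0),           # falling slope
+     "alpha_prior": "log_10",
+     "beta": (0.01, 1000.0),            # rising slope
+     "beta_prior": "log_10",
+     "tau": (0.1, 1.25),                # turnover time / Gyr (~ age of Universe)
+ }
+
+ dust = {"type": "Salim", "Av": (0.0, 4.0), "B": (0.0, 5.0), "eta": (1.0, 5.0),
+         "delta": (-0.3, 0.3), "delta_prior": "Gaussian",
+         "delta_prior_mu": 0.0, "delta_prior_sigma": 0.1}
+
+ fit_instructions = {
+     "dblplaw": dblplaw,
+     "dust": dust,
+     "nebular": {"logU": (-4.0, -2.0)},
+     "redshift": (4.6, 4.7),
+     "redshift_prior": "Gaussian",
+     "redshift_prior_mu": 4.66,
+     "redshift_prior_sigma": 0.01,
+     "veldisp": (50.0, 500.0),
+     "veldisp_prior": "log_10",
+ }
+
+ # galaxy = pipes.galaxy("GS-9209", load_data, spectrum_exists=True)
+ # fit = pipes.fit(galaxy, fit_instructions)
+ # fit.fit(verbose=True)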
762
+ 4.4 Size measurement from F210M-band imaging
763
+ We model the light distribution of GS-9209 in the JWST NIRCam F210M
764
+ imaging data using PetroFit [72]. We fit these PetroFit models to our data
765
+ using the MultiNest nested sampling algorithm [58–60]. We use F210M in
766
+ preference to the F182M band due to the smaller AGN contribution in
767
+ F210M and the fact that it sits above the Balmer break, therefore being
768
+ more representative of the stellar mass present rather than any ongoing star
769
+ formation.
770
+ As our spectroscopic data contains strong evidence for an AGN, we fit both
775
+ Sérsic and delta-function components simultaneously, convolved by an empir-
776
+ ically estimated PSF, derived by stacking bright stars. In preliminary fitting,
777
+ we find that the relative fluxes of these two components are entirely degen-
778
+ erate with the Sérsic parameters. We therefore predict the AGN contribution
779
+ to the flux in this band based on our full-spectral-fitting result, obtaining a
780
+ value of 8 ± 1 per cent. We then impose this as a Gaussian prior on the rela-
781
+ tive contributions from the Sérsic and delta function components. The 11 free
782
+ parameters of our model are the overall flux normalisation, which we fit with a
783
+ logarithmic prior, the effective radius, re, Sérsic index, n, ellipticity and posi-
+ tion angle of the Sérsic component, the x and y centroids of both components,
785
+ the position angle of the point spread function, and the fraction of light in the
786
+ delta-function component, which we fit with a Gaussian prior with a mean of
787
+ 8 per cent and standard deviation of 1 per cent, based on our full spectral
788
+ fitting result.
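+ The sketch below illustrates the corresponding forward model, a Sérsic profile
+ plus an unresolved point source, both convolved with an empirical PSF. It is a
+ simplified stand-in, built from astropy components, for the PetroFit + MultiNest
+ machinery actually used, with placeholder inputs.
+ # Minimal sketch: forward model for a Sersic component plus an unresolved
+ # (point-source) AGN component, both convolved with an empirical PSF.
+ # psf is a placeholder 2D array; x0, y0 are pixel coordinates of the centre.
+ import numpy as np
+ from astropy.modeling.models import Sersic2D
+ from astropy.convolution import convolve_fft
+
+ def model_image(shape, psf, total_flux, point_frac, r_eff, n, ellip, theta, x0, y0):
+     y, x = np.mgrid[:shape[0], :shape[1]]
+     sersic = Sersic2D(amplitude=1.0, r_eff=r_eff, n=n, x_0=x0, y_0=y0,
+                       ellip=ellip, theta=theta)(x, y)
+     sersic *= (1.0 - point_frac) * total_flux / sersic.sum()  # galaxy flux share
+
+     point = np.zeros(shape)
+     point[int(round(y0)), int(round(x0))] = point_frac * total_flux  # AGN delta
+
+     return convolve_fft(sersic, psf) + convolve_fft(point, psf)
+
+ # e.g. an ~8 per cent point-source fraction, as imposed by the Gaussian prior above:
+ # img = model_image((101, 101), psf, total_flux=1.0, point_frac=0.08,
+ #                   r_eff=2.0, n=2.3, ellip=0.2, theta=0.0, x0=50.0, y0=50.0)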
789
+ Acknowledgements
790
+ The authors would like to thank James Aird for helpful discussions. A. C.
791
+ Carnall thanks the Leverhulme Trust for their support via a Leverhulme
792
+ Early Career Fellowship. R. J. McLure, J. S. Dunlop, D. J. McLeod, V. Wild,
793
+ R. Begley, C. T. Donnan and M. L. Hamadouche acknowledge the support
794
+ of the Science and Technology Facilities Council. F. Cullen acknowledges
795
+ support from a UKRI Frontier Research Guarantee Grant (grant reference
796
+ EP/X021025/1). A. Cimatti acknowledges support from the grant PRIN
797
+ MIUR 2017 - 20173ML3WW 001.
798
+ Statement of Author Contributions
799
+ ACC led the preparation of the observing proposal, reduction and analysis of
800
+ the data, and preparation of the manuscript. RJM, JSD, VW, FC and AC
801
+ provided advice and assistance with data reduction, analysis and interpreta-
802
+ tion, as well as consulting on the preparation of the observing proposal. DJM,
803
+ DM, RB and CTD reduced the JWST imaging data and prepared the empir-
804
+ ical PSF. DJM, MLH and SMJ assisted with measurement of the size and
805
+ morphology of GS-9209. SW assisted with selection of GS-9209 from the CAN-
806
+ DELS catalogues prior to the observing proposal being submitted. All authors
807
+ assisted with preparation of the final published manuscript.
808
+ References
809
+ [1] Dunlop, J., Peacock, J., Spinrad, H., Dey, A., Jimenez, R., Stern, D.,
810
+ Windhorst, R.: A 3.5-Gyr-old galaxy at redshift 1.55. Nature 381, 581–584
811
+ (1996). https://doi.org/10.1038/381581a0
812
+ [2] Cimatti, A., Daddi, E., Renzini, A., Cassata, P., Vanzella, E., Pozzetti, L.,
817
+ Cristiani, S., Fontana, A., Rodighiero, G., Mignoli, M., Zamorani, G.: Old
818
+ galaxies in the young Universe. Nature 430, 184–187 (2004) arXiv:astro-
819
+ ph/0407131 [astro-ph]. https://doi.org/10.1038/nature02668
820
+ [3] Kriek, M., Conroy, C., van Dokkum, P.G., Shapley, A.E., Choi, J., Reddy,
821
+ N.A., Siana, B., van de Voort, F., Coil, A.L., Mobasher, B.: A massive,
822
+ quiescent, population II galaxy at a redshift of 2.1. Nature 540(7632),
823
+ 248–251 (2016) arXiv:1612.02001 [astro-ph.GA]. https://doi.org/10.1038/
824
+ nature20570
825
+ [4] Glazebrook, K., Schreiber, C., Labb´e, I., Nanayakkara, T., Kacprzak,
826
+ G.G., Oesch, P.A., Papovich, C., Spitler, L.R., Straatman, C.M.S., Tran,
827
+ K.-V.H., Yuan, T.: A massive, quiescent galaxy at a redshift of 3.717.
828
+ Nature 544(7648), 71–74 (2017) arXiv:1702.01751 [astro-ph.GA]. https:
829
+ //doi.org/10.1038/nature21680
830
+ [5] Schreiber, C., Glazebrook, K., Nanayakkara, T., Kacprzak, G.G., Labb´e,
831
+ I., Oesch, P., Yuan, T., Tran, K.-V., Papovich, C., Spitler, L., Straatman,
832
+ C.: Near infrared spectroscopy and star-formation histories of 3 < z < 4
833
+ quiescent galaxies. A&A 618, 85 (2018) arXiv:1807.02523. https://doi.
834
+ org/10.1051/0004-6361/201833070
835
+ [6] Girelli, G., Bolzonella, M., Cimatti, A.: Massive and old quiescent
836
+ galaxies at high redshift. A&A 632, 80 (2019) arXiv:1910.07544 [astro-
837
+ ph.GA].
838
+ https://doi.org/10.1051/0004-6361/201834547
840
+ [7] Labbe, I., van Dokkum, P., Nelson, E., Bezanson, R., Suess, K., Leja,
841
+ J., Brammer, G., Whitaker, K., Mathews, E., Stefanon, M.: A very early
842
+ onset of massive galaxy formation. arXiv e-prints, 2207–12446 (2022)
843
+ arXiv:2207.12446 [astro-ph.GA]
844
+ [8] Donnan, C.T., McLeod, D.J., Dunlop, J.S., McLure, R.J., Carnall, A.C.,
845
+ Begley, R., Cullen, F., Hamadouche, M.L., Bowler, R.A.A., Magee, D.,
846
+ McCracken, H.J., Milvang-Jensen, B., Moneti, A., Targett, T.: The evo-
847
+ lution of the galaxy UV luminosity function at redshifts z ≃ 8 − 15 from
848
+ deep JWST and ground-based near-infrared imaging. MNRAS 518(4),
849
+ 6011–6040 (2023) arXiv:2207.12356 [astro-ph.GA]. https://doi.org/10.
850
+ 1093/mnras/stac3472
851
+ [9] Carnall, A.C., McLeod, D.J., McLure, R.J., Dunlop, J.S., Begley, R.,
852
+ Cullen, F., Donnan, C.T., Hamadouche, M.L., Jewell, S.M., Jones, E.W.,
853
+ Pollock, C.L., Wild, V.: A first look at JWST CEERS: massive qui-
854
+ escent galaxies from 3 < z < 5. arXiv e-prints, 2208–00986 (2022)
855
+ arXiv:2208.00986 [astro-ph.GA]
856
+ [10] Valentino, F., Tanaka, M., Davidzon, I., Toft, S., G´omez-Guijarro, C.,
861
+ Stockmann, M., Onodera, M., Brammer, G., Ceverino, D., Faisst, A.L.,
862
+ Gallazzi, A., Hayward, C.C., Ilbert, O., Kubo, M., Magdis, G.E., Sels-
863
+ ing, J., Shimakawa, R., Sparre, M., Steinhardt, C., Yabe, K., Zabl, J.:
864
+ Quiescent Galaxies 1.5 Billion Years after the Big Bang and Their Pro-
865
+ genitors. ApJ 889(2), 93 (2020) arXiv:1909.10540 [astro-ph.GA]. https:
866
+ //doi.org/10.3847/1538-4357/ab64dc
867
+ [11] Caputi, K.I., Dunlop, J.S., McLure, R.J., Roche, N.D.: A deeper view of
868
+ extremely red galaxies: the redshift distribution in the GOODS/CDFS
869
+ ISAAC field. MNRAS 353(1), 30–42 (2004) arXiv:astro-ph/0401047
870
+ [astro-ph]. https://doi.org/10.1111/j.1365-2966.2004.08044.x
871
+ [12] Le F`evre, O., Tasca, L.A.M., Cassata, P., Garilli, B., Le Brun, V.,
872
+ Maccagni, D., Pentericci, L., Thomas, R., Vanzella, E., Zamorani, G.,
873
+ Zucca, E., Amorin, R., Bardelli, S., Capak, P., Cassar`a, L., Castellano,
874
+ M., Cimatti, A., Cuby, J.G., Cucciati, O., de la Torre, S., Durkalec,
875
+ A., Fontana, A., Giavalisco, M., Grazian, A., Hathi, N.P., Ilbert, O.,
876
+ Lemaux, B.C., Moreau, C., Paltani, S., Ribeiro, B., Salvato, M., Schaerer,
877
+ D., Scodeggio, M., Sommariva, V., Talia, M., Taniguchi, Y., Tresse, L.,
878
+ Vergani, D., Wang, P.W., Charlot, S., Contini, T., Fotopoulou, S., L´opez-
879
+ Sanjuan, C., Mellier, Y., Scoville, N.: The VIMOS Ultra-Deep Survey: ˜10
880
+ 000 galaxies with spectroscopic redshifts to study galaxy assembly at early
881
+ epochs 2 < z ≃ 6. A&A 576, 79 (2015) arXiv:1403.3938 [astro-ph.CO].
882
+ https://doi.org/10.1051/0004-6361/201423829
883
+ [13] Merlin, E., Fontana, A., Castellano, M., Santini, P., Torelli, M., Bout-
884
+ sia, K., Wang, T., Grazian, A., Pentericci, L., Schreiber, C., Ciesla,
885
+ L., McLure, R., Derriere, S., Dunlop, J.S., Elbaz, D.: Chasing pas-
886
+ sive galaxies in the early Universe: a critical analysis in CANDELS
887
+ GOODS-South. MNRAS 473(2), 2098–2123 (2018) arXiv:1709.00429
888
+ [astro-ph.GA]. https://doi.org/10.1093/mnras/stx2385
889
+ [14] Carnall, A.C., Walker, S., McLure, R.J., Dunlop, J.S., McLeod, D.J.,
890
+ Cullen, F., Wild, V., Amorin, R., Bolzonella, M., Castellano, M., Cimatti,
891
+ A., Cucciati, O., Fontana, A., Gargiulo, A., Garilli, B., Jarvis, M.J.,
892
+ Pentericci, L., Pozzetti, L., Zamorani, G., Calabro, A., Hathi, N.P., Koeke-
893
+ moer, A.M.: Timing the earliest quenching events with a robust sample
894
+ of massive quiescent galaxies at 2 < z < 5. MNRAS 496(1), 695–707
895
+ (2020) arXiv:2001.11975 [astro-ph.GA]. https://doi.org/10.1093/mnras/
896
+ staa1535
897
+ [15] Santini, P., Merlin, E., Fontana, A., Magnelli, B., Paris, D., Castellano,
898
+ M., Grazian, A., Pentericci, L., Pilo, S., Torelli, M.: Passive galaxies
899
+ in the early Universe: ALMA confirmation of z ∼ 3 − 5 candidates in
900
+ the CANDELS GOODS-South field. MNRAS 486(1), 560–569 (2019)
901
+ arXiv:1902.09548 [astro-ph.GA]. https://doi.org/10.1093/mnras/stz801
906
+ [16] Luo, B., Brandt, W.N., Xue, Y.Q., Lehmer, B., Alexander, D.M., Bauer,
907
+ F.E., Vito, F., Yang, G., Basu-Zych, A.R., Comastri, A., Gilli, R., Gu, Q.-
908
+ S., Hornschemeier, A.E., Koekemoer, A., Liu, T., Mainieri, V., Paolillo,
909
+ M., Ranalli, P., Rosati, P., Schneider, D.P., Shemmer, O., Smail, I., Sun,
910
+ M., Tozzi, P., Vignali, C., Wang, J.-X.: The Chandra Deep Field-South
911
+ Survey: 7 Ms Source Catalogs. The Astrophysical Journal Supplement
912
+ Series 228, 2 (2017) arXiv:1611.03501 [astro-ph.GA]. https://doi.org/10.
913
+ 3847/1538-4365/228/1/2
914
+ [17] Bonzini, M., Padovani, P., Mainieri, V., Kellermann, K.I., Miller, N.,
915
+ Rosati, P., Tozzi, P., Vattakunnel, S.: The sub-mJy radio sky in
916
+ the Extended Chandra Deep Field-South: source population. MNRAS
917
+ 436, 3759–3771 (2013) arXiv:1310.1248. https://doi.org/10.1093/mnras/
918
+ stt1879
919
+ [18] Dunlop, J., Akiyama, M., Alexander, D., Almaini, O., Borys, C., Bouwens,
920
+ R., Bremer, M., Cimatti, A., Cirasuolo, M., Clewley, L., Conselice, C.,
921
+ Coppin, K., Dalton, G., Damen, M., Dunne, L., Dye, S., Eales, S., Edge,
922
+ A., Egami, E., Fall, M., Farrah, D., Ferguson, H., Finoguenov, A., Fou-
923
+ caud, S., Franx, M., Furusawa, H., Huang, J., Ibar, E., Illingworth, G.,
924
+ Ivison, R., Jarvis, M., Labbe, I., Lawrence, A., Maddox, S., McLure, R.,
925
+ Mortier, A., Oliver, S., Ouchi, M., Page, M., Papovich, C., Quadri, R.,
926
+ Rawlings, S., Rieke, G., Schiminovich, D., Sekiguchi, K., Serjeant, S.,
927
+ Simpson, C., Smail, I., Stanway, E., Taylor, A., Watson, M., Williams, R.,
928
+ Yamada, T., van Breukelen, C., van Dokkum, P.: A Spitzer Public Legacy
929
+ survey of the UKIDSS Ultra Deep Survey. Spitzer Proposal (2007)
930
+ [19] Goto, T.: A catalogue of local E+A (post-starburst) galaxies selected from
931
+ the Sloan Digital Sky Survey Data Release 5. MNRAS 381(1), 187–193
932
+ (2007) arXiv:0801.1106 [astro-ph]. https://doi.org/10.1111/j.1365-2966.
933
+ 2007.12227.x
934
+ [20] Wild, V., Walcher, C.J., Johansson, P.H., Tresse, L., Charlot, S., Pollo,
935
+ A., Le F`evre, O., de Ravel, L.: Post-starburst galaxies: more than just
936
+ an interesting curiosity. MNRAS 395, 144–159 (2009) arXiv:0810.5122
937
+ [astro-ph]. https://doi.org/10.1111/j.1365-2966.2009.14537.x
938
+ [21] Wild, V., Taj Aldeen, L., Carnall, A., Maltby, D., Almaini, O., Werle,
939
+ A., Wilkinson, A., Rowlands, K., Bolzonella, M., Castellano, M., Gar-
940
+ guilo, A., McLure, R., Pentericci, L., Pozzetti, L.: The star formation
941
+ histories of z˜1 post-starburst galaxies. arXiv e-prints, 2001–09154 (2020)
942
+ arXiv:2001.09154 [astro-ph.GA]
943
+ [22] Kauffmann, G., Heckman, T.M., White, S.D.M., Charlot, S., Tremonti,
944
+ C., Brinchmann, J., Bruzual, G., Peng, E.W., Seibert, M., Bernardi, M.,
945
+ Blanton, M., Brinkmann, J., Castander, F., Cs´abai, I., Fukugita, M.,
950
+ Ivezic, Z., Munn, J.A., Nichol, R.C., Padmanabhan, N., Thakar, A.R.,
951
+ Weinberg, D.H., York, D.: Stellar masses and star formation histories
952
+ for 105 galaxies from the Sloan Digital Sky Survey. MNRAS 341, 33–
953
+ 53 (2003) arXiv:astro-ph/0204055 [astro-ph]. https://doi.org/10.1046/j.
954
+ 1365-8711.2003.06291.x
955
+ [23] Kroupa, P.: On the variation of the initial mass function. MNRAS 322(2),
956
+ 231–246 (2001) arXiv:astro-ph/0009005 [astro-ph]. https://doi.org/10.
957
+ 1046/j.1365-8711.2001.04022.x
958
+ [24] Carnall, A.C., Leja, J., Johnson, B.D., McLure, R.J., Dunlop, J.S., Con-
959
+ roy, C.: How to Measure Galaxy Star Formation Histories. I. Parametric
960
+ Models. ApJ 873, 44 (2019) arXiv:1811.03635 [astro-ph.GA]. https://doi.
961
+ org/10.3847/1538-4357/ab04a2
962
+ [25] Pacifici, C., Kassin, S.A., Weiner, B.J., Holden, B., Gardner, J.P., Faber,
963
+ S.M., Ferguson, H.C., Koo, D.C., Primack, J.R., Bell, E.F., Dekel, A.,
964
+ Gawiser, E., Giavalisco, M., Rafelski, M., Simons, R.C., Barro, G., Croton,
965
+ D.J., Dav´e, R., Fontana, A., Grogin, N.A., Koekemoer, A.M., Lee, S.-K.,
966
+ Salmon, B., Somerville, R., Behroozi, P.: The Evolution of Star Formation
967
+ Histories of Quiescent Galaxies. ApJ 832, 79 (2016) arXiv:1609.03572.
968
+ https://doi.org/10.3847/0004-637X/832/1/79
969
+ [26] Michałowski, M.J., Dunlop, J.S., Koprowski, M.P., Cirasuolo, M., Geach, J.E.,
+ Bowler, R.A.A., Mortlock, A., Caputi, K.I., Aretxaga, I., Arumugam, V.,
+ Chen, C.-C., McLure, R.J., Birkinshaw, M.,
980
+ Bourne, N., Farrah, D., Ibar, E., van der Werf, P., Zemcov, M.: The
981
+ SCUBA-2 Cosmology Legacy Survey: the nature of bright submm
982
+ galaxies from 2 deg2 of 850-µm imaging. MNRAS 469(1), 492–515
983
+ (2017) arXiv:1610.02409 [astro-ph.GA]. https://doi.org/10.1093/mnras/
984
+ stx861
985
+ [27] Asplund, M., Grevesse, N., Sauval, A.J., Scott, P.: The Chemical Com-
986
+ position of the Sun. ARA&A 47(1), 481–522 (2009) arXiv:0909.0948
987
+ [astro-ph.SR]. https://doi.org/10.1146/annurev.astro.46.060407.145222
988
+ [28] Leitherer, C., Ortiz Ot´alvaro, P.A., Bresolin, F., Kudritzki, R.-P., Lo Faro,
989
+ B., Pauldrach, A.W.A., Pettini, M., Rix, S.A.: A Library of Theoreti-
990
+ cal Ultraviolet Spectra of Massive, Hot Stars for Evolutionary Synthesis.
991
+ ApJS 189(2), 309–335 (2010) arXiv:1006.5624 [astro-ph.SR]. https://doi.
992
+ org/10.1088/0067-0049/189/2/309
993
+ [29] Conroy, C., Graves, G.J., van Dokkum, P.G.: Early-type Galaxy Arche-
994
+ ology: Ages, Abundance Ratios, and Effective Temperatures from Full-
995
+ spectrum Fitting. ApJ 780(1), 33 (2014) arXiv:1303.6629 [astro-ph.CO].
996
+ https://doi.org/10.1088/0004-637X/780/1/33
997
+ [30] Cullen, F., McLure, R.J., Dunlop, J.S., Khochfar, S., Dav´e, R., Amor´ın,
1002
+ R., Bolzonella, M., Carnall, A.C., Castellano, M., Cimatti, A.: The VAN-
1003
+ DELS survey: the stellar metallicities of star-forming galaxies at 2.5
1004
+ < z < 5.0. MNRAS, 1344 (2019) arXiv:1903.11081 [astro-ph.GA].
1005
+ https://doi.org/10.1093/mnras/stz1402
1006
+ [31] Thomas, D., Maraston, C., Bender, R., Mendes de Oliveira, C.: The
1007
+ Epochs of Early-Type Galaxy Formation as a Function of Environment.
1008
+ ApJ 621, 673–694 (2005) astro-ph/0410209. https://doi.org/10.1086/
1009
+ 426932
1010
+ [32] Carnall, A.C., McLure, R.J., Dunlop, J.S., Hamadouche, M., Cullen, F.,
1011
+ McLeod, D.J., Begley, R., Amorin, R., Bolzonella, M., Castellano, M.,
1012
+ Cimatti, A., Fontanot, F., Gargiulo, A., Garilli, B., Mannucci, F., Pen-
1013
+ tericci, L., Talia, M., Zamorani, G., Calabro, A., Cresci, G., Hathi, N.P.:
1014
+ The Stellar Metallicities of Massive Quiescent Galaxies at 1.0 < z < 1.3
1015
+ from KMOS + VANDELS. ApJ 929(2), 131 (2022) arXiv:2108.13430
1016
+ [astro-ph.GA]. https://doi.org/10.3847/1538-4357/ac5b62
1017
+ [33] Kobayashi, C., Karakas, A.I., Lugaro, M.: The Origin of Elements from
1018
+ Carbon to Uranium. ApJ 900(2), 179 (2020) arXiv:2008.04660 [astro-
1019
+ ph.GA]. https://doi.org/10.3847/1538-4357/abae65
1020
+ [34] Cullen, F., Shapley, A.E., McLure, R.J., Dunlop, J.S., Sanders, R.L.,
1021
+ Topping, M.W., Reddy, N.A., Amor´ın, R., Begley, R., Bolzonella, M.,
1022
+ Calabr`o, A., Carnall, A.C., Castellano, M., Cimatti, A., Cirasuolo, M.,
1023
+ Cresci, G., Fontana, A., Fontanot, F., Garilli, B., Guaita, L., Hamadouche,
1024
+ M., Hathi, N.P., Mannucci, F., McLeod, D.J., Pentericci, L., Saxena, A.,
1025
+ Talia, M., Zamorani, G.: The NIRVANDELS Survey: a robust detection
1026
+ of α-enhancement in star-forming galaxies at z ≃ 3.4. MNRAS 505(1),
1027
+ 903–920 (2021) arXiv:2103.06300 [astro-ph.GA]. https://doi.org/10.1093/
1028
+ mnras/stab1340
1029
+ [35] Chehade, B., Carnall, A.C., Shanks, T., Diener, C., Fumagalli, M.,
1030
+ Findlay, J.R., Metcalfe, N., Hennawi, J., Leibler, C., Murphy, D.N.A.,
1031
+ Prochaska, J.X., Irwin, M.J., Gonzalez-Solares, E.: Two more, bright, z > 6
1032
+ quasars from VST ATLAS and WISE. MNRAS 478(2), 1649–1659 (2018)
1033
+ arXiv:1803.01424 [astro-ph.GA]. https://doi.org/10.1093/mnras/sty690
1034
+ [36] Onoue, M., Kashikawa, N., Matsuoka, Y., Kato, N., Izumi, T., Nagao,
1035
+ T., Strauss, M.A., Harikane, Y., Imanishi, M., Ito, K., Iwasawa, K.,
1036
+ Kawaguchi, T., Lee, C.-H., Noboriguchi, A., Suh, H., Tanaka, M., Toba,
1037
+ Y.: Subaru High-z Exploration of Low-luminosity Quasars (SHELLQs).
1038
+ VI. Black Hole Mass Measurements of Six Quasars at 6.1 ≤ z ≤ 6.7.
1039
+ ApJ 880(2), 77 (2019) arXiv:1904.07278 [astro-ph.GA]. https://doi.org/
1040
+ 10.3847/1538-4357/ab29e9
1041
+ [37] Vanden Berk, D.E., Richards, G.T., Bauer, A., Strauss, M.A., Schnei-
1046
+ der, D.P., Heckman, T.M., York, D.G., Hall, P.B., Fan, X., Knapp, G.R.,
1047
+ Anderson, S.F., Annis, J., Bahcall, N.A., Bernardi, M., Briggs, J.W.,
1048
+ Brinkmann, J., Brunner, R., Burles, S., Carey, L., Castander, F.J., Con-
1049
+ nolly, A.J., Crocker, J.H., Csabai, I., Doi, M., Finkbeiner, D., Friedman,
1050
+ S., Frieman, J.A., Fukugita, M., Gunn, J.E., Hennessy, G.S., Ivezi´c, ˇZ.,
1051
+ Kent, S., Kunszt, P.Z., Lamb, D.Q., Leger, R.F., Long, D.C., Loveday,
1052
+ J., Lupton, R.H., Meiksin, A., Merelli, A., Munn, J.A., Newberg, H.J.,
1053
+ Newcomb, M., Nichol, R.C., Owen, R., Pier, J.R., Pope, A., Rockosi,
1054
+ C.M., Schlegel, D.J., Siegmund, W.A., Smee, S., Snir, Y., Stoughton,
1055
+ C., Stubbs, C., SubbaRao, M., Szalay, A.S., Szokoly, G.P., Tremonti,
1056
+ C., Uomoto, A., Waddell, P., Yanny, B., Zheng, W.: Composite Quasar
1057
+ Spectra from the Sloan Digital Sky Survey. AJ 122(2), 549–564 (2001)
1058
+ arXiv:astro-ph/0105231 [astro-ph]. https://doi.org/10.1086/321167
1059
+ [38] Greene, J.E., Ho, L.C.: Estimating Black Hole Masses in Active Galaxies
1060
+ Using the Hα Emission Line. ApJ 630(1), 122–129 (2005) arXiv:astro-
1061
+ ph/0508335 [astro-ph]. https://doi.org/10.1086/431897
1062
+ [39] Kewley, L.J., Dopita, M.A., Leitherer, C., Dav´e, R., Yuan, T., Allen,
1063
+ M., Groves, B., Sutherland, R.: Theoretical Evolution of Optical Strong
1064
+ Lines across Cosmic Time. ApJ 774(2), 100 (2013) arXiv:1307.0508
1065
+ [astro-ph.CO]. https://doi.org/10.1088/0004-637X/774/2/100
1066
+ [40] Maltby, D.T., Almaini, O., McLure, R.J., Wild, V., Dunlop, J., Rowlands, K.,
1067
+ Hartley, W.G., Hatch, N.A., Socolovsky, M., Wilkinson, A., Amorin,
1068
+ R., Bradshaw, E.J., Carnall, A.C., Castellano, M., Cimatti, A., Cresci,
1069
+ G., Cullen, F., De Barros, S., Fontanot, F., Garilli, B., Koekemoer,
1070
+ A.M., McLeod, D.J., Pentericci, L., Talia, M.: High-velocity outflows
1071
+ in massive post-starburst galaxies at z > 1. MNRAS, 2140 (2019)
1072
+ arXiv:1908.02766 [astro-ph.GA]. https://doi.org/10.1093/mnras/stz2211
1073
+ [41] Kennicutt, R.C., Evans, N.J.: Star Formation in the Milky Way and
1074
+ Nearby Galaxies. ARA&A 50, 531–608 (2012) arXiv:1204.3552. https:
1075
+ //doi.org/10.1146/annurev-astro-081811-125610
1076
+ [42] Kormendy, J., Ho, L.C.: Coevolution (Or Not) of Supermassive Black
1077
+ Holes and Host Galaxies. ARA&A 51(1), 511–653 (2013) arXiv:1304.7762
1078
+ [astro-ph.CO]. https://doi.org/10.1146/annurev-astro-082708-101811
1079
+ [43] McLure, R.J., Jarvis, M.J., Targett, T.A., Dunlop, J.S., Best, P.N.: On
1080
+ the evolution of the black hole: spheroid mass ratio. MNRAS 368(3),
1081
+ 1395–1403 (2006) arXiv:astro-ph/0510121 [astro-ph]. https://doi.org/10.
1082
+ 1111/j.1365-2966.2006.10228.x
1083
+ [44] van der Wel, A., Franx, M., van Dokkum, P.G., Skelton, R.E., Momcheva,
1084
+ I.G., Whitaker, K.E., Brammer, G.B., Bell, E.F., Rix, H.-W., Wuyts,
1085
+ S., Ferguson, H.C., Holden, B.P., Barro, G., Koekemoer, A.M., Chang,
1090
+ Y.-Y., McGrath, E.J., H¨aussler, B., Dekel, A., Behroozi, P., Fumagalli,
1091
+ M., Leja, J., Lundgren, B.F., Maseda, M.V., Nelson, E.J., Wake, D.A.,
1092
+ Patel, S.G., Labb´e, I., Faber, S.M., Grogin, N.A., Kocevski, D.D.: 3D-
1093
+ HST+CANDELS: The Evolution of the Galaxy Size-Mass Distribution
1094
+ since z = 3. ApJ 788, 28 (2014) arXiv:1404.2844. https://doi.org/10.
1095
+ 1088/0004-637X/788/1/28
1096
+ [45] Hamadouche, M.L., Carnall, A.C., McLure, R.J., Dunlop, J.S., McLeod, D.J., Cullen, F., Begley, R., Bolzonella, M., Buitrago, F., Castellano, M., Cucciati, O., Fontana, A., Gargiulo, A., Moresco, M., Pozzetti, L., Zamorani, G.: A combined VANDELS and LEGA-C study: the evolution of quiescent galaxy size, stellar mass, and age from z = 0.6 to z = 1.3. MNRAS 512(1), 1262–1274 (2022) arXiv:2201.10576 [astro-ph.GA]. https://doi.org/10.1093/mnras/stac535
+ [46] Almaini, O., Wild, V., Maltby, D.T., Hartley, W.G., Simpson, C., Hatch, N.A., McLure, R.J., Dunlop, J.S., Rowlands, K.: Massive post-starburst galaxies at z > 1 are compact proto-spheroids. MNRAS 472, 1401–1412 (2017) arXiv:1708.00005. https://doi.org/10.1093/mnras/stx1957
+ [47] Hopkins, P.F., Murray, N., Quataert, E., Thompson, T.A.: A maximum stellar surface density in dense stellar systems. MNRAS 401(1), 19–23 (2010) arXiv:0908.4088 [astro-ph.CO]. https://doi.org/10.1111/j.1745-3933.2009.00777.x
+ [48] Newman, A.B., Belli, S., Ellis, R.S., Patel, S.G.: Resolving Quiescent Galaxies at z ≳ 2. II. Direct Measures of Rotational Support. ApJ 862(2), 126 (2018) arXiv:1806.06815 [astro-ph.GA]. https://doi.org/10.3847/1538-4357/aacd4f
+ [49] Decarli, R., Walter, F., Venemans, B.P., Bañados, E., Bertoldi, F., Carilli, C., Fan, X., Farina, E.P., Mazzucchelli, C., Riechers, D., Rix, H.-W., Strauss, M.A., Wang, R., Yang, Y.: Rapidly star-forming galaxies adjacent to quasars at redshifts exceeding 6. Nature 545(7655), 457–461 (2017) arXiv:1705.08662 [astro-ph.GA]. https://doi.org/10.1038/nature22358
+ [50] Riechers, D.A., Nayyeri, H., Burgarella, D., Emonts, B.H.C., Clements, D.L., Cooray, A., Ivison, R.J., Oliver, S., Pérez-Fournon, I., Rigopoulou, D., Scott, D.: Rise of the Titans: Gas Excitation and Feedback in a Binary Hyperluminous Dusty Starburst Galaxy at z ∼ 6. ApJ 907(2), 62 (2021) arXiv:2010.15183 [astro-ph.GA]. https://doi.org/10.3847/1538-4357/abcf2e
+ [51] Gordon, K.D., Bohlin, R., Sloan, G.C., Rieke, G., Volk, K., Boyer, M., Muzerolle, J., Schlawin, E., Deustua, S.E., Hines, D.C., Kraemer, K.E., Mullally, S.E., Su, K.Y.L.: The James Webb Space Telescope Absolute Flux Calibration. I. Program Design and Calibrator Stars. AJ 163(6), 267 (2022) arXiv:2204.06500 [astro-ph.IM]. https://doi.org/10.3847/1538-3881/ac66dc
+ [52] Lützgendorf, N., Giardino, G., Alves de Oliveira, C., Zeidler, P., Ferruit, P., Jakobsen, P., Kumari, N., Rawle, T., Birkmann, S.M., Böker, T., Proffitt, C., Sirianni, M., Te Plate, M., Sohn, S.T.: Astrometric and wavelength calibration of the NIRSpec instrument during commissioning using a model-based approach. In: Coyle, L.E., Matsuura, S., Perrin, M.D. (eds.) Space Telescopes and Instrumentation 2022: Optical, Infrared, and Millimeter Wave. Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol. 12180, p. 121800 (2022). https://doi.org/10.1117/12.2630069
+ [53] Bohlin, R.C., Gordon, K.D., Tremblay, P.-E.: Techniques and Review of Absolute Flux Calibration from the Ultraviolet to the Mid-Infrared. PASP 126(942), 711 (2014) arXiv:1406.1707 [astro-ph.IM]. https://doi.org/10.1086/677655
+ [54] Guo, Y., Ferguson, H.C., Giavalisco, M., Barro, G., Willner, S.P., Ashby, M.L.N., Dahlen, T., Donley, J.L., Faber, S.M., Fontana, A., Galametz, A., Grazian, A., Huang, K.-H., Kocevski, D.D., Koekemoer, A.M., Koo, D.C., McGrath, E.J., Peth, M., Salvato, M., Wuyts, S., Castellano, M., Cooray, A.R., Dickinson, M.E., Dunlop, J.S., Fazio, G.G., Gardner, J.P., Gawiser, E., Grogin, N.A., Hathi, N.P., Hsu, L.-T., Lee, K.-S., Lucas, R.A., Mobasher, B., Nandra, K., Newman, J.A., van der Wel, A.: CANDELS Multi-wavelength Catalogs: Source Detection and Photometry in the GOODS-South Field. ApJS 207(2), 24 (2013) arXiv:1308.4405 [astro-ph.CO]. https://doi.org/10.1088/0067-0049/207/2/24
+ [55] Williams, C.C., Tacchella, S., Maseda, M.V., Robertson, B.E., Johnson, B.D., Willott, C.J., Eisenstein, D.J., Willmer, C.N.A., Ji, Z., Hainline, K.N., Helton, J.M., Alberts, S., Baum, S., Bhatawdekar, R., Boyett, K., Bunker, A.J., Carniani, S., Charlot, S., Chevallard, J., Curtis-Lake, E., de Graaff, A., Egami, E., Franx, M., Kumari, N., Maiolino, R., Nelson, E.J., Rieke, M.J., Sandles, L., Shivaei, I., Simmonds, C., Smit, R., Suess, K.A., Sun, F., Ubler, H., Witstok, J.: JEMS: A deep medium-band imaging survey in the Hubble Ultra-Deep Field with JWST NIRCam & NIRISS. arXiv e-prints, arXiv:2301.09780 (2023) arXiv:2301.09780 [astro-ph.GA]
+ [56] McLeod, D.J., McLure, R.J., Dunlop, J.S.: The z = 9-10 galaxy population in the Hubble Frontier Fields and CLASH surveys: the z = 9 luminosity function and further evidence for a smooth decline in ultraviolet luminosity density at z ≥ 8. MNRAS 459(4), 3812–3824 (2016) arXiv:1602.05199 [astro-ph.GA]. https://doi.org/10.1093/mnras/stw904
+ [57] Carnall, A.C., McLure, R.J., Dunlop, J.S., Davé, R.: Inferring the star formation histories of massive quiescent galaxies with BAGPIPES: evidence for multiple quenching mechanisms. MNRAS 480, 4379–4401 (2018) arXiv:1712.04452. https://doi.org/10.1093/mnras/sty2169
+ [58] Skilling, J.: Nested sampling for general Bayesian computation. Bayesian Anal. 1(4), 833–859 (2006). https://doi.org/10.1214/06-BA127
+ [59] Buchner, J., Georgakakis, A., Nandra, K., Hsu, L., Rangel, C., Brightman, M., Merloni, A., Salvato, M., Donley, J., Kocevski, D.: X-ray spectral modelling of the AGN obscuring region in the CDFS: Bayesian model selection and catalogue. A&A 564, 125 (2014) arXiv:1402.0004 [astro-ph.HE]. https://doi.org/10.1051/0004-6361/201322971
+ [60] Feroz, F., Hobson, M.P., Cameron, E., Pettitt, A.N.: Importance Nested Sampling and the MultiNest Algorithm. The Open Journal of Astrophysics 2(1), 10 (2019) arXiv:1306.2144 [astro-ph.IM]. https://doi.org/10.21105/astro.1306.2144
+ [61] Bruzual, G., Charlot, S.: Stellar population synthesis at the resolution of 2003. MNRAS 344, 1000–1028 (2003) astro-ph/0309134. https://doi.org/10.1046/j.1365-8711.2003.06897.x
+ [62] Chevallard, J., Charlot, S.: Modelling and interpreting spectral energy distributions of galaxies with BEAGLE. MNRAS 462, 1415–1443 (2016) arXiv:1603.03037. https://doi.org/10.1093/mnras/stw1756
+ [63] Sánchez-Blázquez, P., Peletier, R.F., Jiménez-Vicente, J., Cardiel, N., Cenarro, A.J., Falcón-Barroso, J., Gorgas, J., Selam, S., Vazdekis, A.: Medium-resolution Isaac Newton Telescope library of empirical spectra. MNRAS 371(2), 703–718 (2006) arXiv:astro-ph/0607009 [astro-ph]. https://doi.org/10.1111/j.1365-2966.2006.10699.x
+ [64] Bressan, A., Marigo, P., Girardi, L., Salasnich, B., Dal Cero, C., Rubele, S., Nanni, A.: PARSEC: stellar tracks and isochrones with the PAdova and TRieste Stellar Evolution Code. MNRAS 427(1), 127–145 (2012) arXiv:1208.4498 [astro-ph.SR]. https://doi.org/10.1111/j.1365-2966.2012.21948.x
+ [65] Marigo, P., Bressan, A., Nanni, A., Girardi, L., Pumo, M.L.: Evolution of thermally pulsing asymptotic giant branch stars - I. The COLIBRI code. MNRAS 434(1), 488–526 (2013) arXiv:1305.4485 [astro-ph.SR]. https://doi.org/10.1093/mnras/stt1034
+ [66] Noll, S., Pierini, D., Cimatti, A., Daddi, E., Kurk, J.D., Bolzonella, M., Cassata, P., Halliday, C., Mignoli, M., Pozzetti, L., Renzini, A., Berta, S., Dickinson, M., Franceschini, A., Rodighiero, G., Rosati, P., Zamorani, G.: GMASS ultradeep spectroscopy of galaxies at z ∼ 2. IV. The variety of dust populations. A&A 499(1), 69–85 (2009) arXiv:0903.3972 [astro-ph.CO]. https://doi.org/10.1051/0004-6361/200811526
+ [67] Salim, S., Boquien, M., Lee, J.C.: Dust Attenuation Curves in the Local Universe: Demographics and New Laws for Star-forming Galaxies and High-redshift Analogs. ApJ 859(1), 11 (2018) arXiv:1804.05850 [astro-ph.GA]. https://doi.org/10.3847/1538-4357/aabf3c
+ [68] Calzetti, D., Armus, L., Bohlin, R.C., Kinney, A.L., Koornneef, J., Storchi-Bergmann, T.: The Dust Content and Opacity of Actively Star-forming Galaxies. ApJ 533, 682–695 (2000) astro-ph/9911459. https://doi.org/10.1086/308692
+ [69] Inoue, A.K., Shimizu, I., Iwata, I., Tanaka, M.: An updated analytic model for attenuation by the intergalactic medium. MNRAS 442(2), 1805–1820 (2014) arXiv:1402.0677 [astro-ph.CO]. https://doi.org/10.1093/mnras/stu936
+ [70] Carnall, A.C., McLure, R.J., Dunlop, J.S., Cullen, F., McLeod, D.J., Wild, V., Johnson, B.D., Appleby, S., Davé, R., Amorin, R., Bolzonella, M., Castellano, M., Cimatti, A., Cucciati, O., Gargiulo, A., Garilli, B., Marchi, F., Pentericci, L., Pozzetti, L., Schreiber, C., Talia, M., Zamorani, G.: The VANDELS survey: the star-formation histories of massive quiescent galaxies at 1.0 < z < 1.3. MNRAS 490(1), 417–439 (2019) arXiv:1903.11082 [astro-ph.GA]. https://doi.org/10.1093/mnras/stz2544
+ [71] Johnson, B.D., Leja, J., Conroy, C., Speagle, J.S.: Stellar Population Inference with Prospector. ApJS 254(2), 22 (2021) arXiv:2012.01426 [astro-ph.GA]. https://doi.org/10.3847/1538-4365/abef67
+ [72] Geda, R., Crawford, S.M., Hunt, L., Bershady, M., Tollerud, E., Randriamampandry, S.: PetroFit: A Python package for computing Petrosian radii and fitting galaxy light profiles. AJ 163(5), 202 (2022). https://doi.org/10.3847/1538-3881/ac5908
AtFIT4oBgHgl3EQf_CxR/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
B9E2T4oBgHgl3EQf8wmh/content/tmp_files/2301.04222v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
B9E2T4oBgHgl3EQf8wmh/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff