Dataset Viewer
Auto-converted to Parquet

| Column | Type | Stats |
| --- | --- | --- |
| image | image | width 2.48k–2.55k px |
| pdf_name | string | 10 classes |
| page_number | int64 | 0–37 |
| markdown | string | lengths 0–8.85k |
| html | string | lengths 0–8.9k |
| layout | string | lengths 104–9.98k |
| lines | string | lengths 2–25k |
| images | string | lengths 2–599 |
| equations | string | lengths 2–5.44k |
| tables | string | 1 class |
| page_size | string | 2 classes |
| content_list | string | lengths 46–16.1k |
| base_layout_detection | string | lengths 967–30.8k |
| pdf_info | string | lengths 3.15k–71.6k |
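A minimal sketch of how one might load and decode a row of this dataset with the Hugging Face `datasets` library. The repository id and split name are placeholders, since the viewer page does not state them; the column names follow the schema above, and the structured columns are stored as JSON strings.

```python
import json
from datasets import load_dataset  # pip install datasets

# Hypothetical repo id and split -- replace with the actual <org>/<name> on the Hub.
ds = load_dataset("org-name/pdf-extraction-pages", split="train")

row = ds[0]  # e.g. the row previewed below: pdf_name "2302.06555", page_number 0
print(row["pdf_name"], row["page_number"])

# Structured columns are JSON strings; decode them on access.
layout = json.loads(row["layout"])             # list of blocks: type, coordinates, content, index
lines = json.loads(row["lines"])               # list of line spans with confidence scores
page_w, page_h = json.loads(row["page_size"])  # e.g. [595.276, 841.89] in PDF points

print(len(layout), "layout blocks on a", page_w, "x", page_h, "pt page")
```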
Example row: pdf_name = 2302.06555, page_number = 0. The field values below follow the schema order: markdown, html, layout, lines, images, equations, tables, page_size, content_list, base_layout_detection, and pdf_info.
markdown:

# Do Vision and Language Models Share Concepts? A Vector Space Alignment Study

Jiaang Li† Yova Kementchedjhieva‡ Constanza Fierro† Anders Søgaard†
† University of Copenhagen ‡ Mohamed bin Zayed University of Artificial Intelligence
{jili,c.fierro,soegaard}@di.ku.dk, [email protected]

# Abstract

Large-scale pretrained language models (LMs) are said to “lack the ability to connect utterances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).1

# 1 Introduction

The debate around whether LMs can be said to understand is often portrayed as a back-and-forth between two opposing sides (Mitchell and Krakauer, 2023), but in reality, there are many positions. Some researchers have argued that LMs are ‘all syntax, no semantics’, i.e., that they learn form, but not meaning (Searle, 1980; Bender and Koller, 2020; Marcus et al., 2023).2 Others have argued that LMs have inferential semantics, but not referential semantics (Rapaport, 2002; Sahlgren and Carlsson, 2021; Piantadosi and Hill, 2022),3 whereas some have posited that a form of externalist referential semantics is possible, at least for chatbots engaged in direct conversation (Cappelen and Dever, 2021; Butlin, 2021; Mollo and Millière, 2023; Mandelkern and Linzen, 2023). Most researchers agree, however, that LMs “lack the ability to connect utterances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023).

This study provides evidence to the contrary: Language models and computer vision models (VMs) are trained on independent data sources (at least for unsupervised computer vision models). The only common source of bias is the world. If LMs and VMs exhibit similarities, it must be because they both model the world. We examine the representations learned by different LMs and VMs by measuring how similar their geometries are. We consistently find that the better the LMs are, the more they induce representations similar to those induced by computer vision models. The similarity between the two spaces is such that from a very small set of parallel examples we are able to linearly project VMs representations to the language space and retrieve highly accurate captions, as shown by the examples in Figure 1.

Contributions. We present a series of evaluations of the vector spaces induced by three families of VMs and four families of LMs, i.e., a total of fourteen VMs and fourteen LMs. We show that within each family, the larger the LMs, the more their vector spaces become structurally similar to
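The introduction reproduced in the markdown field above describes linearly projecting vision-model representations into the language space from a very small set of parallel examples and then retrieving captions by similarity. The paper's exact procedure is not given on this page; the snippet below is only a generic least-squares sketch of that kind of alignment, with all arrays hypothetical (random data stands in for real vision and language embeddings, so the reported precision is near chance by construction).

```python
import numpy as np

# Hypothetical paired embeddings for the same concepts:
# V: (n, d_v) vision-model vectors, L: (n, d_l) language-model vectors.
rng = np.random.default_rng(0)
V = rng.normal(size=(200, 512))
L = rng.normal(size=(200, 768))

n_seed = 20  # the "very small set of parallel examples"
W, *_ = np.linalg.lstsq(V[:n_seed], L[:n_seed], rcond=None)  # (d_v, d_l) linear map

projected = V @ W  # project every vision vector into the language space

def normalize(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

# Retrieve the nearest language vector for each projected image (cosine similarity).
sims = normalize(projected) @ normalize(L).T
retrieved = sims.argmax(axis=1)
precision_at_1 = (retrieved == np.arange(len(V))).mean()
print(f"P@1 on random data: {precision_at_1:.3f}")
```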
<h1>Do Vision and Language Models Share Concepts? A Vector Space Alignment Study</h1> <p>Jiaang Li† Yova Kementchedjhieva‡ Constanza Fierro† Anders Søgaard†</p> <p>† University of Copenhagen ‡ Mohamed bin Zayed University of Artificial Intelligence {jili,c.fierro,soegaard}@di.ku.dk, [email protected]</p> <h1>Abstract</h1> <p>Large-scale pretrained language models (LMs) are said to “lack the ability to con- nect utterances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to rep- resentations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT and LLaMA-2) and three vision model architec- tures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially con- verge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).1</p> <h1>1 Introduction</h1> <p>The debate around whether LMs can be said to un- derstand is often portrayed as a back-and-forth be- tween two opposing sides (Mitchell and Krakauer, 2023), but in reality, there are many positions. Some researchers have argued that LMs are ‘all syn- tax, no semantics’, i.e., that they learn form, but not meaning (Searle, 1980; Bender and Koller, 2020; Marcus et al., 2023).2 Others have argued that LMs</p> <p>have inferential semantics, but not referential se- mantics (Rapaport, 2002; Sahlgren and Carlsson, 2021; Piantadosi and Hill, 2022),3 whereas some have posited that a form of externalist referential semantics is possible, at least for chatbots engaged in direct conversation (Cappelen and Dever, 2021; Butlin, 2021; Mollo and Millière, 2023; Mandelk- ern and Linzen, 2023). Most researchers agree, however, that LMs “lack the ability to connect ut- terances to the world” (Bender and Koller, 2020), because they do not have “mental models of the world” (Mitchell and Krakauer, 2023).</p> <p>This study provides evidence to the contrary: Language models and computer vision models (VMs) are trained on independent data sources (at least for unsupervised computer vision models). The only common source of bias is the world. If LMs and VMs exhibit similarities, it must be be- cause they both model the world. We examine the representations learned by different LMs and VMs by measuring how similar their geometries are. We consistently find that the better the LMs are, the more they induce representations similar to those induced by computer vision models. The similarity between the two spaces is such that from a very small set of parallel examples we are able to linearly project VMs representations to the lan- guage space and retrieve highly accurate captions, as shown by the examples in Figure 1.</p> <p>Contributions. We present a series of evalua- tions of the vector spaces induced by three families of VMs and four families of LMs, i.e., a total of fourteen VMs and fourteen LMs. We show that within each family, the larger the LMs, the more their vector spaces become structurally similar to</p>
[{"type": "title", "coordinates": [143, 67, 454, 102], "content": "Do Vision and Language Models Share Concepts?\nA Vector Space Alignment Study", "block_type": "title", "index": 1}, {"type": "text", "coordinates": [109, 112, 489, 127], "content": "Jiaang Li\u2020 Yova Kementchedjhieva\u2021 Constanza Fierro\u2020 Anders S\u00f8gaard\u2020", "block_type": "text", "index": 2}, {"type": "text", "coordinates": [54, 140, 549, 183], "content": "\u2020 University of Copenhagen\n\u2021 Mohamed bin Zayed University of Artificial Intelligence\n{jili,c.fierro,soegaard}@di.ku.dk, [email protected]", "block_type": "text", "index": 3}, {"type": "title", "coordinates": [159, 206, 204, 217], "content": "Abstract", "block_type": "title", "index": 4}, {"type": "text", "coordinates": [93, 233, 270, 459], "content": "Large-scale pretrained language models\n(LMs) are said to \u201clack the ability to con-\nnect utterances to the world\u201d (Bender and\nKoller, 2020), because they do not have\n\u201cmental models of the world\u201d (Mitchell and\nKrakauer, 2023). If so, one would expect\nLM representations to be unrelated to rep-\nresentations induced by vision models. We\npresent an empirical evaluation across four\nfamilies of LMs (BERT, GPT-2, OPT and\nLLaMA-2) and three vision model architec-\ntures (ResNet, SegFormer, and MAE). Our\nexperiments show that LMs partially con-\nverge towards representations isomorphic to\nthose of vision models, subject to dispersion,\npolysemy and frequency. This has important\nimplications for both multi-modal processing\nand the LM understanding debate (Mitchell\nand Krakauer, 2023).1", "block_type": "text", "index": 5}, {"type": "title", "coordinates": [72, 483, 155, 496], "content": "1 Introduction", "block_type": "title", "index": 6}, {"type": "text", "coordinates": [71, 506, 292, 613], "content": "The debate around whether LMs can be said to un-\nderstand is often portrayed as a back-and-forth be-\ntween two opposing sides (Mitchell and Krakauer,\n2023), but in reality, there are many positions.\nSome researchers have argued that LMs are \u2018all syn-\ntax, no semantics\u2019, i.e., that they learn form, but not\nmeaning (Searle, 1980; Bender and Koller, 2020;\nMarcus et al., 2023).2 Others have argued that LMs", "block_type": "text", "index": 7}, {"type": "text", "coordinates": [306, 206, 528, 368], "content": "have inferential semantics, but not referential se-\nmantics (Rapaport, 2002; Sahlgren and Carlsson,\n2021; Piantadosi and Hill, 2022),3 whereas some\nhave posited that a form of externalist referential\nsemantics is possible, at least for chatbots engaged\nin direct conversation (Cappelen and Dever, 2021;\nButlin, 2021; Mollo and Milli\u00e8re, 2023; Mandelk-\nern and Linzen, 2023). Most researchers agree,\nhowever, that LMs \u201clack the ability to connect ut-\nterances to the world\u201d (Bender and Koller, 2020),\nbecause they do not have \u201cmental models of the\nworld\u201d (Mitchell and Krakauer, 2023).", "block_type": "text", "index": 8}, {"type": "text", "coordinates": [306, 369, 528, 599], "content": "This study provides evidence to the contrary:\nLanguage models and computer vision models\n(VMs) are trained on independent data sources (at\nleast for unsupervised computer vision models).\nThe only common source of bias is the world. If\nLMs and VMs exhibit similarities, it must be be-\ncause they both model the world. We examine\nthe representations learned by different LMs and\nVMs by measuring how similar their geometries\nare. 
We consistently find that the better the LMs\nare, the more they induce representations similar\nto those induced by computer vision models. The\nsimilarity between the two spaces is such that from\na very small set of parallel examples we are able\nto linearly project VMs representations to the lan-\nguage space and retrieve highly accurate captions,\nas shown by the examples in Figure 1.", "block_type": "text", "index": 9}, {"type": "text", "coordinates": [306, 609, 527, 691], "content": "Contributions. We present a series of evalua-\ntions of the vector spaces induced by three families\nof VMs and four families of LMs, i.e., a total of\nfourteen VMs and fourteen LMs. We show that\nwithin each family, the larger the LMs, the more\ntheir vector spaces become structurally similar to", "block_type": "text", "index": 10}]
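The JSON blob above is the row's layout field: a list of blocks, each carrying a type, bounding-box coordinates, the extracted text content, and a reading-order index. A small sketch, assuming the field has already been decoded from its JSON string (as `layout` in the loading example earlier), that rebuilds a markdown-like view from those blocks:

```python
def layout_to_markdown(layout_blocks):
    """Render layout blocks (type/coordinates/content/index) back into markdown-ish text."""
    parts = []
    for block in sorted(layout_blocks, key=lambda b: b["index"]):
        text = block["content"].replace("\n", " ").strip()
        if block["type"] == "title":
            parts.append(f"# {text}")
        else:  # "text" blocks (and anything else) become plain paragraphs
            parts.append(text)
    return "\n\n".join(parts)

# Example with the first block shown in the layout field above.
sample = [{"type": "title",
           "coordinates": [143, 67, 454, 102],
           "content": "Do Vision and Language Models Share Concepts?\nA Vector Space Alignment Study",
           "block_type": "title",
           "index": 1}]
print(layout_to_markdown(sample))
```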
[{"type": "text", "coordinates": [145, 69, 451, 84], "content": "Do Vision and Language Models Share Concepts?", "score": 1.0, "index": 1}, {"type": "text", "coordinates": [198, 85, 398, 101], "content": "A Vector Space Alignment Study", "score": 1.0, "index": 2}, {"type": "text", "coordinates": [110, 113, 488, 127], "content": "Jiaang Li\u2020 Yova Kementchedjhieva\u2021 Constanza Fierro\u2020 Anders S\u00f8gaard\u2020", "score": 1.0, "index": 3}, {"type": "text", "coordinates": [231, 138, 369, 158], "content": "\u2020 University of Copenhagen", "score": 1.0, "index": 4}, {"type": "text", "coordinates": [158, 152, 442, 173], "content": "\u2021 Mohamed bin Zayed University of Artificial Intelligence", "score": 1.0, "index": 5}, {"type": "text", "coordinates": [53, 169, 549, 185], "content": "{jili,c.fierro,soegaard}@di.ku.dk, [email protected]", "score": 1.0, "index": 6}, {"type": "text", "coordinates": [158, 204, 205, 218], "content": "Abstract", "score": 1.0, "index": 7}, {"type": "text", "coordinates": [92, 233, 269, 245], "content": "Large-scale pretrained language models", "score": 1.0, "index": 8}, {"type": "text", "coordinates": [93, 246, 270, 256], "content": "(LMs) are said to \u201clack the ability to con-", "score": 1.0, "index": 9}, {"type": "text", "coordinates": [92, 258, 269, 267], "content": "nect utterances to the world\u201d (Bender and", "score": 1.0, "index": 10}, {"type": "text", "coordinates": [92, 269, 269, 280], "content": "Koller, 2020), because they do not have", "score": 1.0, "index": 11}, {"type": "text", "coordinates": [91, 281, 269, 291], "content": "\u201cmental models of the world\u201d (Mitchell and", "score": 1.0, "index": 12}, {"type": "text", "coordinates": [92, 293, 270, 304], "content": "Krakauer, 2023). If so, one would expect", "score": 1.0, "index": 13}, {"type": "text", "coordinates": [92, 304, 270, 317], "content": "LM representations to be unrelated to rep-", "score": 1.0, "index": 14}, {"type": "text", "coordinates": [92, 317, 270, 328], "content": "resentations induced by vision models. We", "score": 1.0, "index": 15}, {"type": "text", "coordinates": [92, 329, 270, 340], "content": "present an empirical evaluation across four", "score": 1.0, "index": 16}, {"type": "text", "coordinates": [92, 341, 270, 351], "content": "families of LMs (BERT, GPT-2, OPT and", "score": 1.0, "index": 17}, {"type": "text", "coordinates": [93, 354, 270, 363], "content": "LLaMA-2) and three vision model architec-", "score": 1.0, "index": 18}, {"type": "text", "coordinates": [93, 365, 269, 375], "content": "tures (ResNet, SegFormer, and MAE). Our", "score": 1.0, "index": 19}, {"type": "text", "coordinates": [93, 377, 270, 388], "content": "experiments show that LMs partially con-", "score": 1.0, "index": 20}, {"type": "text", "coordinates": [93, 390, 269, 399], "content": "verge towards representations isomorphic to", "score": 1.0, "index": 21}, {"type": "text", "coordinates": [92, 400, 269, 412], "content": "those of vision models, subject to dispersion,", "score": 1.0, "index": 22}, {"type": "text", "coordinates": [94, 413, 269, 424], "content": "polysemy and frequency. 
This has important", "score": 1.0, "index": 23}, {"type": "text", "coordinates": [92, 424, 269, 436], "content": "implications for both multi-modal processing", "score": 1.0, "index": 24}, {"type": "text", "coordinates": [93, 437, 270, 448], "content": "and the LM understanding debate (Mitchell", "score": 1.0, "index": 25}, {"type": "text", "coordinates": [92, 448, 182, 459], "content": "and Krakauer, 2023).1", "score": 1.0, "index": 26}, {"type": "text", "coordinates": [72, 484, 79, 493], "content": "1", "score": 1.0, "index": 27}, {"type": "text", "coordinates": [87, 483, 156, 496], "content": "Introduction", "score": 1.0, "index": 28}, {"type": "text", "coordinates": [71, 507, 292, 517], "content": "The debate around whether LMs can be said to un-", "score": 1.0, "index": 29}, {"type": "text", "coordinates": [71, 520, 292, 533], "content": "derstand is often portrayed as a back-and-forth be-", "score": 1.0, "index": 30}, {"type": "text", "coordinates": [70, 534, 292, 545], "content": "tween two opposing sides (Mitchell and Krakauer,", "score": 1.0, "index": 31}, {"type": "text", "coordinates": [71, 546, 293, 560], "content": "2023), but in reality, there are many positions.", "score": 1.0, "index": 32}, {"type": "text", "coordinates": [71, 561, 292, 573], "content": "Some researchers have argued that LMs are \u2018all syn-", "score": 1.0, "index": 33}, {"type": "text", "coordinates": [71, 575, 291, 585], "content": "tax, no semantics\u2019, i.e., that they learn form, but not", "score": 1.0, "index": 34}, {"type": "text", "coordinates": [70, 588, 291, 599], "content": "meaning (Searle, 1980; Bender and Koller, 2020;", "score": 1.0, "index": 35}, {"type": "text", "coordinates": [70, 600, 291, 614], "content": "Marcus et al., 2023).2 Others have argued that LMs", "score": 1.0, "index": 36}, {"type": "text", "coordinates": [306, 206, 528, 218], "content": "have inferential semantics, but not referential se-", "score": 1.0, "index": 37}, {"type": "text", "coordinates": [306, 220, 527, 232], "content": "mantics (Rapaport, 2002; Sahlgren and Carlsson,", "score": 1.0, "index": 38}, {"type": "text", "coordinates": [306, 232, 527, 245], "content": "2021; Piantadosi and Hill, 2022),3 whereas some", "score": 1.0, "index": 39}, {"type": "text", "coordinates": [306, 247, 527, 259], "content": "have posited that a form of externalist referential", "score": 1.0, "index": 40}, {"type": "text", "coordinates": [307, 260, 527, 273], "content": "semantics is possible, at least for chatbots engaged", "score": 1.0, "index": 41}, {"type": "text", "coordinates": [306, 274, 528, 286], "content": "in direct conversation (Cappelen and Dever, 2021;", "score": 1.0, "index": 42}, {"type": "text", "coordinates": [307, 288, 527, 298], "content": "Butlin, 2021; Mollo and Milli\u00e8re, 2023; Mandelk-", "score": 1.0, "index": 43}, {"type": "text", "coordinates": [306, 300, 528, 313], "content": "ern and Linzen, 2023). 
Most researchers agree,", "score": 1.0, "index": 44}, {"type": "text", "coordinates": [306, 314, 528, 326], "content": "however, that LMs \u201clack the ability to connect ut-", "score": 1.0, "index": 45}, {"type": "text", "coordinates": [307, 329, 527, 339], "content": "terances to the world\u201d (Bender and Koller, 2020),", "score": 1.0, "index": 46}, {"type": "text", "coordinates": [307, 342, 526, 353], "content": "because they do not have \u201cmental models of the", "score": 1.0, "index": 47}, {"type": "text", "coordinates": [306, 356, 476, 367], "content": "world\u201d (Mitchell and Krakauer, 2023).", "score": 1.0, "index": 48}, {"type": "text", "coordinates": [318, 370, 527, 382], "content": "This study provides evidence to the contrary:", "score": 1.0, "index": 49}, {"type": "text", "coordinates": [306, 383, 526, 396], "content": "Language models and computer vision models", "score": 1.0, "index": 50}, {"type": "text", "coordinates": [306, 397, 527, 410], "content": "(VMs) are trained on independent data sources (at", "score": 1.0, "index": 51}, {"type": "text", "coordinates": [305, 410, 528, 423], "content": "least for unsupervised computer vision models).", "score": 1.0, "index": 52}, {"type": "text", "coordinates": [306, 424, 528, 436], "content": "The only common source of bias is the world. If", "score": 1.0, "index": 53}, {"type": "text", "coordinates": [307, 438, 527, 448], "content": "LMs and VMs exhibit similarities, it must be be-", "score": 1.0, "index": 54}, {"type": "text", "coordinates": [307, 452, 526, 462], "content": "cause they both model the world. We examine", "score": 1.0, "index": 55}, {"type": "text", "coordinates": [307, 465, 527, 477], "content": "the representations learned by different LMs and", "score": 1.0, "index": 56}, {"type": "text", "coordinates": [307, 478, 526, 490], "content": "VMs by measuring how similar their geometries", "score": 1.0, "index": 57}, {"type": "text", "coordinates": [307, 492, 526, 503], "content": "are. We consistently find that the better the LMs", "score": 1.0, "index": 58}, {"type": "text", "coordinates": [307, 506, 526, 516], "content": "are, the more they induce representations similar", "score": 1.0, "index": 59}, {"type": "text", "coordinates": [307, 520, 526, 531], "content": "to those induced by computer vision models. The", "score": 1.0, "index": 60}, {"type": "text", "coordinates": [307, 533, 526, 545], "content": "similarity between the two spaces is such that from", "score": 1.0, "index": 61}, {"type": "text", "coordinates": [307, 547, 526, 558], "content": "a very small set of parallel examples we are able", "score": 1.0, "index": 62}, {"type": "text", "coordinates": [307, 560, 527, 572], "content": "to linearly project VMs representations to the lan-", "score": 1.0, "index": 63}, {"type": "text", "coordinates": [306, 573, 527, 586], "content": "guage space and retrieve highly accurate captions,", "score": 1.0, "index": 64}, {"type": "text", "coordinates": [307, 588, 474, 599], "content": "as shown by the examples in Figure 1.", "score": 1.0, "index": 65}, {"type": "text", "coordinates": [307, 610, 527, 622], "content": "Contributions. 
We present a series of evalua-", "score": 1.0, "index": 66}, {"type": "text", "coordinates": [306, 624, 526, 636], "content": "tions of the vector spaces induced by three families", "score": 1.0, "index": 67}, {"type": "text", "coordinates": [307, 637, 527, 649], "content": "of VMs and four families of LMs, i.e., a total of", "score": 1.0, "index": 68}, {"type": "text", "coordinates": [306, 651, 527, 663], "content": "fourteen VMs and fourteen LMs. We show that", "score": 1.0, "index": 69}, {"type": "text", "coordinates": [307, 665, 526, 676], "content": "within each family, the larger the LMs, the more", "score": 1.0, "index": 70}, {"type": "text", "coordinates": [307, 678, 526, 690], "content": "their vector spaces become structurally similar to", "score": 1.0, "index": 71}]
images: []
equations: []
tables: []
page_size: [595.2760009765625, 841.8900146484375]
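The page_size value is the page width and height in PDF points (1 pt = 1/72 inch). Converting it to millimetres is a quick sanity check that this is a standard A4 page:

```python
# Convert the page_size values (PDF points, 1 pt = 1/72 inch) to millimetres.
PT_TO_MM = 25.4 / 72

width_pt, height_pt = 595.2760009765625, 841.8900146484375
width_mm, height_mm = width_pt * PT_TO_MM, height_pt * PT_TO_MM
print(f"{width_mm:.1f} x {height_mm:.1f} mm")  # ~210.0 x 297.0 mm -> A4
```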
[{"type": "text", "text": "Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ", "text_level": 1, "page_idx": 0}, {"type": "text", "text": "Jiaang Li\u2020 Yova Kementchedjhieva\u2021 Constanza Fierro\u2020 Anders S\u00f8gaard\u2020 ", "page_idx": 0}, {"type": "text", "text": "\u2020 University of Copenhagen \u2021 Mohamed bin Zayed University of Artificial Intelligence {jili,c.fierro,soegaard}@di.ku.dk, [email protected] ", "page_idx": 0}, {"type": "text", "text": "Abstract ", "text_level": 1, "page_idx": 0}, {"type": "text", "text": "Large-scale pretrained language models (LMs) are said to \u201clack the ability to connect utterances to the world\u201d (Bender and Koller, 2020), because they do not have \u201cmental models of the world\u201d (Mitchell and Krakauer, 2023). If so, one would expect LM representations to be unrelated to representations induced by vision models. We present an empirical evaluation across four families of LMs (BERT, GPT-2, OPT and LLaMA-2) and three vision model architectures (ResNet, SegFormer, and MAE). Our experiments show that LMs partially converge towards representations isomorphic to those of vision models, subject to dispersion, polysemy and frequency. This has important implications for both multi-modal processing and the LM understanding debate (Mitchell and Krakauer, 2023).1 ", "page_idx": 0}, {"type": "text", "text": "1 Introduction ", "text_level": 1, "page_idx": 0}, {"type": "text", "text": "The debate around whether LMs can be said to understand is often portrayed as a back-and-forth between two opposing sides (Mitchell and Krakauer, 2023), but in reality, there are many positions. Some researchers have argued that LMs are \u2018all syntax, no semantics\u2019, i.e., that they learn form, but not meaning (Searle, 1980; Bender and Koller, 2020; Marcus et al., 2023).2 Others have argued that LMs have inferential semantics, but not referential semantics (Rapaport, 2002; Sahlgren and Carlsson, 2021; Piantadosi and Hill, 2022),3 whereas some have posited that a form of externalist referential semantics is possible, at least for chatbots engaged in direct conversation (Cappelen and Dever, 2021; Butlin, 2021; Mollo and Milli\u00e8re, 2023; Mandelkern and Linzen, 2023). Most researchers agree, however, that LMs \u201clack the ability to connect utterances to the world\u201d (Bender and Koller, 2020), because they do not have \u201cmental models of the world\u201d (Mitchell and Krakauer, 2023). ", "page_idx": 0}, {"type": "text", "text": "", "page_idx": 0}, {"type": "text", "text": "This study provides evidence to the contrary: Language models and computer vision models (VMs) are trained on independent data sources (at least for unsupervised computer vision models). The only common source of bias is the world. If LMs and VMs exhibit similarities, it must be because they both model the world. We examine the representations learned by different LMs and VMs by measuring how similar their geometries are. We consistently find that the better the LMs are, the more they induce representations similar to those induced by computer vision models. The similarity between the two spaces is such that from a very small set of parallel examples we are able to linearly project VMs representations to the language space and retrieve highly accurate captions, as shown by the examples in Figure 1. ", "page_idx": 0}, {"type": "text", "text": "Contributions. 
We present a series of evaluations of the vector spaces induced by three families of VMs and four families of LMs, i.e., a total of fourteen VMs and fourteen LMs. We show that within each family, the larger the LMs, the more their vector spaces become structurally similar to those of computer vision models. This enables retrieval of language representations of images (referential semantics) with minimal supervision. Retrieval precision depends on dispersion of image and language, polysemy, and frequency, but consistently improves with language model size. We discuss the implications of the finding that language and computer vision models learn representations with similar geometries. ", "page_idx": 0}]
[{"category_id": 1, "poly": [851.0445556640625, 572.8573608398438, 1465.8782958984375, 572.8573608398438, 1465.8782958984375, 1023.3301391601562, 851.0445556640625, 1023.3301391601562], "score": 0.9999955892562866}, {"category_id": 1, "poly": [850.8772583007812, 1026.036865234375, 1465.61279296875, 1026.036865234375, 1465.61279296875, 1666.1845703125, 850.8772583007812, 1666.1845703125], "score": 0.9999898672103882}, {"category_id": 1, "poly": [258.86773681640625, 649.2800903320312, 750.6548461914062, 649.2800903320312, 750.6548461914062, 1277.31884765625, 258.86773681640625, 1277.31884765625], "score": 0.99998539686203}, {"category_id": 0, "poly": [398.98492431640625, 188.02796936035156, 1260.0943603515625, 188.02796936035156, 1260.0943603515625, 284.3391418457031, 398.98492431640625, 284.3391418457031], "score": 0.9999737739562988}, {"category_id": 1, "poly": [151.57057189941406, 389.889404296875, 1526.064453125, 389.889404296875, 1526.064453125, 510.83349609375, 151.57057189941406, 510.83349609375], "score": 0.9999648332595825}, {"category_id": 2, "poly": [850.6769409179688, 1944.5916748046875, 1463.4500732421875, 1944.5916748046875, 1463.4500732421875, 2126.70263671875, 850.6769409179688, 2126.70263671875], "score": 0.9999618530273438}, {"category_id": 1, "poly": [197.22946166992188, 1406.180419921875, 811.5084228515625, 1406.180419921875, 811.5084228515625, 1705.2373046875, 197.22946166992188, 1705.2373046875], "score": 0.9999414682388306}, {"category_id": 2, "poly": [197.9559783935547, 1736.5682373046875, 809.03662109375, 1736.5682373046875, 809.03662109375, 2128.240478515625, 197.9559783935547, 2128.240478515625], "score": 0.9999384880065918}, {"category_id": 0, "poly": [199.86529541015625, 1343.234375, 431.4802551269531, 1343.234375, 431.4802551269531, 1377.8818359375, 199.86529541015625, 1377.8818359375], "score": 0.9999099969863892}, {"category_id": 2, "poly": [38.85932159423828, 755.0780639648438, 102.26101684570312, 755.0780639648438, 102.26101684570312, 1694.00341796875, 38.85932159423828, 1694.00341796875], "score": 0.999865710735321}, {"category_id": 0, "poly": [441.41802978515625, 573.35498046875, 566.9368896484375, 573.35498046875, 566.9368896484375, 604.847900390625, 441.41802978515625, 604.847900390625], "score": 0.9998639822006226}, {"category_id": 1, "poly": [851.593994140625, 1693.8402099609375, 1463.7982177734375, 1693.8402099609375, 1463.7982177734375, 1919.6624755859375, 851.593994140625, 1919.6624755859375], "score": 0.999808669090271}, {"category_id": 1, "poly": [304.0348815917969, 313.02447509765625, 1358.3475341796875, 313.02447509765625, 1358.3475341796875, 354.631103515625, 304.0348815917969, 354.631103515625], "score": 0.9989251494407654}, {"category_id": 2, "poly": [889.4788818359375, 2096.245849609375, 1293.7803955078125, 2096.245849609375, 1293.7803955078125, 2127.533935546875, 889.4788818359375, 2127.533935546875], "score": 0.22307536005973816}, {"category_id": 15, "poly": [851.0, 574.0, 1466.0, 574.0, 1466.0, 606.0, 851.0, 606.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 613.0, 1465.0, 613.0, 1465.0, 645.0, 851.0, 645.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 646.0, 1464.0, 646.0, 1464.0, 683.0, 850.0, 683.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 688.0, 1464.0, 688.0, 1464.0, 720.0, 850.0, 720.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 725.0, 1464.0, 725.0, 1464.0, 759.0, 852.0, 759.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 762.0, 
1466.0, 762.0, 1466.0, 796.0, 850.0, 796.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 802.0, 1464.0, 802.0, 1464.0, 830.0, 853.0, 830.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 836.0, 1466.0, 836.0, 1466.0, 872.0, 851.0, 872.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 874.0, 1468.0, 874.0, 1468.0, 908.0, 850.0, 908.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 915.0, 1463.0, 915.0, 1463.0, 943.0, 852.0, 943.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 951.0, 1461.0, 951.0, 1461.0, 983.0, 852.0, 983.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 989.0, 1322.0, 989.0, 1322.0, 1020.0, 851.0, 1020.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [883.0, 1028.0, 1463.0, 1028.0, 1463.0, 1063.0, 883.0, 1063.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1066.0, 1462.0, 1066.0, 1462.0, 1101.0, 851.0, 1101.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 1103.0, 1463.0, 1103.0, 1463.0, 1140.0, 850.0, 1140.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [848.0, 1140.0, 1466.0, 1140.0, 1466.0, 1176.0, 848.0, 1176.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1179.0, 1466.0, 1179.0, 1466.0, 1213.0, 851.0, 1213.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1219.0, 1464.0, 1219.0, 1464.0, 1247.0, 852.0, 1247.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1257.0, 1462.0, 1257.0, 1462.0, 1286.0, 852.0, 1286.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1293.0, 1464.0, 1293.0, 1464.0, 1327.0, 852.0, 1327.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1330.0, 1462.0, 1330.0, 1462.0, 1363.0, 852.0, 1363.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1369.0, 1462.0, 1369.0, 1462.0, 1400.0, 853.0, 1400.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [854.0, 1408.0, 1461.0, 1408.0, 1461.0, 1436.0, 854.0, 1436.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1445.0, 1461.0, 1445.0, 1461.0, 1476.0, 852.0, 1476.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [854.0, 1483.0, 1462.0, 1483.0, 1462.0, 1514.0, 854.0, 1514.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1521.0, 1461.0, 1521.0, 1461.0, 1552.0, 853.0, 1552.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1556.0, 1464.0, 1556.0, 1464.0, 1591.0, 852.0, 1591.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1593.0, 1465.0, 1593.0, 1465.0, 1629.0, 851.0, 1629.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1634.0, 1317.0, 1634.0, 1317.0, 1665.0, 853.0, 1665.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [257.0, 648.0, 748.0, 648.0, 748.0, 681.0, 257.0, 681.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [259.0, 685.0, 751.0, 685.0, 751.0, 713.0, 259.0, 713.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [256.0, 718.0, 749.0, 718.0, 749.0, 743.0, 256.0, 743.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [258.0, 750.0, 749.0, 750.0, 749.0, 778.0, 258.0, 778.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [255.0, 781.0, 749.0, 781.0, 749.0, 811.0, 255.0, 811.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [257.0, 815.0, 751.0, 815.0, 751.0, 847.0, 257.0, 847.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [256.0, 846.0, 752.0, 846.0, 752.0, 883.0, 256.0, 883.0], 
"score": 1.0, "text": ""}, {"category_id": 15, "poly": [257.0, 882.0, 750.0, 882.0, 750.0, 913.0, 257.0, 913.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [257.0, 916.0, 751.0, 916.0, 751.0, 945.0, 257.0, 945.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [258.0, 949.0, 750.0, 949.0, 750.0, 977.0, 258.0, 977.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [259.0, 984.0, 751.0, 984.0, 751.0, 1011.0, 259.0, 1011.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [259.0, 1016.0, 749.0, 1016.0, 749.0, 1044.0, 259.0, 1044.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [260.0, 1049.0, 751.0, 1049.0, 751.0, 1080.0, 260.0, 1080.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [259.0, 1084.0, 748.0, 1084.0, 748.0, 1111.0, 259.0, 1111.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [258.0, 1113.0, 749.0, 1113.0, 749.0, 1145.0, 258.0, 1145.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [261.0, 1150.0, 748.0, 1150.0, 748.0, 1178.0, 261.0, 1178.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [258.0, 1180.0, 748.0, 1180.0, 748.0, 1213.0, 258.0, 1213.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [259.0, 1214.0, 750.0, 1214.0, 750.0, 1245.0, 259.0, 1245.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [256.0, 1247.0, 507.0, 1247.0, 507.0, 1277.0, 256.0, 1277.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [405.0, 193.0, 1252.0, 193.0, 1252.0, 234.0, 405.0, 234.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [550.0, 237.0, 1107.0, 237.0, 1107.0, 282.0, 550.0, 282.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [642.0, 385.0, 1026.0, 385.0, 1026.0, 441.0, 642.0, 441.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [439.0, 425.0, 1229.0, 425.0, 1229.0, 481.0, 439.0, 481.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [149.0, 472.0, 1524.0, 472.0, 1524.0, 515.0, 149.0, 515.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1949.0, 1463.0, 1949.0, 1463.0, 1976.0, 851.0, 1976.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1978.0, 1462.0, 1978.0, 1462.0, 2006.0, 852.0, 2006.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 2009.0, 1462.0, 2009.0, 1462.0, 2036.0, 853.0, 2036.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 2037.0, 1462.0, 2037.0, 1462.0, 2069.0, 851.0, 2069.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 2070.0, 991.0, 2070.0, 991.0, 2096.0, 853.0, 2096.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [885.0, 2096.0, 1294.0, 2096.0, 1294.0, 2131.0, 885.0, 2131.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1409.0, 812.0, 1409.0, 812.0, 1438.0, 198.0, 1438.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1445.0, 812.0, 1445.0, 812.0, 1481.0, 198.0, 1481.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 1484.0, 813.0, 1484.0, 813.0, 1516.0, 196.0, 1516.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1519.0, 814.0, 1519.0, 814.0, 1557.0, 198.0, 1557.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1559.0, 812.0, 1559.0, 812.0, 1593.0, 198.0, 1593.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1598.0, 810.0, 1598.0, 810.0, 1627.0, 199.0, 1627.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1634.0, 810.0, 1634.0, 810.0, 1666.0, 197.0, 1666.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": 
[197.0, 1669.0, 810.0, 1669.0, 810.0, 1706.0, 197.0, 1706.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [235.0, 1731.0, 476.0, 1731.0, 476.0, 1764.0, 235.0, 1764.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [522.0, 1733.0, 808.0, 1733.0, 808.0, 1763.0, 522.0, 1763.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [201.0, 1765.0, 404.0, 1765.0, 404.0, 1794.0, 201.0, 1794.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [231.0, 1788.0, 813.0, 1788.0, 813.0, 1829.0, 231.0, 1829.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1824.0, 813.0, 1824.0, 813.0, 1856.0, 197.0, 1856.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1856.0, 809.0, 1856.0, 809.0, 1885.0, 199.0, 1885.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 1887.0, 809.0, 1887.0, 809.0, 1916.0, 200.0, 1916.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1917.0, 809.0, 1917.0, 809.0, 1947.0, 198.0, 1947.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1947.0, 808.0, 1947.0, 808.0, 1976.0, 198.0, 1976.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 1980.0, 808.0, 1980.0, 808.0, 2006.0, 200.0, 2006.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 2010.0, 809.0, 2010.0, 809.0, 2037.0, 198.0, 2037.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 2039.0, 808.0, 2039.0, 808.0, 2066.0, 198.0, 2066.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [201.0, 2071.0, 809.0, 2071.0, 809.0, 2097.0, 201.0, 2097.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 2099.0, 806.0, 2099.0, 806.0, 2129.0, 197.0, 2129.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 1347.0, 220.0, 1347.0, 220.0, 1372.0, 200.0, 1372.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [244.0, 1342.0, 433.0, 1342.0, 433.0, 1379.0, 244.0, 1379.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [42.0, 760.0, 104.0, 760.0, 104.0, 1687.0, 42.0, 1687.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [439.0, 569.0, 571.0, 569.0, 571.0, 608.0, 439.0, 608.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1696.0, 1465.0, 1696.0, 1465.0, 1730.0, 853.0, 1730.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1735.0, 1462.0, 1735.0, 1462.0, 1768.0, 851.0, 1768.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1772.0, 1463.0, 1772.0, 1463.0, 1804.0, 852.0, 1804.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1809.0, 1463.0, 1809.0, 1463.0, 1842.0, 851.0, 1842.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1849.0, 1462.0, 1849.0, 1462.0, 1880.0, 853.0, 1880.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1884.0, 1462.0, 1884.0, 1462.0, 1919.0, 853.0, 1919.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [308.0, 314.0, 1356.0, 314.0, 1356.0, 353.0, 308.0, 353.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [886.0, 2094.0, 1293.0, 2094.0, 1293.0, 2131.0, 886.0, 2131.0], "score": 1.0, "text": ""}]
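The preceding blob is the base_layout_detection field: raw detector output in which each region has a category_id, a confidence score, and an 8-value poly giving the four corner coordinates (x1, y1, ..., x4, y4). A sketch that filters low-confidence regions and collapses each poly to an axis-aligned bounding box; the 0.5 threshold is an arbitrary illustrative choice, and no category-id-to-label mapping is assumed:

```python
def poly_to_bbox(poly):
    """Collapse an 8-value polygon [x1, y1, ..., x4, y4] into [x_min, y_min, x_max, y_max]."""
    xs, ys = poly[0::2], poly[1::2]
    return [min(xs), min(ys), max(xs), max(ys)]

def keep_confident(detections, min_score=0.5):
    """Drop low-score regions and attach a bbox derived from each region's poly."""
    kept = []
    for det in detections:
        if det["score"] >= min_score:
            kept.append({"category_id": det["category_id"],
                         "score": det["score"],
                         "bbox": poly_to_bbox(det["poly"])})
    return kept

# Example with the first region shown in the base_layout_detection field above.
sample = [{"category_id": 1,
           "poly": [851.04, 572.86, 1465.88, 572.86, 1465.88, 1023.33, 851.04, 1023.33],
           "score": 0.9999955892562866}]
print(keep_confident(sample))
```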
{"preproc_blocks": [{"type": "title", "bbox": [143, 67, 454, 102], "lines": [{"bbox": [145, 69, 451, 84], "spans": [{"bbox": [145, 69, 451, 84], "score": 1.0, "content": "Do Vision and Language Models Share Concepts?", "type": "text"}], "index": 0}, {"bbox": [198, 85, 398, 101], "spans": [{"bbox": [198, 85, 398, 101], "score": 1.0, "content": "A Vector Space Alignment Study", "type": "text"}], "index": 1}], "index": 0.5}, {"type": "text", "bbox": [109, 112, 489, 127], "lines": [{"bbox": [110, 113, 488, 127], "spans": [{"bbox": [110, 113, 488, 127], "score": 1.0, "content": "Jiaang Li\u2020 Yova Kementchedjhieva\u2021 Constanza Fierro\u2020 Anders S\u00f8gaard\u2020", "type": "text"}], "index": 2}], "index": 2}, {"type": "text", "bbox": [54, 140, 549, 183], "lines": [{"bbox": [231, 138, 369, 158], "spans": [{"bbox": [231, 138, 369, 158], "score": 1.0, "content": "\u2020 University of Copenhagen", "type": "text"}], "index": 3}, {"bbox": [158, 152, 442, 173], "spans": [{"bbox": [158, 152, 442, 173], "score": 1.0, "content": "\u2021 Mohamed bin Zayed University of Artificial Intelligence", "type": "text"}], "index": 4}, {"bbox": [53, 169, 549, 185], "spans": [{"bbox": [53, 169, 549, 185], "score": 1.0, "content": "{jili,c.fierro,soegaard}@di.ku.dk, [email protected]", "type": "text"}], "index": 5}], "index": 4}, {"type": "title", "bbox": [159, 206, 204, 217], "lines": [{"bbox": [158, 204, 205, 218], "spans": [{"bbox": [158, 204, 205, 218], "score": 1.0, "content": "Abstract", "type": "text"}], "index": 6}], "index": 6}, {"type": "text", "bbox": [93, 233, 270, 459], "lines": [{"bbox": [92, 233, 269, 245], "spans": [{"bbox": [92, 233, 269, 245], "score": 1.0, "content": "Large-scale pretrained language models", "type": "text"}], "index": 7}, {"bbox": [93, 246, 270, 256], "spans": [{"bbox": [93, 246, 270, 256], "score": 1.0, "content": "(LMs) are said to \u201clack the ability to con-", "type": "text"}], "index": 8}, {"bbox": [92, 258, 269, 267], "spans": [{"bbox": [92, 258, 269, 267], "score": 1.0, "content": "nect utterances to the world\u201d (Bender and", "type": "text"}], "index": 9}, {"bbox": [92, 269, 269, 280], "spans": [{"bbox": [92, 269, 269, 280], "score": 1.0, "content": "Koller, 2020), because they do not have", "type": "text"}], "index": 10}, {"bbox": [91, 281, 269, 291], "spans": [{"bbox": [91, 281, 269, 291], "score": 1.0, "content": "\u201cmental models of the world\u201d (Mitchell and", "type": "text"}], "index": 11}, {"bbox": [92, 293, 270, 304], "spans": [{"bbox": [92, 293, 270, 304], "score": 1.0, "content": "Krakauer, 2023). If so, one would expect", "type": "text"}], "index": 12}, {"bbox": [92, 304, 270, 317], "spans": [{"bbox": [92, 304, 270, 317], "score": 1.0, "content": "LM representations to be unrelated to rep-", "type": "text"}], "index": 13}, {"bbox": [92, 317, 270, 328], "spans": [{"bbox": [92, 317, 270, 328], "score": 1.0, "content": "resentations induced by vision models. 
We", "type": "text"}], "index": 14}, {"bbox": [92, 329, 270, 340], "spans": [{"bbox": [92, 329, 270, 340], "score": 1.0, "content": "present an empirical evaluation across four", "type": "text"}], "index": 15}, {"bbox": [92, 341, 270, 351], "spans": [{"bbox": [92, 341, 270, 351], "score": 1.0, "content": "families of LMs (BERT, GPT-2, OPT and", "type": "text"}], "index": 16}, {"bbox": [93, 354, 270, 363], "spans": [{"bbox": [93, 354, 270, 363], "score": 1.0, "content": "LLaMA-2) and three vision model architec-", "type": "text"}], "index": 17}, {"bbox": [93, 365, 269, 375], "spans": [{"bbox": [93, 365, 269, 375], "score": 1.0, "content": "tures (ResNet, SegFormer, and MAE). Our", "type": "text"}], "index": 18}, {"bbox": [93, 377, 270, 388], "spans": [{"bbox": [93, 377, 270, 388], "score": 1.0, "content": "experiments show that LMs partially con-", "type": "text"}], "index": 19}, {"bbox": [93, 390, 269, 399], "spans": [{"bbox": [93, 390, 269, 399], "score": 1.0, "content": "verge towards representations isomorphic to", "type": "text"}], "index": 20}, {"bbox": [92, 400, 269, 412], "spans": [{"bbox": [92, 400, 269, 412], "score": 1.0, "content": "those of vision models, subject to dispersion,", "type": "text"}], "index": 21}, {"bbox": [94, 413, 269, 424], "spans": [{"bbox": [94, 413, 269, 424], "score": 1.0, "content": "polysemy and frequency. This has important", "type": "text"}], "index": 22}, {"bbox": [92, 424, 269, 436], "spans": [{"bbox": [92, 424, 269, 436], "score": 1.0, "content": "implications for both multi-modal processing", "type": "text"}], "index": 23}, {"bbox": [93, 437, 270, 448], "spans": [{"bbox": [93, 437, 270, 448], "score": 1.0, "content": "and the LM understanding debate (Mitchell", "type": "text"}], "index": 24}, {"bbox": [92, 448, 182, 459], "spans": [{"bbox": [92, 448, 182, 459], "score": 1.0, "content": "and Krakauer, 2023).1", "type": "text"}], "index": 25}], "index": 16}, {"type": "title", "bbox": [72, 483, 155, 496], "lines": [{"bbox": [72, 483, 156, 496], "spans": [{"bbox": [72, 484, 79, 493], "score": 1.0, "content": "1", "type": "text"}, {"bbox": [87, 483, 156, 496], "score": 1.0, "content": "Introduction", "type": "text"}], "index": 26}], "index": 26}, {"type": "text", "bbox": [71, 506, 292, 613], "lines": [{"bbox": [71, 507, 292, 517], "spans": [{"bbox": [71, 507, 292, 517], "score": 1.0, "content": "The debate around whether LMs can be said to un-", "type": "text"}], "index": 27}, {"bbox": [71, 520, 292, 533], "spans": [{"bbox": [71, 520, 292, 533], "score": 1.0, "content": "derstand is often portrayed as a back-and-forth be-", "type": "text"}], "index": 28}, {"bbox": [70, 534, 292, 545], "spans": [{"bbox": [70, 534, 292, 545], "score": 1.0, "content": "tween two opposing sides (Mitchell and Krakauer,", "type": "text"}], "index": 29}, {"bbox": [71, 546, 293, 560], "spans": [{"bbox": [71, 546, 293, 560], "score": 1.0, "content": "2023), but in reality, there are many positions.", "type": "text"}], "index": 30}, {"bbox": [71, 561, 292, 573], "spans": [{"bbox": [71, 561, 292, 573], "score": 1.0, "content": "Some researchers have argued that LMs are \u2018all syn-", "type": "text"}], "index": 31}, {"bbox": [71, 575, 291, 585], "spans": [{"bbox": [71, 575, 291, 585], "score": 1.0, "content": "tax, no semantics\u2019, i.e., that they learn form, but not", "type": "text"}], "index": 32}, {"bbox": [70, 588, 291, 599], "spans": [{"bbox": [70, 588, 291, 599], "score": 1.0, "content": "meaning (Searle, 1980; Bender and Koller, 2020;", "type": "text"}], "index": 33}, 
{"bbox": [70, 600, 291, 614], "spans": [{"bbox": [70, 600, 291, 614], "score": 1.0, "content": "Marcus et al., 2023).2 Others have argued that LMs", "type": "text"}], "index": 34}], "index": 30.5}, {"type": "text", "bbox": [306, 206, 528, 368], "lines": [{"bbox": [306, 206, 528, 218], "spans": [{"bbox": [306, 206, 528, 218], "score": 1.0, "content": "have inferential semantics, but not referential se-", "type": "text"}], "index": 35}, {"bbox": [306, 220, 527, 232], "spans": [{"bbox": [306, 220, 527, 232], "score": 1.0, "content": "mantics (Rapaport, 2002; Sahlgren and Carlsson,", "type": "text"}], "index": 36}, {"bbox": [306, 232, 527, 245], "spans": [{"bbox": [306, 232, 527, 245], "score": 1.0, "content": "2021; Piantadosi and Hill, 2022),3 whereas some", "type": "text"}], "index": 37}, {"bbox": [306, 247, 527, 259], "spans": [{"bbox": [306, 247, 527, 259], "score": 1.0, "content": "have posited that a form of externalist referential", "type": "text"}], "index": 38}, {"bbox": [307, 260, 527, 273], "spans": [{"bbox": [307, 260, 527, 273], "score": 1.0, "content": "semantics is possible, at least for chatbots engaged", "type": "text"}], "index": 39}, {"bbox": [306, 274, 528, 286], "spans": [{"bbox": [306, 274, 528, 286], "score": 1.0, "content": "in direct conversation (Cappelen and Dever, 2021;", "type": "text"}], "index": 40}, {"bbox": [307, 288, 527, 298], "spans": [{"bbox": [307, 288, 527, 298], "score": 1.0, "content": "Butlin, 2021; Mollo and Milli\u00e8re, 2023; Mandelk-", "type": "text"}], "index": 41}, {"bbox": [306, 300, 528, 313], "spans": [{"bbox": [306, 300, 528, 313], "score": 1.0, "content": "ern and Linzen, 2023). Most researchers agree,", "type": "text"}], "index": 42}, {"bbox": [306, 314, 528, 326], "spans": [{"bbox": [306, 314, 528, 326], "score": 1.0, "content": "however, that LMs \u201clack the ability to connect ut-", "type": "text"}], "index": 43}, {"bbox": [307, 329, 527, 339], "spans": [{"bbox": [307, 329, 527, 339], "score": 1.0, "content": "terances to the world\u201d (Bender and Koller, 2020),", "type": "text"}], "index": 44}, {"bbox": [307, 342, 526, 353], "spans": [{"bbox": [307, 342, 526, 353], "score": 1.0, "content": "because they do not have \u201cmental models of the", "type": "text"}], "index": 45}, {"bbox": [306, 356, 476, 367], "spans": [{"bbox": [306, 356, 476, 367], "score": 1.0, "content": "world\u201d (Mitchell and Krakauer, 2023).", "type": "text"}], "index": 46}], "index": 40.5}, {"type": "text", "bbox": [306, 369, 528, 599], "lines": [{"bbox": [318, 370, 527, 382], "spans": [{"bbox": [318, 370, 527, 382], "score": 1.0, "content": "This study provides evidence to the contrary:", "type": "text"}], "index": 47}, {"bbox": [306, 383, 526, 396], "spans": [{"bbox": [306, 383, 526, 396], "score": 1.0, "content": "Language models and computer vision models", "type": "text"}], "index": 48}, {"bbox": [306, 397, 527, 410], "spans": [{"bbox": [306, 397, 527, 410], "score": 1.0, "content": "(VMs) are trained on independent data sources (at", "type": "text"}], "index": 49}, {"bbox": [305, 410, 528, 423], "spans": [{"bbox": [305, 410, 528, 423], "score": 1.0, "content": "least for unsupervised computer vision models).", "type": "text"}], "index": 50}, {"bbox": [306, 424, 528, 436], "spans": [{"bbox": [306, 424, 528, 436], "score": 1.0, "content": "The only common source of bias is the world. 
If", "type": "text"}], "index": 51}, {"bbox": [307, 438, 527, 448], "spans": [{"bbox": [307, 438, 527, 448], "score": 1.0, "content": "LMs and VMs exhibit similarities, it must be be-", "type": "text"}], "index": 52}, {"bbox": [307, 452, 526, 462], "spans": [{"bbox": [307, 452, 526, 462], "score": 1.0, "content": "cause they both model the world. We examine", "type": "text"}], "index": 53}, {"bbox": [307, 465, 527, 477], "spans": [{"bbox": [307, 465, 527, 477], "score": 1.0, "content": "the representations learned by different LMs and", "type": "text"}], "index": 54}, {"bbox": [307, 478, 526, 490], "spans": [{"bbox": [307, 478, 526, 490], "score": 1.0, "content": "VMs by measuring how similar their geometries", "type": "text"}], "index": 55}, {"bbox": [307, 492, 526, 503], "spans": [{"bbox": [307, 492, 526, 503], "score": 1.0, "content": "are. We consistently find that the better the LMs", "type": "text"}], "index": 56}, {"bbox": [307, 506, 526, 516], "spans": [{"bbox": [307, 506, 526, 516], "score": 1.0, "content": "are, the more they induce representations similar", "type": "text"}], "index": 57}, {"bbox": [307, 520, 526, 531], "spans": [{"bbox": [307, 520, 526, 531], "score": 1.0, "content": "to those induced by computer vision models. The", "type": "text"}], "index": 58}, {"bbox": [307, 533, 526, 545], "spans": [{"bbox": [307, 533, 526, 545], "score": 1.0, "content": "similarity between the two spaces is such that from", "type": "text"}], "index": 59}, {"bbox": [307, 547, 526, 558], "spans": [{"bbox": [307, 547, 526, 558], "score": 1.0, "content": "a very small set of parallel examples we are able", "type": "text"}], "index": 60}, {"bbox": [307, 560, 527, 572], "spans": [{"bbox": [307, 560, 527, 572], "score": 1.0, "content": "to linearly project VMs representations to the lan-", "type": "text"}], "index": 61}, {"bbox": [306, 573, 527, 586], "spans": [{"bbox": [306, 573, 527, 586], "score": 1.0, "content": "guage space and retrieve highly accurate captions,", "type": "text"}], "index": 62}, {"bbox": [307, 588, 474, 599], "spans": [{"bbox": [307, 588, 474, 599], "score": 1.0, "content": "as shown by the examples in Figure 1.", "type": "text"}], "index": 63}], "index": 55}, {"type": "text", "bbox": [306, 609, 527, 691], "lines": [{"bbox": [307, 610, 527, 622], "spans": [{"bbox": [307, 610, 527, 622], "score": 1.0, "content": "Contributions. We present a series of evalua-", "type": "text"}], "index": 64}, {"bbox": [306, 624, 526, 636], "spans": [{"bbox": [306, 624, 526, 636], "score": 1.0, "content": "tions of the vector spaces induced by three families", "type": "text"}], "index": 65}, {"bbox": [307, 637, 527, 649], "spans": [{"bbox": [307, 637, 527, 649], "score": 1.0, "content": "of VMs and four families of LMs, i.e., a total of", "type": "text"}], "index": 66}, {"bbox": [306, 651, 527, 663], "spans": [{"bbox": [306, 651, 527, 663], "score": 1.0, "content": "fourteen VMs and fourteen LMs. 
We show that", "type": "text"}], "index": 67}, {"bbox": [307, 665, 526, 676], "spans": [{"bbox": [307, 665, 526, 676], "score": 1.0, "content": "within each family, the larger the LMs, the more", "type": "text"}], "index": 68}, {"bbox": [307, 678, 526, 690], "spans": [{"bbox": [307, 678, 526, 690], "score": 1.0, "content": "their vector spaces become structurally similar to", "type": "text"}], "index": 69}], "index": 66.5}], "layout_bboxes": [], "page_idx": 0, "page_size": [595.2760009765625, 841.8900146484375], "_layout_tree": [], "images": [], "tables": [], "interline_equations": [], "discarded_blocks": [{"type": "discarded", "bbox": [306, 700, 527, 765], "lines": [{"bbox": [306, 701, 527, 711], "spans": [{"bbox": [306, 701, 527, 711], "score": 1.0, "content": "receives text messages in this language and follows a rule book", "type": "text"}]}, {"bbox": [307, 712, 526, 722], "spans": [{"bbox": [307, 712, 526, 722], "score": 1.0, "content": "to reply to the messages. The interlocutor is Searle\u2019s caricature", "type": "text"}]}, {"bbox": [307, 723, 526, 732], "spans": [{"bbox": [307, 723, 526, 732], "score": 1.0, "content": "of artificial intelligence, and is obviously, Searle claims, not", "type": "text"}]}, {"bbox": [306, 733, 526, 744], "spans": [{"bbox": [306, 733, 526, 744], "score": 1.0, "content": "endowed with meaning or understanding, but merely symbol", "type": "text"}]}, {"bbox": [307, 745, 357, 754], "spans": [{"bbox": [307, 745, 357, 754], "score": 1.0, "content": "manipulation.", "type": "text"}]}, {"bbox": [318, 754, 466, 767], "spans": [{"bbox": [318, 754, 466, 767], "score": 1.0, "content": "3See Marconi (1997) for this distinction.", "type": "text"}]}]}, {"type": "discarded", "bbox": [71, 625, 291, 766], "lines": [{"bbox": [84, 623, 291, 635], "spans": [{"bbox": [84, 623, 171, 635], "score": 1.0, "content": "1Code and dataset:", "type": "text"}, {"bbox": [188, 623, 291, 634], "score": 1.0, "content": "https://github.com/", "type": "text"}]}, {"bbox": [72, 635, 145, 645], "spans": [{"bbox": [72, 635, 145, 645], "score": 1.0, "content": "jiaangli/VLCA.", "type": "text"}]}, {"bbox": [83, 643, 292, 658], "spans": [{"bbox": [83, 643, 292, 658], "score": 1.0, "content": "2The idea that computers are \u2018all syntax, no semantics\u2019", "type": "text"}]}, {"bbox": [70, 656, 292, 668], "spans": [{"bbox": [70, 656, 292, 668], "score": 1.0, "content": "can be traced back to German 17th century philosopher Leib-", "type": "text"}]}, {"bbox": [71, 668, 291, 678], "spans": [{"bbox": [71, 668, 291, 678], "score": 1.0, "content": "niz\u2019s Mill Argument (Lodge and Bobro, 1998). The Mill", "type": "text"}]}, {"bbox": [72, 679, 291, 689], "spans": [{"bbox": [72, 679, 291, 689], "score": 1.0, "content": "Argument states that mental states cannot be reduced to physi-", "type": "text"}]}, {"bbox": [71, 690, 291, 700], "spans": [{"bbox": [71, 690, 291, 700], "score": 1.0, "content": "cal states, so if the capacity to understand language requires", "type": "text"}]}, {"bbox": [71, 700, 291, 711], "spans": [{"bbox": [71, 700, 291, 711], "score": 1.0, "content": "mental states, this capacity cannot be instantiated, merely", "type": "text"}]}, {"bbox": [72, 712, 291, 722], "spans": [{"bbox": [72, 712, 291, 722], "score": 1.0, "content": "imitated, by machines. 
[truncated JSON structure fields for page 0: title, author, abstract, and introduction blocks with bounding boxes and text spans, plus a discarded watermark block (arXiv:2302.06555v2 [cs.CL] 6 Jul 2024); the text content duplicates the rendered page-0 markdown]
pdf_name: 2302.06555
page_number: 1
those of computer vision models. This enables retrieval of language representations of images (referential semantics) with minimal supervision. Retrieval precision depends on the dispersion of the image and language representations, polysemy, and frequency, but consistently improves with language model size. We discuss the implications of the finding that language and computer vision models learn representations with similar geometries.

# 2 Related Work

Inspiration from cognitive science. Computational modeling is a cornerstone of cognitive science in the pursuit of a better understanding of how representations in the brain come about. As such, the field has shown a growing interest in computational representations induced with self-supervised learning (Orhan et al., 2020; Halvagal and Zenke, 2022). Cognitive scientists have also noted how the objectives of supervised language and vision models bear resemblances to predictive processing (Schrimpf et al., 2018; Goldstein et al., 2021; Caucheteux et al., 2022; Li et al., 2023); but see Antonello and Huth (2022) for a critical discussion of such work.

Studies have looked at the alignability of neural language representations and human brain activations, with more promising results as language models grow better at modeling language (Sassenhagen and Fiebach, 2020; Schrimpf et al., 2021). In these studies, the partial alignability of brain and model representations is interpreted as evidence that brain and models might process language in the same way (Caucheteux and King, 2022).

Cross-modal alignment. The idea of cross-modal retrieval is not new (Lazaridou et al., 2014), but previously it has mostly been studied with practical considerations in mind. Recently, Merullo et al. (2023) showed that language representations in LMs are functionally similar to image representations in VMs, in that a linear transformation applied to an image representation can be used to prompt a language model into producing a relevant caption. We dial back from function and study whether the concept representations converge toward structural similarity (isomorphism). The key question we address is whether, despite the lack of explicit grounding, the representations learned by large pretrained language models structurally resemble properties of the physical world as captured by vision models. More closely related to our work, Huh et al. (2024) propose a similar hypothesis, although studying it from a different perspective, and our findings corroborate theirs.

# 3 Methodology

Our primary objective is to compare the representations derived from VMs and LMs and assess their alignability, i.e., the extent to which LMs converge toward VMs' geometries. In the following sections, we introduce the procedures for obtaining the representations and aligning them, with an illustration of our methodology provided in Figure 2.

Vision models. We include fourteen VMs in our experiments, representing three model families: SegFormer (Xie et al., 2021), MAE (He et al., 2022), and ResNet (He et al., 2016). For all three types of VMs, we only employ the encoder component as a visual feature extractor.4

SegFormer models consist of a Transformer-based encoder and a lightweight feed-forward decoder. They are pretrained on object classification data and finetuned on scene parsing data for scene segmentation and object classification. We hypothesize that the reasoning necessary to
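The vision-model paragraph above breaks off at the page boundary, but the key point is already stated: only the encoder is used, as a frozen feature extractor. The sketch below illustrates that step with a SegFormer encoder from the transformers library; the checkpoint name, the example image path, and the mean-pooling step are illustrative assumptions rather than the paper's documented pipeline.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerModel

# Load an ImageNet-pretrained SegFormer encoder (checkpoint name is an assumption;
# the paper's exact checkpoints may differ).
name = "nvidia/mit-b0"
processor = AutoImageProcessor.from_pretrained(name)
encoder = SegformerModel.from_pretrained(name).eval()

image = Image.open("example.jpg").convert("RGB")     # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = encoder(**inputs)

# SegformerModel returns a spatial feature map; mean-pool it into a single
# vector that can stand in as this image's representation.
feature_map = out.last_hidden_state                  # (1, channels, H', W')
image_vec = feature_map.mean(dim=(2, 3)).squeeze(0)
print(image_vec.shape)
```

An analogous call with an MAE or ResNet backbone, with the pooling adjusted to that architecture's output shape, would produce the per-image vectors that the alignment step consumes.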
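Section 3 frames alignability as the extent to which the two geometries can be brought into correspondence from a small set of parallel examples. As a rough, self-contained illustration, assuming paired vision and language vectors have already been extracted, and using a plain least-squares map with cosine nearest-neighbour retrieval (neither of which is necessarily the paper's exact choice), the procedure looks like this:

```python
# Minimal sketch of the alignment step, not the authors' actual code.
# vm_vecs[i] and lm_vecs[i] are assumed to describe the same i-th concept.
import numpy as np

rng = np.random.default_rng(0)
n, d_vm, d_lm = 1000, 768, 1024           # hypothetical sizes
vm_vecs = rng.normal(size=(n, d_vm))      # stand-ins for real extracted features
lm_vecs = rng.normal(size=(n, d_lm))

train, test = np.arange(0, 200), np.arange(200, n)   # small supervised seed set

# Fit a linear projection W mapping vision space into language space
# by ordinary least squares on the seed pairs.
W, *_ = np.linalg.lstsq(vm_vecs[train], lm_vecs[train], rcond=None)

# Project held-out vision vectors and retrieve the nearest language vector
# by cosine similarity; precision@1 measures how often the gold concept wins.
proj = vm_vecs[test] @ W
proj /= np.linalg.norm(proj, axis=1, keepdims=True)
targets = lm_vecs / np.linalg.norm(lm_vecs, axis=1, keepdims=True)
nearest = (proj @ targets.T).argmax(axis=1)
p_at_1 = (nearest == test).mean()
print(f"precision@1 on held-out concepts: {p_at_1:.3f}")
```

Precision@1 on the held-out pairs is the kind of retrieval score that the dispersion, polysemy, and frequency analyses mentioned above condition on.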
[{"type": "text", "coordinates": [70, 63, 293, 185], "content": "those of computer vision models. This enables\nretrieval of language representations of images (ref-\nerential semantics) with minimal supervision. Re-\ntrieval precision depends on dispersion of image\nand language, polysemy, and frequency, but con-\nsistently improves with language model size. We\ndiscuss the implications of the finding that language\nand computer vision models learn representations\nwith similar geometries.", "block_type": "text", "index": 1}, {"type": "title", "coordinates": [71, 195, 162, 208], "content": "2 Related Work", "block_type": "title", "index": 2}, {"type": "text", "coordinates": [70, 217, 293, 406], "content": "Inspiration from cognitive science. Computa-\ntional modeling is a cornerstone of cognitive sci-\nence in the pursuit for a better understanding of how\nrepresentations in the brain come about. As such,\nthe field has shown a growing interest in computa-\ntional representations induced with self-supervised\nlearning (Orhan et al., 2020; Halvagal and Zenke,\n2022). Cognitive scientists have also noted how\nthe objectives of supervised language and vision\nmodels bear resemblances to predictive process-\ning (Schrimpf et al., 2018; Goldstein et al., 2021;\nCaucheteux et al., 2022; Li et al., 2023) (but see\nAntonello and Huth (2022) for a critical discussion\nof such work).", "block_type": "text", "index": 3}, {"type": "text", "coordinates": [70, 407, 293, 529], "content": "Studies have looked at the alignability of neu-\nral language representations and human brain acti-\nvations, with more promising results as language\nmodels grow better at modeling language (Sassen-\nhagen and Fiebach, 2020; Schrimpf et al., 2021).\nIn these studies, the partial alignability of brain and\nmodel representations is interpreted as evidence\nthat brain and models might process language in\nthe same way (Caucheteux and King, 2022).", "block_type": "text", "index": 4}, {"type": "text", "coordinates": [70, 535, 293, 767], "content": "Cross-modal alignment. The idea of cross-\nmodal retrieval is not new (Lazaridou et al., 2014),\nbut previously it has mostly been studied with prac-\ntical considerations in mind. Recently, Merullo\net al. (2023) showed that language representations\nin LMs are functionally similar to image repre-\nsentations in VMs, in that a linear transformation\napplied to an image representation can be used to\nprompt a language model into producing a relevant\ncaption. We dial back from function and study\nwhether the concept representations converge to-\nward structural similarity (isomorphism). The key\nquestion we address is whether despite the lack\nof explicit grounding, the representations learned\nby large pretrained language models structurally\nresemble properties of the physical world as cap-\ntured by vision models. More related to our work,", "block_type": "text", "index": 5}, {"type": "image", "coordinates": [312, 67, 520, 315], "content": "", "block_type": "image", "index": 6}, {"type": "text", "coordinates": [306, 377, 528, 417], "content": "Huh et al. 
(2024) proposes a similar hypothesis,\nalthough studying it from a different perspective,\nand our findings corroborate theirs.", "block_type": "text", "index": 7}, {"type": "title", "coordinates": [307, 427, 392, 441], "content": "3 Methodology", "block_type": "title", "index": 8}, {"type": "text", "coordinates": [306, 449, 528, 544], "content": "Our primary objective is to compare the representa-\ntions derived from VMs and LMs and assess their\nalignability, i.e. the extent to which LMs converge\ntoward VMs\u2019 geometries. In the following sections,\nwe introduce the procedures for obtaining the rep-\nresentations and aligning them, with an illustration\nof our methodology provided in Figure 2.", "block_type": "text", "index": 9}, {"type": "text", "coordinates": [306, 551, 528, 632], "content": "Vision models. We include fourteen VMs in our\nexperiments, representing three model families:\nSegFormer (Xie et al., 2021), MAE (He et al.,\n2022), and ResNet (He et al., 2016). For all three\ntypes of VMs, we only employ the encoder compo-\nnent as a visual feature extractor.4", "block_type": "text", "index": 10}, {"type": "text", "coordinates": [306, 633, 528, 715], "content": "SegFormer models consist of a Transformer-\nbased encoder and a light-weight feed-forward\ndecoder. They are pretrained on object classifi-\ncation data and finetuned on scene parsing data\nfor scene segmentation and object classification.\nWe hypothesize that the reasoning necessary to", "block_type": "text", "index": 11}]
[{"type": "text", "coordinates": [71, 65, 291, 76], "content": "those of computer vision models. This enables", "score": 1.0, "index": 1}, {"type": "text", "coordinates": [70, 79, 292, 91], "content": "retrieval of language representations of images (ref-", "score": 1.0, "index": 2}, {"type": "text", "coordinates": [71, 92, 292, 105], "content": "erential semantics) with minimal supervision. Re-", "score": 1.0, "index": 3}, {"type": "text", "coordinates": [71, 105, 291, 118], "content": "trieval precision depends on dispersion of image", "score": 1.0, "index": 4}, {"type": "text", "coordinates": [70, 119, 292, 132], "content": "and language, polysemy, and frequency, but con-", "score": 1.0, "index": 5}, {"type": "text", "coordinates": [71, 133, 291, 145], "content": "sistently improves with language model size. We", "score": 1.0, "index": 6}, {"type": "text", "coordinates": [71, 146, 291, 159], "content": "discuss the implications of the finding that language", "score": 1.0, "index": 7}, {"type": "text", "coordinates": [71, 160, 291, 172], "content": "and computer vision models learn representations", "score": 1.0, "index": 8}, {"type": "text", "coordinates": [70, 173, 178, 186], "content": "with similar geometries.", "score": 1.0, "index": 9}, {"type": "text", "coordinates": [71, 196, 79, 206], "content": "2", "score": 1.0, "index": 10}, {"type": "text", "coordinates": [88, 195, 161, 208], "content": "Related Work", "score": 1.0, "index": 11}, {"type": "text", "coordinates": [72, 218, 292, 230], "content": "Inspiration from cognitive science. Computa-", "score": 1.0, "index": 12}, {"type": "text", "coordinates": [71, 232, 292, 244], "content": "tional modeling is a cornerstone of cognitive sci-", "score": 1.0, "index": 13}, {"type": "text", "coordinates": [72, 246, 291, 257], "content": "ence in the pursuit for a better understanding of how", "score": 1.0, "index": 14}, {"type": "text", "coordinates": [72, 259, 291, 270], "content": "representations in the brain come about. As such,", "score": 1.0, "index": 15}, {"type": "text", "coordinates": [70, 272, 292, 285], "content": "the field has shown a growing interest in computa-", "score": 1.0, "index": 16}, {"type": "text", "coordinates": [71, 286, 291, 297], "content": "tional representations induced with self-supervised", "score": 1.0, "index": 17}, {"type": "text", "coordinates": [70, 299, 292, 311], "content": "learning (Orhan et al., 2020; Halvagal and Zenke,", "score": 1.0, "index": 18}, {"type": "text", "coordinates": [72, 313, 291, 324], "content": "2022). 
Cognitive scientists have also noted how", "score": 1.0, "index": 19}, {"type": "text", "coordinates": [71, 326, 291, 338], "content": "the objectives of supervised language and vision", "score": 1.0, "index": 20}, {"type": "text", "coordinates": [70, 340, 292, 352], "content": "models bear resemblances to predictive process-", "score": 1.0, "index": 21}, {"type": "text", "coordinates": [70, 354, 292, 365], "content": "ing (Schrimpf et al., 2018; Goldstein et al., 2021;", "score": 1.0, "index": 22}, {"type": "text", "coordinates": [70, 366, 291, 379], "content": "Caucheteux et al., 2022; Li et al., 2023) (but see", "score": 1.0, "index": 23}, {"type": "text", "coordinates": [71, 380, 290, 392], "content": "Antonello and Huth (2022) for a critical discussion", "score": 1.0, "index": 24}, {"type": "text", "coordinates": [71, 394, 135, 406], "content": "of such work).", "score": 1.0, "index": 25}, {"type": "text", "coordinates": [82, 408, 292, 419], "content": "Studies have looked at the alignability of neu-", "score": 1.0, "index": 26}, {"type": "text", "coordinates": [71, 421, 292, 433], "content": "ral language representations and human brain acti-", "score": 1.0, "index": 27}, {"type": "text", "coordinates": [71, 434, 291, 448], "content": "vations, with more promising results as language", "score": 1.0, "index": 28}, {"type": "text", "coordinates": [71, 449, 292, 461], "content": "models grow better at modeling language (Sassen-", "score": 1.0, "index": 29}, {"type": "text", "coordinates": [72, 462, 291, 473], "content": "hagen and Fiebach, 2020; Schrimpf et al., 2021).", "score": 1.0, "index": 30}, {"type": "text", "coordinates": [70, 475, 291, 488], "content": "In these studies, the partial alignability of brain and", "score": 1.0, "index": 31}, {"type": "text", "coordinates": [71, 489, 291, 501], "content": "model representations is interpreted as evidence", "score": 1.0, "index": 32}, {"type": "text", "coordinates": [70, 502, 291, 515], "content": "that brain and models might process language in", "score": 1.0, "index": 33}, {"type": "text", "coordinates": [71, 516, 266, 528], "content": "the same way (Caucheteux and King, 2022).", "score": 1.0, "index": 34}, {"type": "text", "coordinates": [72, 538, 292, 549], "content": "Cross-modal alignment. The idea of cross-", "score": 1.0, "index": 35}, {"type": "text", "coordinates": [71, 552, 292, 562], "content": "modal retrieval is not new (Lazaridou et al., 2014),", "score": 1.0, "index": 36}, {"type": "text", "coordinates": [71, 565, 292, 576], "content": "but previously it has mostly been studied with prac-", "score": 1.0, "index": 37}, {"type": "text", "coordinates": [70, 577, 290, 590], "content": "tical considerations in mind. Recently, Merullo", "score": 1.0, "index": 38}, {"type": "text", "coordinates": [70, 591, 291, 603], "content": "et al. 
(2023) showed that language representations", "score": 1.0, "index": 39}, {"type": "text", "coordinates": [70, 605, 291, 617], "content": "in LMs are functionally similar to image repre-", "score": 1.0, "index": 40}, {"type": "text", "coordinates": [71, 619, 291, 630], "content": "sentations in VMs, in that a linear transformation", "score": 1.0, "index": 41}, {"type": "text", "coordinates": [71, 632, 292, 644], "content": "applied to an image representation can be used to", "score": 1.0, "index": 42}, {"type": "text", "coordinates": [70, 646, 292, 658], "content": "prompt a language model into producing a relevant", "score": 1.0, "index": 43}, {"type": "text", "coordinates": [71, 659, 291, 671], "content": "caption. We dial back from function and study", "score": 1.0, "index": 44}, {"type": "text", "coordinates": [70, 672, 292, 685], "content": "whether the concept representations converge to-", "score": 1.0, "index": 45}, {"type": "text", "coordinates": [70, 686, 291, 699], "content": "ward structural similarity (isomorphism). The key", "score": 1.0, "index": 46}, {"type": "text", "coordinates": [70, 700, 292, 712], "content": "question we address is whether despite the lack", "score": 1.0, "index": 47}, {"type": "text", "coordinates": [71, 713, 292, 726], "content": "of explicit grounding, the representations learned", "score": 1.0, "index": 48}, {"type": "text", "coordinates": [70, 727, 291, 739], "content": "by large pretrained language models structurally", "score": 1.0, "index": 49}, {"type": "text", "coordinates": [71, 741, 291, 753], "content": "resemble properties of the physical world as cap-", "score": 1.0, "index": 50}, {"type": "text", "coordinates": [71, 754, 291, 765], "content": "tured by vision models. More related to our work,", "score": 1.0, "index": 51}, {"type": "text", "coordinates": [306, 377, 528, 391], "content": "Huh et al. (2024) proposes a similar hypothesis,", "score": 1.0, "index": 52}, {"type": "text", "coordinates": [307, 392, 527, 404], "content": "although studying it from a different perspective,", "score": 1.0, "index": 53}, {"type": "text", "coordinates": [307, 406, 461, 417], "content": "and our findings corroborate theirs.", "score": 1.0, "index": 54}, {"type": "text", "coordinates": [307, 429, 314, 438], "content": "3", "score": 1.0, "index": 55}, {"type": "text", "coordinates": [322, 426, 393, 442], "content": "Methodology", "score": 1.0, "index": 56}, {"type": "text", "coordinates": [307, 451, 527, 463], "content": "Our primary objective is to compare the representa-", "score": 1.0, "index": 57}, {"type": "text", "coordinates": [307, 465, 526, 475], "content": "tions derived from VMs and LMs and assess their", "score": 1.0, "index": 58}, {"type": "text", "coordinates": [307, 478, 526, 490], "content": "alignability, i.e. the extent to which LMs converge", "score": 1.0, "index": 59}, {"type": "text", "coordinates": [306, 491, 527, 503], "content": "toward VMs\u2019 geometries. In the following sections,", "score": 1.0, "index": 60}, {"type": "text", "coordinates": [306, 503, 527, 518], "content": "we introduce the procedures for obtaining the rep-", "score": 1.0, "index": 61}, {"type": "text", "coordinates": [306, 518, 527, 530], "content": "resentations and aligning them, with an illustration", "score": 1.0, "index": 62}, {"type": "text", "coordinates": [307, 531, 488, 544], "content": "of our methodology provided in Figure 2.", "score": 1.0, "index": 63}, {"type": "text", "coordinates": [307, 552, 527, 564], "content": "Vision models. 
We include fourteen VMs in our", "score": 1.0, "index": 64}, {"type": "text", "coordinates": [307, 567, 527, 578], "content": "experiments, representing three model families:", "score": 1.0, "index": 65}, {"type": "text", "coordinates": [307, 579, 527, 591], "content": "SegFormer (Xie et al., 2021), MAE (He et al.,", "score": 1.0, "index": 66}, {"type": "text", "coordinates": [307, 593, 527, 605], "content": "2022), and ResNet (He et al., 2016). For all three", "score": 1.0, "index": 67}, {"type": "text", "coordinates": [307, 607, 527, 619], "content": "types of VMs, we only employ the encoder compo-", "score": 1.0, "index": 68}, {"type": "text", "coordinates": [305, 621, 456, 631], "content": "nent as a visual feature extractor.4", "score": 1.0, "index": 69}, {"type": "text", "coordinates": [318, 634, 528, 646], "content": "SegFormer models consist of a Transformer-", "score": 1.0, "index": 70}, {"type": "text", "coordinates": [306, 647, 527, 659], "content": "based encoder and a light-weight feed-forward", "score": 1.0, "index": 71}, {"type": "text", "coordinates": [307, 661, 527, 673], "content": "decoder. They are pretrained on object classifi-", "score": 1.0, "index": 72}, {"type": "text", "coordinates": [307, 674, 526, 687], "content": "cation data and finetuned on scene parsing data", "score": 1.0, "index": 73}, {"type": "text", "coordinates": [306, 688, 527, 700], "content": "for scene segmentation and object classification.", "score": 1.0, "index": 74}, {"type": "text", "coordinates": [307, 701, 526, 714], "content": "We hypothesize that the reasoning necessary to", "score": 1.0, "index": 75}]
[{"coordinates": [312, 67, 520, 315], "index": 65.75, "caption": " (text). Gold labels are in green.", "caption_coordinates": [305, 329, 528, 355]}]
[]
[]
[595.2760009765625, 841.8900146484375]
[{"type": "text", "text": "", "page_idx": 1}, {"type": "text", "text": "2 Related Work ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "Inspiration from cognitive science. Computational modeling is a cornerstone of cognitive science in the pursuit for a better understanding of how representations in the brain come about. As such, the field has shown a growing interest in computational representations induced with self-supervised learning (Orhan et al., 2020; Halvagal and Zenke, 2022). Cognitive scientists have also noted how the objectives of supervised language and vision models bear resemblances to predictive processing (Schrimpf et al., 2018; Goldstein et al., 2021; Caucheteux et al., 2022; Li et al., 2023) (but see Antonello and Huth (2022) for a critical discussion of such work). ", "page_idx": 1}, {"type": "text", "text": "Studies have looked at the alignability of neural language representations and human brain activations, with more promising results as language models grow better at modeling language (Sassenhagen and Fiebach, 2020; Schrimpf et al., 2021). In these studies, the partial alignability of brain and model representations is interpreted as evidence that brain and models might process language in the same way (Caucheteux and King, 2022). ", "page_idx": 1}, {"type": "text", "text": "Cross-modal alignment. The idea of crossmodal retrieval is not new (Lazaridou et al., 2014), but previously it has mostly been studied with practical considerations in mind. Recently, Merullo et al. (2023) showed that language representations in LMs are functionally similar to image representations in VMs, in that a linear transformation applied to an image representation can be used to prompt a language model into producing a relevant caption. We dial back from function and study whether the concept representations converge toward structural similarity (isomorphism). The key question we address is whether despite the lack of explicit grounding, the representations learned by large pretrained language models structurally resemble properties of the physical world as captured by vision models. More related to our work, ", "page_idx": 1}, {"type": "image", "img_path": "images/addc48e2a3820de9c4200831b5f87c1b67899ae49ba5c504c50eabcbba29c272.jpg", "img_caption": ["Figure 1: Mapping from $\\mathbf{MAE}_{\\mathrm{Huge}}$ (images) to $\\mathrm{OPT}_{30\\mathrm{B}}$ (text). Gold labels are in green. "], "img_footnote": [], "page_idx": 1}, {"type": "text", "text": "Huh et al. (2024) proposes a similar hypothesis, although studying it from a different perspective, and our findings corroborate theirs. ", "page_idx": 1}, {"type": "text", "text": "3 Methodology ", "text_level": 1, "page_idx": 1}, {"type": "text", "text": "Our primary objective is to compare the representations derived from VMs and LMs and assess their alignability, i.e. the extent to which LMs converge toward VMs\u2019 geometries. In the following sections, we introduce the procedures for obtaining the representations and aligning them, with an illustration of our methodology provided in Figure 2. ", "page_idx": 1}, {"type": "text", "text": "Vision models. We include fourteen VMs in our experiments, representing three model families: SegFormer (Xie et al., 2021), MAE (He et al., 2022), and ResNet (He et al., 2016). 
For all three types of VMs, we only employ the encoder component as a visual feature extractor.4 ", "page_idx": 1}, {"type": "text", "text": "SegFormer models consist of a Transformerbased encoder and a light-weight feed-forward decoder. They are pretrained on object classification data and finetuned on scene parsing data for scene segmentation and object classification. We hypothesize that the reasoning necessary to ", "page_idx": 1}]
[{"category_id": 1, "poly": [194.3141632080078, 176.55699157714844, 814.8945922851562, 176.55699157714844, 814.8945922851562, 515.8296508789062, 194.3141632080078, 515.8296508789062], "score": 0.9999968409538269}, {"category_id": 2, "poly": [848.525634765625, 2004.5821533203125, 1466.481201171875, 2004.5821533203125, 1466.481201171875, 2128.156982421875, 848.525634765625, 2128.156982421875], "score": 0.9999914169311523}, {"category_id": 1, "poly": [196.13275146484375, 605.0405883789062, 813.8555297851562, 605.0405883789062, 813.8555297851562, 1129.2491455078125, 196.13275146484375, 1129.2491455078125], "score": 0.999990701675415}, {"category_id": 1, "poly": [849.9745483398438, 1531.32763671875, 1466.8675537109375, 1531.32763671875, 1466.8675537109375, 1756.6495361328125, 849.9745483398438, 1756.6495361328125], "score": 0.9999873042106628}, {"category_id": 1, "poly": [195.0758056640625, 1488.0035400390625, 813.5328979492188, 1488.0035400390625, 813.5328979492188, 2131.31640625, 195.0758056640625, 2131.31640625], "score": 0.9999861717224121}, {"category_id": 1, "poly": [197.01490783691406, 1132.119873046875, 813.2503662109375, 1132.119873046875, 813.2503662109375, 1471.2578125, 197.01490783691406, 1471.2578125], "score": 0.99998539686203}, {"category_id": 3, "poly": [866.4552001953125, 188.44313049316406, 1444.2437744140625, 188.44313049316406, 1444.2437744140625, 877.3353881835938, 866.4552001953125, 877.3353881835938], "score": 0.9999812841415405}, {"category_id": 1, "poly": [850.5169677734375, 1760.2696533203125, 1466.5247802734375, 1760.2696533203125, 1466.5247802734375, 1986.8740234375, 850.5169677734375, 1986.8740234375], "score": 0.9999799728393555}, {"category_id": 1, "poly": [850.647216796875, 1249.7265625, 1466.7557373046875, 1249.7265625, 1466.7557373046875, 1512.461181640625, 850.647216796875, 1512.461181640625], "score": 0.9999769926071167}, {"category_id": 0, "poly": [852.3656616210938, 1188.229736328125, 1089.9410400390625, 1188.229736328125, 1089.9410400390625, 1226.435791015625, 852.3656616210938, 1226.435791015625], "score": 0.9999736547470093}, {"category_id": 0, "poly": [197.42074584960938, 543.5637817382812, 452.0062255859375, 543.5637817382812, 452.0062255859375, 579.2445068359375, 197.42074584960938, 579.2445068359375], "score": 0.9998921155929565}, {"category_id": 1, "poly": [850.7444458007812, 1049.48876953125, 1467.40185546875, 1049.48876953125, 1467.40185546875, 1160.7408447265625, 850.7444458007812, 1160.7408447265625], "score": 0.999397873878479}, {"category_id": 4, "poly": [849.1942138671875, 915.796875, 1465.5360107421875, 915.796875, 1465.5360107421875, 988.3284301757812, 849.1942138671875, 988.3284301757812], "score": 0.976058840751648}, {"category_id": 1, "poly": [849.5391235351562, 914.3115234375, 1465.300537109375, 914.3115234375, 1465.300537109375, 988.04638671875, 849.5391235351562, 988.04638671875], "score": 0.2375212162733078}, {"category_id": 13, "poly": [850, 951, 949, 951, 949, 987, 850, 987], "score": 0.79, "latex": "\\mathrm{OPT}_{30\\mathrm{B}}"}, {"category_id": 13, "poly": [1181, 915, 1305, 915, 1305, 951, 1181, 951], "score": 0.53, "latex": "\\mathbf{MAE}_{\\mathrm{Huge}}"}, {"category_id": 15, "poly": [199.0, 183.0, 809.0, 183.0, 809.0, 213.0, 199.0, 213.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 220.0, 812.0, 220.0, 812.0, 253.0, 196.0, 253.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 257.0, 813.0, 257.0, 813.0, 292.0, 199.0, 292.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 
294.0, 809.0, 294.0, 809.0, 330.0, 198.0, 330.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 331.0, 813.0, 331.0, 813.0, 369.0, 196.0, 369.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 370.0, 810.0, 370.0, 810.0, 403.0, 198.0, 403.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 406.0, 808.0, 406.0, 808.0, 444.0, 198.0, 444.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 445.0, 809.0, 445.0, 809.0, 479.0, 199.0, 479.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 481.0, 494.0, 481.0, 494.0, 517.0, 197.0, 517.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [886.0, 2002.0, 1466.0, 2002.0, 1466.0, 2041.0, 886.0, 2041.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 2038.0, 1464.0, 2038.0, 1464.0, 2069.0, 852.0, 2069.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 2069.0, 1464.0, 2069.0, 1464.0, 2097.0, 851.0, 2097.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 2100.0, 1362.0, 2100.0, 1362.0, 2128.0, 852.0, 2128.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 608.0, 813.0, 608.0, 813.0, 641.0, 200.0, 641.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 645.0, 813.0, 645.0, 813.0, 678.0, 198.0, 678.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 685.0, 809.0, 685.0, 809.0, 715.0, 200.0, 715.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 722.0, 810.0, 722.0, 810.0, 752.0, 200.0, 752.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 756.0, 811.0, 756.0, 811.0, 793.0, 196.0, 793.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 797.0, 809.0, 797.0, 809.0, 827.0, 198.0, 827.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 833.0, 812.0, 833.0, 812.0, 866.0, 197.0, 866.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 871.0, 808.0, 871.0, 808.0, 901.0, 200.0, 901.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 907.0, 809.0, 907.0, 809.0, 940.0, 198.0, 940.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 945.0, 811.0, 945.0, 811.0, 978.0, 197.0, 978.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 984.0, 813.0, 984.0, 813.0, 1015.0, 196.0, 1015.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1019.0, 808.0, 1019.0, 808.0, 1054.0, 197.0, 1054.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1058.0, 807.0, 1058.0, 807.0, 1089.0, 198.0, 1089.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1097.0, 377.0, 1097.0, 377.0, 1128.0, 199.0, 1128.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1535.0, 1464.0, 1535.0, 1464.0, 1568.0, 852.0, 1568.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1576.0, 1465.0, 1576.0, 1465.0, 1606.0, 852.0, 1606.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [854.0, 1611.0, 1464.0, 1611.0, 1464.0, 1642.0, 854.0, 1642.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1650.0, 1463.0, 1650.0, 1463.0, 1681.0, 853.0, 1681.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1688.0, 1463.0, 1688.0, 1463.0, 1720.0, 852.0, 1720.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [849.0, 1726.0, 1266.0, 1726.0, 1266.0, 1754.0, 849.0, 1754.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [201.0, 1495.0, 813.0, 1495.0, 813.0, 1526.0, 201.0, 1526.0], "score": 1.0, "text": ""}, 
{"category_id": 15, "poly": [199.0, 1534.0, 811.0, 1534.0, 811.0, 1562.0, 199.0, 1562.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1570.0, 811.0, 1570.0, 811.0, 1602.0, 199.0, 1602.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 1605.0, 807.0, 1605.0, 807.0, 1641.0, 196.0, 1641.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1644.0, 808.0, 1644.0, 808.0, 1677.0, 197.0, 1677.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 1682.0, 810.0, 1682.0, 810.0, 1716.0, 196.0, 1716.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1721.0, 810.0, 1721.0, 810.0, 1752.0, 199.0, 1752.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1758.0, 811.0, 1758.0, 811.0, 1791.0, 198.0, 1791.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1795.0, 811.0, 1795.0, 811.0, 1828.0, 197.0, 1828.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1832.0, 809.0, 1832.0, 809.0, 1866.0, 199.0, 1866.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1869.0, 811.0, 1869.0, 811.0, 1905.0, 197.0, 1905.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 1906.0, 808.0, 1906.0, 808.0, 1943.0, 196.0, 1943.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1946.0, 811.0, 1946.0, 811.0, 1979.0, 197.0, 1979.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1983.0, 811.0, 1983.0, 811.0, 2017.0, 198.0, 2017.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 2021.0, 808.0, 2021.0, 808.0, 2055.0, 197.0, 2055.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 2059.0, 810.0, 2059.0, 810.0, 2092.0, 198.0, 2092.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 2097.0, 810.0, 2097.0, 810.0, 2126.0, 199.0, 2126.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [229.0, 1134.0, 812.0, 1134.0, 812.0, 1166.0, 229.0, 1166.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1171.0, 813.0, 1171.0, 813.0, 1204.0, 198.0, 1204.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1207.0, 809.0, 1207.0, 809.0, 1245.0, 198.0, 1245.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [199.0, 1248.0, 812.0, 1248.0, 812.0, 1281.0, 199.0, 1281.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [200.0, 1286.0, 810.0, 1286.0, 810.0, 1316.0, 200.0, 1316.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [196.0, 1321.0, 810.0, 1321.0, 810.0, 1356.0, 196.0, 1356.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1361.0, 809.0, 1361.0, 809.0, 1392.0, 198.0, 1392.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [197.0, 1397.0, 808.0, 1397.0, 808.0, 1432.0, 197.0, 1432.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 1436.0, 739.0, 1436.0, 739.0, 1469.0, 198.0, 1469.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [884.0, 1762.0, 1466.0, 1762.0, 1466.0, 1796.0, 884.0, 1796.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1800.0, 1463.0, 1800.0, 1463.0, 1833.0, 851.0, 1833.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1837.0, 1465.0, 1837.0, 1465.0, 1872.0, 852.0, 1872.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1875.0, 1462.0, 1875.0, 1462.0, 1910.0, 852.0, 1910.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 1913.0, 1464.0, 1913.0, 1464.0, 1946.0, 850.0, 1946.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": 
[852.0, 1950.0, 1462.0, 1950.0, 1462.0, 1984.0, 852.0, 1984.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1253.0, 1464.0, 1253.0, 1464.0, 1287.0, 853.0, 1287.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1292.0, 1462.0, 1292.0, 1462.0, 1321.0, 852.0, 1321.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1328.0, 1462.0, 1328.0, 1462.0, 1362.0, 853.0, 1362.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 1365.0, 1465.0, 1365.0, 1465.0, 1399.0, 850.0, 1399.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 1400.0, 1464.0, 1400.0, 1464.0, 1439.0, 850.0, 1439.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 1441.0, 1463.0, 1441.0, 1463.0, 1473.0, 851.0, 1473.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [852.0, 1477.0, 1357.0, 1477.0, 1357.0, 1512.0, 852.0, 1512.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [853.0, 1193.0, 872.0, 1193.0, 872.0, 1219.0, 853.0, 1219.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [896.0, 1185.0, 1091.0, 1185.0, 1091.0, 1230.0, 896.0, 1230.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [198.0, 546.0, 222.0, 546.0, 222.0, 575.0, 198.0, 575.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [245.0, 544.0, 449.0, 544.0, 449.0, 578.0, 245.0, 578.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 1050.0, 1466.0, 1050.0, 1466.0, 1088.0, 850.0, 1088.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [854.0, 1090.0, 1465.0, 1090.0, 1465.0, 1123.0, 854.0, 1123.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [854.0, 1130.0, 1280.0, 1130.0, 1280.0, 1159.0, 854.0, 1159.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [851.0, 916.0, 1180.0, 916.0, 1180.0, 957.0, 851.0, 957.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [1306.0, 916.0, 1461.0, 916.0, 1461.0, 957.0, 1306.0, 957.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [950.0, 954.0, 1339.0, 954.0, 1339.0, 990.0, 950.0, 990.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [850.0, 914.0, 1180.0, 914.0, 1180.0, 956.0, 850.0, 956.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [1306.0, 914.0, 1461.0, 914.0, 1461.0, 956.0, 1306.0, 956.0], "score": 1.0, "text": ""}, {"category_id": 15, "poly": [950.0, 955.0, 1340.0, 955.0, 1340.0, 989.0, 950.0, 989.0], "score": 1.0, "text": ""}]
{"preproc_blocks": [{"type": "text", "bbox": [70, 63, 293, 185], "lines": [{"bbox": [71, 65, 291, 76], "spans": [{"bbox": [71, 65, 291, 76], "score": 1.0, "content": "those of computer vision models. This enables", "type": "text"}], "index": 0}, {"bbox": [70, 79, 292, 91], "spans": [{"bbox": [70, 79, 292, 91], "score": 1.0, "content": "retrieval of language representations of images (ref-", "type": "text"}], "index": 1}, {"bbox": [71, 92, 292, 105], "spans": [{"bbox": [71, 92, 292, 105], "score": 1.0, "content": "erential semantics) with minimal supervision. Re-", "type": "text"}], "index": 2}, {"bbox": [71, 105, 291, 118], "spans": [{"bbox": [71, 105, 291, 118], "score": 1.0, "content": "trieval precision depends on dispersion of image", "type": "text"}], "index": 3}, {"bbox": [70, 119, 292, 132], "spans": [{"bbox": [70, 119, 292, 132], "score": 1.0, "content": "and language, polysemy, and frequency, but con-", "type": "text"}], "index": 4}, {"bbox": [71, 133, 291, 145], "spans": [{"bbox": [71, 133, 291, 145], "score": 1.0, "content": "sistently improves with language model size. We", "type": "text"}], "index": 5}, {"bbox": [71, 146, 291, 159], "spans": [{"bbox": [71, 146, 291, 159], "score": 1.0, "content": "discuss the implications of the finding that language", "type": "text"}], "index": 6}, {"bbox": [71, 160, 291, 172], "spans": [{"bbox": [71, 160, 291, 172], "score": 1.0, "content": "and computer vision models learn representations", "type": "text"}], "index": 7}, {"bbox": [70, 173, 178, 186], "spans": [{"bbox": [70, 173, 178, 186], "score": 1.0, "content": "with similar geometries.", "type": "text"}], "index": 8}], "index": 4}, {"type": "title", "bbox": [71, 195, 162, 208], "lines": [{"bbox": [71, 195, 161, 208], "spans": [{"bbox": [71, 196, 79, 206], "score": 1.0, "content": "2", "type": "text"}, {"bbox": [88, 195, 161, 208], "score": 1.0, "content": "Related Work", "type": "text"}], "index": 9}], "index": 9}, {"type": "text", "bbox": [70, 217, 293, 406], "lines": [{"bbox": [72, 218, 292, 230], "spans": [{"bbox": [72, 218, 292, 230], "score": 1.0, "content": "Inspiration from cognitive science. Computa-", "type": "text"}], "index": 10}, {"bbox": [71, 232, 292, 244], "spans": [{"bbox": [71, 232, 292, 244], "score": 1.0, "content": "tional modeling is a cornerstone of cognitive sci-", "type": "text"}], "index": 11}, {"bbox": [72, 246, 291, 257], "spans": [{"bbox": [72, 246, 291, 257], "score": 1.0, "content": "ence in the pursuit for a better understanding of how", "type": "text"}], "index": 12}, {"bbox": [72, 259, 291, 270], "spans": [{"bbox": [72, 259, 291, 270], "score": 1.0, "content": "representations in the brain come about. As such,", "type": "text"}], "index": 13}, {"bbox": [70, 272, 292, 285], "spans": [{"bbox": [70, 272, 292, 285], "score": 1.0, "content": "the field has shown a growing interest in computa-", "type": "text"}], "index": 14}, {"bbox": [71, 286, 291, 297], "spans": [{"bbox": [71, 286, 291, 297], "score": 1.0, "content": "tional representations induced with self-supervised", "type": "text"}], "index": 15}, {"bbox": [70, 299, 292, 311], "spans": [{"bbox": [70, 299, 292, 311], "score": 1.0, "content": "learning (Orhan et al., 2020; Halvagal and Zenke,", "type": "text"}], "index": 16}, {"bbox": [72, 313, 291, 324], "spans": [{"bbox": [72, 313, 291, 324], "score": 1.0, "content": "2022). 
Cognitive scientists have also noted how", "type": "text"}], "index": 17}, {"bbox": [71, 326, 291, 338], "spans": [{"bbox": [71, 326, 291, 338], "score": 1.0, "content": "the objectives of supervised language and vision", "type": "text"}], "index": 18}, {"bbox": [70, 340, 292, 352], "spans": [{"bbox": [70, 340, 292, 352], "score": 1.0, "content": "models bear resemblances to predictive process-", "type": "text"}], "index": 19}, {"bbox": [70, 354, 292, 365], "spans": [{"bbox": [70, 354, 292, 365], "score": 1.0, "content": "ing (Schrimpf et al., 2018; Goldstein et al., 2021;", "type": "text"}], "index": 20}, {"bbox": [70, 366, 291, 379], "spans": [{"bbox": [70, 366, 291, 379], "score": 1.0, "content": "Caucheteux et al., 2022; Li et al., 2023) (but see", "type": "text"}], "index": 21}, {"bbox": [71, 380, 290, 392], "spans": [{"bbox": [71, 380, 290, 392], "score": 1.0, "content": "Antonello and Huth (2022) for a critical discussion", "type": "text"}], "index": 22}, {"bbox": [71, 394, 135, 406], "spans": [{"bbox": [71, 394, 135, 406], "score": 1.0, "content": "of such work).", "type": "text"}], "index": 23}], "index": 16.5}, {"type": "text", "bbox": [70, 407, 293, 529], "lines": [{"bbox": [82, 408, 292, 419], "spans": [{"bbox": [82, 408, 292, 419], "score": 1.0, "content": "Studies have looked at the alignability of neu-", "type": "text"}], "index": 24}, {"bbox": [71, 421, 292, 433], "spans": [{"bbox": [71, 421, 292, 433], "score": 1.0, "content": "ral language representations and human brain acti-", "type": "text"}], "index": 25}, {"bbox": [71, 434, 291, 448], "spans": [{"bbox": [71, 434, 291, 448], "score": 1.0, "content": "vations, with more promising results as language", "type": "text"}], "index": 26}, {"bbox": [71, 449, 292, 461], "spans": [{"bbox": [71, 449, 292, 461], "score": 1.0, "content": "models grow better at modeling language (Sassen-", "type": "text"}], "index": 27}, {"bbox": [72, 462, 291, 473], "spans": [{"bbox": [72, 462, 291, 473], "score": 1.0, "content": "hagen and Fiebach, 2020; Schrimpf et al., 2021).", "type": "text"}], "index": 28}, {"bbox": [70, 475, 291, 488], "spans": [{"bbox": [70, 475, 291, 488], "score": 1.0, "content": "In these studies, the partial alignability of brain and", "type": "text"}], "index": 29}, {"bbox": [71, 489, 291, 501], "spans": [{"bbox": [71, 489, 291, 501], "score": 1.0, "content": "model representations is interpreted as evidence", "type": "text"}], "index": 30}, {"bbox": [70, 502, 291, 515], "spans": [{"bbox": [70, 502, 291, 515], "score": 1.0, "content": "that brain and models might process language in", "type": "text"}], "index": 31}, {"bbox": [71, 516, 266, 528], "spans": [{"bbox": [71, 516, 266, 528], "score": 1.0, "content": "the same way (Caucheteux and King, 2022).", "type": "text"}], "index": 32}], "index": 28}, {"type": "text", "bbox": [70, 535, 293, 767], "lines": [{"bbox": [72, 538, 292, 549], "spans": [{"bbox": [72, 538, 292, 549], "score": 1.0, "content": "Cross-modal alignment. The idea of cross-", "type": "text"}], "index": 33}, {"bbox": [71, 552, 292, 562], "spans": [{"bbox": [71, 552, 292, 562], "score": 1.0, "content": "modal retrieval is not new (Lazaridou et al., 2014),", "type": "text"}], "index": 34}, {"bbox": [71, 565, 292, 576], "spans": [{"bbox": [71, 565, 292, 576], "score": 1.0, "content": "but previously it has mostly been studied with prac-", "type": "text"}], "index": 35}, {"bbox": [70, 577, 290, 590], "spans": [{"bbox": [70, 577, 290, 590], "score": 1.0, "content": "tical considerations in mind. 
Recently, Merullo", "type": "text"}], "index": 36}, {"bbox": [70, 591, 291, 603], "spans": [{"bbox": [70, 591, 291, 603], "score": 1.0, "content": "et al. (2023) showed that language representations", "type": "text"}], "index": 37}, {"bbox": [70, 605, 291, 617], "spans": [{"bbox": [70, 605, 291, 617], "score": 1.0, "content": "in LMs are functionally similar to image repre-", "type": "text"}], "index": 38}, {"bbox": [71, 619, 291, 630], "spans": [{"bbox": [71, 619, 291, 630], "score": 1.0, "content": "sentations in VMs, in that a linear transformation", "type": "text"}], "index": 39}, {"bbox": [71, 632, 292, 644], "spans": [{"bbox": [71, 632, 292, 644], "score": 1.0, "content": "applied to an image representation can be used to", "type": "text"}], "index": 40}, {"bbox": [70, 646, 292, 658], "spans": [{"bbox": [70, 646, 292, 658], "score": 1.0, "content": "prompt a language model into producing a relevant", "type": "text"}], "index": 41}, {"bbox": [71, 659, 291, 671], "spans": [{"bbox": [71, 659, 291, 671], "score": 1.0, "content": "caption. We dial back from function and study", "type": "text"}], "index": 42}, {"bbox": [70, 672, 292, 685], "spans": [{"bbox": [70, 672, 292, 685], "score": 1.0, "content": "whether the concept representations converge to-", "type": "text"}], "index": 43}, {"bbox": [70, 686, 291, 699], "spans": [{"bbox": [70, 686, 291, 699], "score": 1.0, "content": "ward structural similarity (isomorphism). The key", "type": "text"}], "index": 44}, {"bbox": [70, 700, 292, 712], "spans": [{"bbox": [70, 700, 292, 712], "score": 1.0, "content": "question we address is whether despite the lack", "type": "text"}], "index": 45}, {"bbox": [71, 713, 292, 726], "spans": [{"bbox": [71, 713, 292, 726], "score": 1.0, "content": "of explicit grounding, the representations learned", "type": "text"}], "index": 46}, {"bbox": [70, 727, 291, 739], "spans": [{"bbox": [70, 727, 291, 739], "score": 1.0, "content": "by large pretrained language models structurally", "type": "text"}], "index": 47}, {"bbox": [71, 741, 291, 753], "spans": [{"bbox": [71, 741, 291, 753], "score": 1.0, "content": "resemble properties of the physical world as cap-", "type": "text"}], "index": 48}, {"bbox": [71, 754, 291, 765], "spans": [{"bbox": [71, 754, 291, 765], "score": 1.0, "content": "tured by vision models. 
More related to our work,", "type": "text"}], "index": 49}], "index": 41}, {"type": "image", "bbox": [312, 67, 520, 315], "blocks": [{"type": "image_body", "bbox": [312, 67, 520, 315], "group_id": 0, "lines": [{"bbox": [312, 67, 520, 315], "spans": [{"bbox": [312, 67, 520, 315], "score": 0.9999812841415405, "type": "image", "image_path": "addc48e2a3820de9c4200831b5f87c1b67899ae49ba5c504c50eabcbba29c272.jpg"}]}], "index": 60, "virtual_lines": [{"bbox": [312, 67, 520, 79], "spans": [], "index": 50}, {"bbox": [312, 79, 520, 91], "spans": [], "index": 51}, {"bbox": [312, 91, 520, 103], "spans": [], "index": 52}, {"bbox": [312, 103, 520, 115], "spans": [], "index": 53}, {"bbox": [312, 115, 520, 127], "spans": [], "index": 54}, {"bbox": [312, 127, 520, 139], "spans": [], "index": 55}, {"bbox": [312, 139, 520, 151], "spans": [], "index": 56}, {"bbox": [312, 151, 520, 163], "spans": [], "index": 57}, {"bbox": [312, 163, 520, 175], "spans": [], "index": 58}, {"bbox": [312, 175, 520, 187], "spans": [], "index": 59}, {"bbox": [312, 187, 520, 199], "spans": [], "index": 60}, {"bbox": [312, 199, 520, 211], "spans": [], "index": 61}, {"bbox": [312, 211, 520, 223], "spans": [], "index": 62}, {"bbox": [312, 223, 520, 235], "spans": [], "index": 63}, {"bbox": [312, 235, 520, 247], "spans": [], "index": 64}, {"bbox": [312, 247, 520, 259], "spans": [], "index": 65}, {"bbox": [312, 259, 520, 271], "spans": [], "index": 66}, {"bbox": [312, 271, 520, 283], "spans": [], "index": 67}, {"bbox": [312, 283, 520, 295], "spans": [], "index": 68}, {"bbox": [312, 295, 520, 307], "spans": [], "index": 69}, {"bbox": [312, 307, 520, 319], "spans": [], "index": 70}]}, {"type": "image_caption", "bbox": [305, 329, 528, 355], "group_id": 0, "lines": [{"bbox": [306, 329, 526, 344], "spans": [{"bbox": [306, 329, 425, 344], "score": 1.0, "content": "Figure 1: Mapping from ", "type": "text"}, {"bbox": [425, 329, 470, 342], "score": 0.53, "content": "\\mathbf{MAE}_{\\mathrm{Huge}}", "type": "inline_equation", "height": 13, "width": 45}, {"bbox": [470, 329, 526, 344], "score": 1.0, "content": " (images) to", "type": "text"}], "index": 71}, {"bbox": [306, 342, 482, 356], "spans": [{"bbox": [306, 342, 341, 355], "score": 0.79, "content": "\\mathrm{OPT}_{30\\mathrm{B}}", "type": "inline_equation", "height": 13, "width": 35}, {"bbox": [342, 343, 482, 356], "score": 1.0, "content": " (text). Gold labels are in green.", "type": "text"}], "index": 72}], "index": 71.5}], "index": 65.75}, {"type": "text", "bbox": [306, 377, 528, 417], "lines": [{"bbox": [306, 377, 528, 391], "spans": [{"bbox": [306, 377, 528, 391], "score": 1.0, "content": "Huh et al. 
(2024) proposes a similar hypothesis,", "type": "text"}], "index": 73}, {"bbox": [307, 392, 527, 404], "spans": [{"bbox": [307, 392, 527, 404], "score": 1.0, "content": "although studying it from a different perspective,", "type": "text"}], "index": 74}, {"bbox": [307, 406, 461, 417], "spans": [{"bbox": [307, 406, 461, 417], "score": 1.0, "content": "and our findings corroborate theirs.", "type": "text"}], "index": 75}], "index": 74}, {"type": "title", "bbox": [307, 427, 392, 441], "lines": [{"bbox": [307, 426, 393, 442], "spans": [{"bbox": [307, 429, 314, 438], "score": 1.0, "content": "3", "type": "text"}, {"bbox": [322, 426, 393, 442], "score": 1.0, "content": "Methodology", "type": "text"}], "index": 76}], "index": 76}, {"type": "text", "bbox": [306, 449, 528, 544], "lines": [{"bbox": [307, 451, 527, 463], "spans": [{"bbox": [307, 451, 527, 463], "score": 1.0, "content": "Our primary objective is to compare the representa-", "type": "text"}], "index": 77}, {"bbox": [307, 465, 526, 475], "spans": [{"bbox": [307, 465, 526, 475], "score": 1.0, "content": "tions derived from VMs and LMs and assess their", "type": "text"}], "index": 78}, {"bbox": [307, 478, 526, 490], "spans": [{"bbox": [307, 478, 526, 490], "score": 1.0, "content": "alignability, i.e. the extent to which LMs converge", "type": "text"}], "index": 79}, {"bbox": [306, 491, 527, 503], "spans": [{"bbox": [306, 491, 527, 503], "score": 1.0, "content": "toward VMs\u2019 geometries. In the following sections,", "type": "text"}], "index": 80}, {"bbox": [306, 503, 527, 518], "spans": [{"bbox": [306, 503, 527, 518], "score": 1.0, "content": "we introduce the procedures for obtaining the rep-", "type": "text"}], "index": 81}, {"bbox": [306, 518, 527, 530], "spans": [{"bbox": [306, 518, 527, 530], "score": 1.0, "content": "resentations and aligning them, with an illustration", "type": "text"}], "index": 82}, {"bbox": [307, 531, 488, 544], "spans": [{"bbox": [307, 531, 488, 544], "score": 1.0, "content": "of our methodology provided in Figure 2.", "type": "text"}], "index": 83}], "index": 80}, {"type": "text", "bbox": [306, 551, 528, 632], "lines": [{"bbox": [307, 552, 527, 564], "spans": [{"bbox": [307, 552, 527, 564], "score": 1.0, "content": "Vision models. We include fourteen VMs in our", "type": "text"}], "index": 84}, {"bbox": [307, 567, 527, 578], "spans": [{"bbox": [307, 567, 527, 578], "score": 1.0, "content": "experiments, representing three model families:", "type": "text"}], "index": 85}, {"bbox": [307, 579, 527, 591], "spans": [{"bbox": [307, 579, 527, 591], "score": 1.0, "content": "SegFormer (Xie et al., 2021), MAE (He et al.,", "type": "text"}], "index": 86}, {"bbox": [307, 593, 527, 605], "spans": [{"bbox": [307, 593, 527, 605], "score": 1.0, "content": "2022), and ResNet (He et al., 2016). 
For all three", "type": "text"}], "index": 87}, {"bbox": [307, 607, 527, 619], "spans": [{"bbox": [307, 607, 527, 619], "score": 1.0, "content": "types of VMs, we only employ the encoder compo-", "type": "text"}], "index": 88}, {"bbox": [305, 621, 456, 631], "spans": [{"bbox": [305, 621, 456, 631], "score": 1.0, "content": "nent as a visual feature extractor.4", "type": "text"}], "index": 89}], "index": 86.5}, {"type": "text", "bbox": [306, 633, 528, 715], "lines": [{"bbox": [318, 634, 528, 646], "spans": [{"bbox": [318, 634, 528, 646], "score": 1.0, "content": "SegFormer models consist of a Transformer-", "type": "text"}], "index": 90}, {"bbox": [306, 647, 527, 659], "spans": [{"bbox": [306, 647, 527, 659], "score": 1.0, "content": "based encoder and a light-weight feed-forward", "type": "text"}], "index": 91}, {"bbox": [307, 661, 527, 673], "spans": [{"bbox": [307, 661, 527, 673], "score": 1.0, "content": "decoder. They are pretrained on object classifi-", "type": "text"}], "index": 92}, {"bbox": [307, 674, 526, 687], "spans": [{"bbox": [307, 674, 526, 687], "score": 1.0, "content": "cation data and finetuned on scene parsing data", "type": "text"}], "index": 93}, {"bbox": [306, 688, 527, 700], "spans": [{"bbox": [306, 688, 527, 700], "score": 1.0, "content": "for scene segmentation and object classification.", "type": "text"}], "index": 94}, {"bbox": [307, 701, 526, 714], "spans": [{"bbox": [307, 701, 526, 714], "score": 1.0, "content": "We hypothesize that the reasoning necessary to", "type": "text"}], "index": 95}], "index": 92.5}], "layout_bboxes": [], "page_idx": 1, "page_size": [595.2760009765625, 841.8900146484375], "_layout_tree": [], "images": [{"type": "image", "bbox": [312, 67, 520, 315], "blocks": [{"type": "image_body", "bbox": [312, 67, 520, 315], "group_id": 0, "lines": [{"bbox": [312, 67, 520, 315], "spans": [{"bbox": [312, 67, 520, 315], "score": 0.9999812841415405, "type": "image", "image_path": "addc48e2a3820de9c4200831b5f87c1b67899ae49ba5c504c50eabcbba29c272.jpg"}]}], "index": 60, "virtual_lines": [{"bbox": [312, 67, 520, 79], "spans": [], "index": 50}, {"bbox": [312, 79, 520, 91], "spans": [], "index": 51}, {"bbox": [312, 91, 520, 103], "spans": [], "index": 52}, {"bbox": [312, 103, 520, 115], "spans": [], "index": 53}, {"bbox": [312, 115, 520, 127], "spans": [], "index": 54}, {"bbox": [312, 127, 520, 139], "spans": [], "index": 55}, {"bbox": [312, 139, 520, 151], "spans": [], "index": 56}, {"bbox": [312, 151, 520, 163], "spans": [], "index": 57}, {"bbox": [312, 163, 520, 175], "spans": [], "index": 58}, {"bbox": [312, 175, 520, 187], "spans": [], "index": 59}, {"bbox": [312, 187, 520, 199], "spans": [], "index": 60}, {"bbox": [312, 199, 520, 211], "spans": [], "index": 61}, {"bbox": [312, 211, 520, 223], "spans": [], "index": 62}, {"bbox": [312, 223, 520, 235], "spans": [], "index": 63}, {"bbox": [312, 235, 520, 247], "spans": [], "index": 64}, {"bbox": [312, 247, 520, 259], "spans": [], "index": 65}, {"bbox": [312, 259, 520, 271], "spans": [], "index": 66}, {"bbox": [312, 271, 520, 283], "spans": [], "index": 67}, {"bbox": [312, 283, 520, 295], "spans": [], "index": 68}, {"bbox": [312, 295, 520, 307], "spans": [], "index": 69}, {"bbox": [312, 307, 520, 319], "spans": [], "index": 70}]}, {"type": "image_caption", "bbox": [305, 329, 528, 355], "group_id": 0, "lines": [{"bbox": [306, 329, 526, 344], "spans": [{"bbox": [306, 329, 425, 344], "score": 1.0, "content": "Figure 1: Mapping from ", "type": "text"}, {"bbox": [425, 329, 470, 342], "score": 0.53, "content": 
"\\mathbf{MAE}_{\\mathrm{Huge}}", "type": "inline_equation", "height": 13, "width": 45}, {"bbox": [470, 329, 526, 344], "score": 1.0, "content": " (images) to", "type": "text"}], "index": 71}, {"bbox": [306, 342, 482, 356], "spans": [{"bbox": [306, 342, 341, 355], "score": 0.79, "content": "\\mathrm{OPT}_{30\\mathrm{B}}", "type": "inline_equation", "height": 13, "width": 35}, {"bbox": [342, 343, 482, 356], "score": 1.0, "content": " (text). Gold labels are in green.", "type": "text"}], "index": 72}], "index": 71.5}], "index": 65.75}], "tables": [], "interline_equations": [], "discarded_blocks": [{"type": "discarded", "bbox": [305, 721, 528, 766], "lines": [{"bbox": [319, 720, 528, 734], "spans": [{"bbox": [319, 720, 528, 734], "score": 1.0, "content": "4We ran experiments with CLIP (Radford et al., 2021),", "type": "text"}]}, {"bbox": [307, 733, 527, 744], "spans": [{"bbox": [307, 733, 527, 744], "score": 1.0, "content": "but report on these separately, since CLIP does not meet the", "type": "text"}]}, {"bbox": [306, 744, 527, 754], "spans": [{"bbox": [306, 744, 527, 754], "score": 1.0, "content": "criteria of our study, being trained on a mixture of text and", "type": "text"}]}, {"bbox": [307, 755, 490, 766], "spans": [{"bbox": [307, 755, 490, 766], "score": 1.0, "content": "images. CLIP results are presented in Appendix C.", "type": "text"}]}]}], "need_drop": false, "drop_reason": [], "para_blocks": [{"type": "text", "bbox": [70, 63, 293, 185], "lines": [], "index": 4, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [70, 65, 292, 186], "lines_deleted": true}, {"type": "title", "bbox": [71, 195, 162, 208], "lines": [{"bbox": [71, 195, 161, 208], "spans": [{"bbox": [71, 196, 79, 206], "score": 1.0, "content": "2", "type": "text"}, {"bbox": [88, 195, 161, 208], "score": 1.0, "content": "Related Work", "type": "text"}], "index": 9}], "index": 9, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375]}, {"type": "text", "bbox": [70, 217, 293, 406], "lines": [{"bbox": [72, 218, 292, 230], "spans": [{"bbox": [72, 218, 292, 230], "score": 1.0, "content": "Inspiration from cognitive science. Computa-", "type": "text"}], "index": 10}, {"bbox": [71, 232, 292, 244], "spans": [{"bbox": [71, 232, 292, 244], "score": 1.0, "content": "tional modeling is a cornerstone of cognitive sci-", "type": "text"}], "index": 11}, {"bbox": [72, 246, 291, 257], "spans": [{"bbox": [72, 246, 291, 257], "score": 1.0, "content": "ence in the pursuit for a better understanding of how", "type": "text"}], "index": 12}, {"bbox": [72, 259, 291, 270], "spans": [{"bbox": [72, 259, 291, 270], "score": 1.0, "content": "representations in the brain come about. As such,", "type": "text"}], "index": 13}, {"bbox": [70, 272, 292, 285], "spans": [{"bbox": [70, 272, 292, 285], "score": 1.0, "content": "the field has shown a growing interest in computa-", "type": "text"}], "index": 14}, {"bbox": [71, 286, 291, 297], "spans": [{"bbox": [71, 286, 291, 297], "score": 1.0, "content": "tional representations induced with self-supervised", "type": "text"}], "index": 15}, {"bbox": [70, 299, 292, 311], "spans": [{"bbox": [70, 299, 292, 311], "score": 1.0, "content": "learning (Orhan et al., 2020; Halvagal and Zenke,", "type": "text"}], "index": 16}, {"bbox": [72, 313, 291, 324], "spans": [{"bbox": [72, 313, 291, 324], "score": 1.0, "content": "2022). 
Cognitive scientists have also noted how", "type": "text"}], "index": 17}, {"bbox": [71, 326, 291, 338], "spans": [{"bbox": [71, 326, 291, 338], "score": 1.0, "content": "the objectives of supervised language and vision", "type": "text"}], "index": 18}, {"bbox": [70, 340, 292, 352], "spans": [{"bbox": [70, 340, 292, 352], "score": 1.0, "content": "models bear resemblances to predictive process-", "type": "text"}], "index": 19}, {"bbox": [70, 354, 292, 365], "spans": [{"bbox": [70, 354, 292, 365], "score": 1.0, "content": "ing (Schrimpf et al., 2018; Goldstein et al., 2021;", "type": "text"}], "index": 20}, {"bbox": [70, 366, 291, 379], "spans": [{"bbox": [70, 366, 291, 379], "score": 1.0, "content": "Caucheteux et al., 2022; Li et al., 2023) (but see", "type": "text"}], "index": 21}, {"bbox": [71, 380, 290, 392], "spans": [{"bbox": [71, 380, 290, 392], "score": 1.0, "content": "Antonello and Huth (2022) for a critical discussion", "type": "text"}], "index": 22}, {"bbox": [71, 394, 135, 406], "spans": [{"bbox": [71, 394, 135, 406], "score": 1.0, "content": "of such work).", "type": "text"}], "index": 23}], "index": 16.5, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [70, 218, 292, 406]}, {"type": "text", "bbox": [70, 407, 293, 529], "lines": [{"bbox": [82, 408, 292, 419], "spans": [{"bbox": [82, 408, 292, 419], "score": 1.0, "content": "Studies have looked at the alignability of neu-", "type": "text"}], "index": 24}, {"bbox": [71, 421, 292, 433], "spans": [{"bbox": [71, 421, 292, 433], "score": 1.0, "content": "ral language representations and human brain acti-", "type": "text"}], "index": 25}, {"bbox": [71, 434, 291, 448], "spans": [{"bbox": [71, 434, 291, 448], "score": 1.0, "content": "vations, with more promising results as language", "type": "text"}], "index": 26}, {"bbox": [71, 449, 292, 461], "spans": [{"bbox": [71, 449, 292, 461], "score": 1.0, "content": "models grow better at modeling language (Sassen-", "type": "text"}], "index": 27}, {"bbox": [72, 462, 291, 473], "spans": [{"bbox": [72, 462, 291, 473], "score": 1.0, "content": "hagen and Fiebach, 2020; Schrimpf et al., 2021).", "type": "text"}], "index": 28}, {"bbox": [70, 475, 291, 488], "spans": [{"bbox": [70, 475, 291, 488], "score": 1.0, "content": "In these studies, the partial alignability of brain and", "type": "text"}], "index": 29}, {"bbox": [71, 489, 291, 501], "spans": [{"bbox": [71, 489, 291, 501], "score": 1.0, "content": "model representations is interpreted as evidence", "type": "text"}], "index": 30}, {"bbox": [70, 502, 291, 515], "spans": [{"bbox": [70, 502, 291, 515], "score": 1.0, "content": "that brain and models might process language in", "type": "text"}], "index": 31}, {"bbox": [71, 516, 266, 528], "spans": [{"bbox": [71, 516, 266, 528], "score": 1.0, "content": "the same way (Caucheteux and King, 2022).", "type": "text"}], "index": 32}], "index": 28, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [70, 408, 292, 528]}, {"type": "text", "bbox": [70, 535, 293, 767], "lines": [{"bbox": [72, 538, 292, 549], "spans": [{"bbox": [72, 538, 292, 549], "score": 1.0, "content": "Cross-modal alignment. 
The idea of cross-", "type": "text"}], "index": 33}, {"bbox": [71, 552, 292, 562], "spans": [{"bbox": [71, 552, 292, 562], "score": 1.0, "content": "modal retrieval is not new (Lazaridou et al., 2014),", "type": "text"}], "index": 34}, {"bbox": [71, 565, 292, 576], "spans": [{"bbox": [71, 565, 292, 576], "score": 1.0, "content": "but previously it has mostly been studied with prac-", "type": "text"}], "index": 35}, {"bbox": [70, 577, 290, 590], "spans": [{"bbox": [70, 577, 290, 590], "score": 1.0, "content": "tical considerations in mind. Recently, Merullo", "type": "text"}], "index": 36}, {"bbox": [70, 591, 291, 603], "spans": [{"bbox": [70, 591, 291, 603], "score": 1.0, "content": "et al. (2023) showed that language representations", "type": "text"}], "index": 37}, {"bbox": [70, 605, 291, 617], "spans": [{"bbox": [70, 605, 291, 617], "score": 1.0, "content": "in LMs are functionally similar to image repre-", "type": "text"}], "index": 38}, {"bbox": [71, 619, 291, 630], "spans": [{"bbox": [71, 619, 291, 630], "score": 1.0, "content": "sentations in VMs, in that a linear transformation", "type": "text"}], "index": 39}, {"bbox": [71, 632, 292, 644], "spans": [{"bbox": [71, 632, 292, 644], "score": 1.0, "content": "applied to an image representation can be used to", "type": "text"}], "index": 40}, {"bbox": [70, 646, 292, 658], "spans": [{"bbox": [70, 646, 292, 658], "score": 1.0, "content": "prompt a language model into producing a relevant", "type": "text"}], "index": 41}, {"bbox": [71, 659, 291, 671], "spans": [{"bbox": [71, 659, 291, 671], "score": 1.0, "content": "caption. We dial back from function and study", "type": "text"}], "index": 42}, {"bbox": [70, 672, 292, 685], "spans": [{"bbox": [70, 672, 292, 685], "score": 1.0, "content": "whether the concept representations converge to-", "type": "text"}], "index": 43}, {"bbox": [70, 686, 291, 699], "spans": [{"bbox": [70, 686, 291, 699], "score": 1.0, "content": "ward structural similarity (isomorphism). The key", "type": "text"}], "index": 44}, {"bbox": [70, 700, 292, 712], "spans": [{"bbox": [70, 700, 292, 712], "score": 1.0, "content": "question we address is whether despite the lack", "type": "text"}], "index": 45}, {"bbox": [71, 713, 292, 726], "spans": [{"bbox": [71, 713, 292, 726], "score": 1.0, "content": "of explicit grounding, the representations learned", "type": "text"}], "index": 46}, {"bbox": [70, 727, 291, 739], "spans": [{"bbox": [70, 727, 291, 739], "score": 1.0, "content": "by large pretrained language models structurally", "type": "text"}], "index": 47}, {"bbox": [71, 741, 291, 753], "spans": [{"bbox": [71, 741, 291, 753], "score": 1.0, "content": "resemble properties of the physical world as cap-", "type": "text"}], "index": 48}, {"bbox": [71, 754, 291, 765], "spans": [{"bbox": [71, 754, 291, 765], "score": 1.0, "content": "tured by vision models. 
More related to our work,", "type": "text"}], "index": 49}], "index": 41, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [70, 538, 292, 765]}, {"type": "image", "bbox": [312, 67, 520, 315], "blocks": [{"type": "image_body", "bbox": [312, 67, 520, 315], "group_id": 0, "lines": [{"bbox": [312, 67, 520, 315], "spans": [{"bbox": [312, 67, 520, 315], "score": 0.9999812841415405, "type": "image", "image_path": "addc48e2a3820de9c4200831b5f87c1b67899ae49ba5c504c50eabcbba29c272.jpg"}]}], "index": 60, "virtual_lines": [{"bbox": [312, 67, 520, 79], "spans": [], "index": 50}, {"bbox": [312, 79, 520, 91], "spans": [], "index": 51}, {"bbox": [312, 91, 520, 103], "spans": [], "index": 52}, {"bbox": [312, 103, 520, 115], "spans": [], "index": 53}, {"bbox": [312, 115, 520, 127], "spans": [], "index": 54}, {"bbox": [312, 127, 520, 139], "spans": [], "index": 55}, {"bbox": [312, 139, 520, 151], "spans": [], "index": 56}, {"bbox": [312, 151, 520, 163], "spans": [], "index": 57}, {"bbox": [312, 163, 520, 175], "spans": [], "index": 58}, {"bbox": [312, 175, 520, 187], "spans": [], "index": 59}, {"bbox": [312, 187, 520, 199], "spans": [], "index": 60}, {"bbox": [312, 199, 520, 211], "spans": [], "index": 61}, {"bbox": [312, 211, 520, 223], "spans": [], "index": 62}, {"bbox": [312, 223, 520, 235], "spans": [], "index": 63}, {"bbox": [312, 235, 520, 247], "spans": [], "index": 64}, {"bbox": [312, 247, 520, 259], "spans": [], "index": 65}, {"bbox": [312, 259, 520, 271], "spans": [], "index": 66}, {"bbox": [312, 271, 520, 283], "spans": [], "index": 67}, {"bbox": [312, 283, 520, 295], "spans": [], "index": 68}, {"bbox": [312, 295, 520, 307], "spans": [], "index": 69}, {"bbox": [312, 307, 520, 319], "spans": [], "index": 70}]}, {"type": "image_caption", "bbox": [305, 329, 528, 355], "group_id": 0, "lines": [{"bbox": [306, 329, 526, 344], "spans": [{"bbox": [306, 329, 425, 344], "score": 1.0, "content": "Figure 1: Mapping from ", "type": "text"}, {"bbox": [425, 329, 470, 342], "score": 0.53, "content": "\\mathbf{MAE}_{\\mathrm{Huge}}", "type": "inline_equation", "height": 13, "width": 45}, {"bbox": [470, 329, 526, 344], "score": 1.0, "content": " (images) to", "type": "text"}], "index": 71}, {"bbox": [306, 342, 482, 356], "spans": [{"bbox": [306, 342, 341, 355], "score": 0.79, "content": "\\mathrm{OPT}_{30\\mathrm{B}}", "type": "inline_equation", "height": 13, "width": 35}, {"bbox": [342, 343, 482, 356], "score": 1.0, "content": " (text). Gold labels are in green.", "type": "text"}], "index": 72}], "index": 71.5}], "index": 65.75, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375]}, {"type": "text", "bbox": [306, 377, 528, 417], "lines": [{"bbox": [306, 377, 528, 391], "spans": [{"bbox": [306, 377, 528, 391], "score": 1.0, "content": "Huh et al. 
(2024) proposes a similar hypothesis,", "type": "text"}], "index": 73}, {"bbox": [307, 392, 527, 404], "spans": [{"bbox": [307, 392, 527, 404], "score": 1.0, "content": "although studying it from a different perspective,", "type": "text"}], "index": 74}, {"bbox": [307, 406, 461, 417], "spans": [{"bbox": [307, 406, 461, 417], "score": 1.0, "content": "and our findings corroborate theirs.", "type": "text"}], "index": 75}], "index": 74, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [306, 377, 528, 417]}, {"type": "title", "bbox": [307, 427, 392, 441], "lines": [{"bbox": [307, 426, 393, 442], "spans": [{"bbox": [307, 429, 314, 438], "score": 1.0, "content": "3", "type": "text"}, {"bbox": [322, 426, 393, 442], "score": 1.0, "content": "Methodology", "type": "text"}], "index": 76}], "index": 76, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375]}, {"type": "text", "bbox": [306, 449, 528, 544], "lines": [{"bbox": [307, 451, 527, 463], "spans": [{"bbox": [307, 451, 527, 463], "score": 1.0, "content": "Our primary objective is to compare the representa-", "type": "text"}], "index": 77}, {"bbox": [307, 465, 526, 475], "spans": [{"bbox": [307, 465, 526, 475], "score": 1.0, "content": "tions derived from VMs and LMs and assess their", "type": "text"}], "index": 78}, {"bbox": [307, 478, 526, 490], "spans": [{"bbox": [307, 478, 526, 490], "score": 1.0, "content": "alignability, i.e. the extent to which LMs converge", "type": "text"}], "index": 79}, {"bbox": [306, 491, 527, 503], "spans": [{"bbox": [306, 491, 527, 503], "score": 1.0, "content": "toward VMs\u2019 geometries. In the following sections,", "type": "text"}], "index": 80}, {"bbox": [306, 503, 527, 518], "spans": [{"bbox": [306, 503, 527, 518], "score": 1.0, "content": "we introduce the procedures for obtaining the rep-", "type": "text"}], "index": 81}, {"bbox": [306, 518, 527, 530], "spans": [{"bbox": [306, 518, 527, 530], "score": 1.0, "content": "resentations and aligning them, with an illustration", "type": "text"}], "index": 82}, {"bbox": [307, 531, 488, 544], "spans": [{"bbox": [307, 531, 488, 544], "score": 1.0, "content": "of our methodology provided in Figure 2.", "type": "text"}], "index": 83}], "index": 80, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [306, 451, 527, 544]}, {"type": "text", "bbox": [306, 551, 528, 632], "lines": [{"bbox": [307, 552, 527, 564], "spans": [{"bbox": [307, 552, 527, 564], "score": 1.0, "content": "Vision models. We include fourteen VMs in our", "type": "text"}], "index": 84}, {"bbox": [307, 567, 527, 578], "spans": [{"bbox": [307, 567, 527, 578], "score": 1.0, "content": "experiments, representing three model families:", "type": "text"}], "index": 85}, {"bbox": [307, 579, 527, 591], "spans": [{"bbox": [307, 579, 527, 591], "score": 1.0, "content": "SegFormer (Xie et al., 2021), MAE (He et al.,", "type": "text"}], "index": 86}, {"bbox": [307, 593, 527, 605], "spans": [{"bbox": [307, 593, 527, 605], "score": 1.0, "content": "2022), and ResNet (He et al., 2016). 
For all three", "type": "text"}], "index": 87}, {"bbox": [307, 607, 527, 619], "spans": [{"bbox": [307, 607, 527, 619], "score": 1.0, "content": "types of VMs, we only employ the encoder compo-", "type": "text"}], "index": 88}, {"bbox": [305, 621, 456, 631], "spans": [{"bbox": [305, 621, 456, 631], "score": 1.0, "content": "nent as a visual feature extractor.4", "type": "text"}], "index": 89}], "index": 86.5, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [305, 552, 527, 631]}, {"type": "text", "bbox": [306, 633, 528, 715], "lines": [{"bbox": [318, 634, 528, 646], "spans": [{"bbox": [318, 634, 528, 646], "score": 1.0, "content": "SegFormer models consist of a Transformer-", "type": "text"}], "index": 90}, {"bbox": [306, 647, 527, 659], "spans": [{"bbox": [306, 647, 527, 659], "score": 1.0, "content": "based encoder and a light-weight feed-forward", "type": "text"}], "index": 91}, {"bbox": [307, 661, 527, 673], "spans": [{"bbox": [307, 661, 527, 673], "score": 1.0, "content": "decoder. They are pretrained on object classifi-", "type": "text"}], "index": 92}, {"bbox": [307, 674, 526, 687], "spans": [{"bbox": [307, 674, 526, 687], "score": 1.0, "content": "cation data and finetuned on scene parsing data", "type": "text"}], "index": 93}, {"bbox": [306, 688, 527, 700], "spans": [{"bbox": [306, 688, 527, 700], "score": 1.0, "content": "for scene segmentation and object classification.", "type": "text"}], "index": 94}, {"bbox": [307, 701, 526, 714], "spans": [{"bbox": [307, 701, 526, 714], "score": 1.0, "content": "We hypothesize that the reasoning necessary to", "type": "text"}], "index": 95}], "index": 92.5, "page_num": "page_1", "page_size": [595.2760009765625, 841.8900146484375], "bbox_fs": [306, 634, 528, 714]}]}
2302.06555
10
"Implications for the study of emergent prop-\nerties. The literature on large-scale, pretrained\nmo(...TRUNCATED)
"<p>Implications for the study of emergent prop-\nerties. The literature on large-scale, pretrained\(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [71, 64, 292, 171], \"content\": \"Implications for the stud(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [71, 64, 293, 78], \"content\": \"Implications for the study(...TRUNCATED)
[]
[]
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"Implications for the study of emergent properties. The literature(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [849.6249389648438, 924.4407348632812, 1466.5975341796875, 924.4407(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [71, 64, 292, 171], \"lines\": [{\"bbox\": [71(...TRUNCATED)
2302.06555
11
"# 8 Conclusion\n\nIn this work, we have studied the question of\nwhether language and computer visi(...TRUNCATED)
"<h1>8 Conclusion</h1>\n<p>In this work, we have studied the question of\nwhether language and compu(...TRUNCATED)
"[{\"type\": \"title\", \"coordinates\": [71, 64, 147, 75], \"content\": \"8 Conclusion\", \"block_t(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [71, 64, 78, 74], \"content\": \"8\", \"score\": 1.0, \"inde(...TRUNCATED)
[]
"[{\"type\": \"inline\", \"coordinates\": [70, 247, 86, 259], \"content\": \"1\\\\%\", \"caption\": (...TRUNCATED)
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"8 Conclusion \", \"text_level\": 1, \"page_idx\": 11}, {\"type\":(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [199.0985565185547, 236.55697631835938, 811.77001953125, 236.556976(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"title\", \"bbox\": [71, 64, 147, 75], \"lines\": [{\"bbox\": [71(...TRUNCATED)
2302.06555
12
"Alexis Conneau, Guillaume Lample, Marc’Aurelio\nRanzato, Ludovic Denoyer, and Hervé Jégou.\n201(...TRUNCATED)
"<p>Alexis Conneau, Guillaume Lample, Marc’Aurelio\nRanzato, Ludovic Denoyer, and Hervé Jégou.\n(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [71, 64, 292, 117], \"content\": \"Alexis Conneau, Guillaume(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [72, 65, 290, 76], \"content\": \"Alexis Conneau, Guillaume (...TRUNCATED)
[]
[]
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"Alexis Conneau, Guillaume Lample, Marc\\u2019Aurelio Ranzato, Lud(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [199.17237854003906, 757.8592529296875, 812.6544799804688, 757.8592(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [71, 64, 292, 117], \"lines\": [{\"bbox\": [72(...TRUNCATED)
2302.06555
13
"networks for improved multi-modal semantics.\nIn Proceedings of the 2014 Conference on Em-\npirical(...TRUNCATED)
"<p>networks for improved multi-modal semantics.\nIn Proceedings of the 2014 Conference on Em-\npiri(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [80, 64, 292, 130], \"content\": \"networks for improved mul(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [82, 65, 292, 77], \"content\": \"networks for improved mult(...TRUNCATED)
[]
[]
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"\", \"page_idx\": 13}, {\"type\": \"text\", \"text\": \"Douwe Kie(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [849.2387084960938, 1537.968505859375, 1466.3426513671875, 1537.968(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [80, 64, 292, 130], \"lines\": [{\"bbox\": [82(...TRUNCATED)
2302.06555
14
"Alec Radford, Jong Wook Kim, Chris Hallacy,\nAditya Ramesh, Gabriel Goh, Sandhini Agarwal,\nGirish (...TRUNCATED)
"<p>Alec Radford, Jong Wook Kim, Chris Hallacy,\nAditya Ramesh, Gabriel Goh, Sandhini Agarwal,\nGiri(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [71, 63, 292, 157], \"content\": \"Alec Radford, Jong Wook K(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [72, 64, 291, 77], \"content\": \"Alec Radford, Jong Wook Ki(...TRUNCATED)
[]
[]
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabrie(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [198.94984436035156, 177.6389617919922, 811.6333618164062, 177.6389(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [71, 63, 292, 157], \"lines\": [{\"bbox\": [72(...TRUNCATED)
2302.06555
15
"models. Transactions on Machine Learning Re-\nsearch. Survey Certification.\n\nDaniel Williams. 201(...TRUNCATED)
"<p>models. Transactions on Machine Learning Re-\nsearch. Survey Certification.</p>\n<p>Daniel Willi(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [82, 64, 292, 90], \"content\": \"models. Transactions on Ma(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [82, 65, 291, 77], \"content\": \"models. Transactions on Ma(...TRUNCATED)
[]
"[{\"type\": \"inline\", \"coordinates\": [405, 713, 412, 723], \"content\": \"d\", \"caption\": \"\(...TRUNCATED)
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"text\", \"text\": \"\", \"page_idx\": 15}, {\"type\": \"text\", \"text\": \"Daniel Wi(...TRUNCATED)
"[{\"category_id\": 1, \"poly\": [196.83363342285156, 1242.7059326171875, 815.453125, 1242.705932617(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"text\", \"bbox\": [82, 64, 292, 90], \"lines\": [{\"bbox\": [82,(...TRUNCATED)
2302.06555
16
"Table 9: Evaluation of POS impact on $$\\mathrm{OPT}_{30\\mathrm{B}}$$ and\ndifferent CLIP models u(...TRUNCATED)
"<p>Table 9: Evaluation of POS impact on $$\\mathrm{OPT}_{30\\mathrm{B}}$$ and\ndifferent CLIP model(...TRUNCATED)
"[{\"type\": \"table\", \"coordinates\": [73, 62, 290, 130], \"content\": \"\", \"block_type\": \"ta(...TRUNCATED)
"[{\"type\": \"text\", \"coordinates\": [70, 278, 237, 291], \"content\": \"Table 9: Evaluation of P(...TRUNCATED)
"[{\"coordinates\": [315, 68, 515, 394], \"index\": 64.75, \"caption\": \"CLIP-VIT-L-14=CLIP-VIT-B-3(...TRUNCATED)
"[{\"type\": \"block\", \"coordinates\": [100, 359, 261, 394], \"content\": \"\", \"caption\": \"\"}(...TRUNCATED)
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"table\", \"img_path\": \"images/9a0ce02070608ec6599098646e6557628d297e7bb73a01bd6fc88(...TRUNCATED)
"[{\"category_id\": 5, \"poly\": [203.42465209960938, 172.62571716308594, 807.4441528320312, 172.625(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"table\", \"bbox\": [73, 62, 290, 130], \"blocks\": [{\"type\": \(...TRUNCATED)
2302.06555
17
"[{\"type\": \"image\", \"coordinates\": [73, 233, 503, 585], \"content\": \"\", \"block_type\": \"i(...TRUNCATED)
[]
"[{\"coordinates\": [73, 233, 503, 585], \"index\": 2.0, \"caption\": \"Figure 10: LMs converge towa(...TRUNCATED)
[]
[]
[595.2760009765625, 841.8900146484375]
"[{\"type\": \"image\", \"img_path\": \"images/9c55adebd9dead1bc3d270751220f510890c0064642d5816db5ba(...TRUNCATED)
"[{\"category_id\": 3, \"poly\": [204.68212890625, 650.011962890625, 1397.868896484375, 650.01196289(...TRUNCATED)
"{\"preproc_blocks\": [{\"type\": \"image\", \"bbox\": [73, 233, 503, 585], \"blocks\": [{\"type\": (...TRUNCATED)