Image Memorability Prediction with Vision Transformers

Thomas Hagen1,* and Thomas Espeseth1,2

1Department of Psychology, University of Oslo, Oslo, Norway
2Department of Psychology, Oslo New University College, Oslo, Norway
Behavioral studies have shown that the memorability of images is similar across groups of people, suggesting that memorability is a function of the intrinsic properties of images and is unrelated to people's individual experiences and traits. Deep learning networks can be trained on such properties and be used to predict memorability in new data sets. Convolutional neural networks (CNN) have pioneered image memorability prediction, but more recently developed vision transformer (ViT) models may have the potential to yield even better predictions. In this paper, we present ViTMem, a new memorability model based on ViT, and compare its memorability predictions with those of state-of-the-art CNN-derived models. Results showed that ViTMem performed equal to or better than state-of-the-art models on all data sets. Additional semantic-level analyses revealed that ViTMem is particularly sensitive to the semantic content that drives memorability in images. We conclude that ViTMem provides a new step forward, and propose that ViT-derived models can replace CNNs for computational prediction of image memorability. Researchers, educators, advertisers, visual designers and other interested parties can leverage the model to improve the memorability of their image material.

memorability | vision transformers | psychology | semantic information
Introduction

Everyone knows that our memories depend on the experiences we have had, facts we have encountered, and the abilities we have to remember them. Combinations of these factors differ between individuals and give rise to unique memories in each of us. However, a complementary perspective on memory focuses on the material that is (to be) remembered rather than the individual that does the remembering. In one central study, Isola et al. (1) presented more than 2000 scene images in a continuous repeat-detection task. The participants were asked to respond whenever they saw an identical repeat. The results revealed that the memorability score (percent correct detections) varied considerably between images. Most importantly, by running a consistency analysis in which Spearman's rank correlation was calculated on the memorability scores from random splits of the participant group, Isola and colleagues (1) were able to show that the memorability score ranking was consistent across participants – some images were memorable and some were forgettable. These results indicate that the degree to which an image was correctly detected depended on properties intrinsic to the image itself, not the traits of the observers. This is important because it shows that one can use the memorability scores in a stimulus set to predict memory performance in a new group of participants.
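For illustration, a split-half consistency analysis of this kind can be sketched as follows (a minimal sketch assuming a binary participants-by-images detection matrix; the variable names and simulated data are ours, not from (1)):

import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(hits, n_splits=25, seed=0):
    # hits: (n_participants, n_images) array of 0/1 correct repeat detections.
    # Memorability score per image = percent correct detections in each half.
    rng = np.random.default_rng(seed)
    n_participants = hits.shape[0]
    rhos = []
    for _ in range(n_splits):
        order = rng.permutation(n_participants)
        half_a, half_b = order[:n_participants // 2], order[n_participants // 2:]
        scores_a = hits[half_a].mean(axis=0)
        scores_b = hits[half_b].mean(axis=0)
        rhos.append(spearmanr(scores_a, scores_b).correlation)
    return float(np.mean(rhos))

# Simulated example: 100 participants, 200 images with varying detectability.
simulated = (np.random.rand(100, 200) < np.linspace(0.4, 0.9, 200)).astype(int)
print(split_half_consistency(simulated))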
These results have been replicated and extended in a number of studies, revealing that similar findings are obtained with different memory tasks (2), different retention times (1, 2), different contexts (3), and independent of whether encoding is intentional or incidental (4). However, although image memorability has proven to be a robust and reliable phenomenon, it has not been straightforward to pinpoint the image properties that drive it. What seems clear, though, is that memorability is multifaceted (5, 6). One way to characterize the underpinnings of memorability is to investigate the contribution from processes at different levels of the visual processing stream. For example, at the earliest stages of processing of a visual scene, visual attributes such as local contrast, orientation, and color are coded. At an intermediate level, contours are integrated, surfaces, shapes, and depth cues are segmented, and foreground and background are distinguished. At a higher level, object recognition is conducted through matching with templates stored in long-term memory.
Positive correlations of memorability with the brightness and high contrast of objects have been found (7), but in general, low-level visual factors such as color, contrast, and spatial frequency do not predict memorability well (5, 8, 9). This is consistent with results showing that perceptual features are typically not retained in long-term visual memory (10). In contrast to the low-level features, the evidence for a relation between intermediate- to high-level semantic features and memorability is much stronger. For example, images that contain people, faces, body parts, animals, and food are often associated with high memorability, whereas the opposite is a typical finding for objects like buildings and furniture and for images of landscapes and parks (3, 7, 11, 12). Other intermediate- to high-level features such as object interaction with the context or other objects, saliency factors, and image composition also contribute to memorability (5). Furthermore, although memorability is not reducible to high-level features such as aesthetics (1, 12), interestingness (1, 13), or popularity (12), emotions, particularly of negative valence, seem to predict higher memorability (9, 12). Finally, memorability seems to be relatively independent of cognitive control, attention, or priming (14).
Overall, the available evidence indicates that memorability seems to capture intermediate- to high-level properties of semantics, such as objects or actions, and image composition, such as layout and clutter, rather than low-level features (5, 15).
This fits well with the central role of semantic categories in organizing cognition and memory (16). Generally, the priority of semantic-level information enables us to quickly understand novel scenes and predict future events (17). For example, when inspecting a novel scene or an image, we do not primarily focus on low-level perceptual features or pixels, but prioritize more abstract visual schemas involving spatial regions, objects, and the relations between them (18). Also, when people are asked to indicate which regions of an image help them recognize it, there is high consistency between people's responses (18). Similarly, fixation map data from eye tracking have shown that there is a positive correlation between fixation map consistency and scene memorability, and this relation is associated with the presence of meaningful objects (3, 7, 19). Bylinskii et al. (5) suggest that these properties most efficiently signal information of high utility to our species, for example, emotions, social aspects, animate objects (e.g., faces, gestures, interactions), unexpected events, and tangible objects.
Memorability prediction. The finding that the memorability of an image is governed by properties intrinsic to the image itself not only implies that one can predict memory performance in a new set of participants, as described above, but also that one can predict the memorability of a novel set of images (i.e., memorability is an "image computable" feature). Given the availability of computational algorithms and high-quality training sets of sufficient size, one can predict memorability in novel sets of images for future (or already conducted) behavioral or neuroimaging studies. Such memorability prediction could also be valuable in a number of applied settings (e.g., within education, marketing and human-computer interaction).
Memorability researchers have employed computer vision models such as convolutional neural networks (CNNs) from early on (12), and advancements in the field have allowed researchers to predict image memorability with increasing precision (20–22). The inductive bias (the assumptions of the learning algorithm used to generalize to unseen data) of CNNs is inspired by knowledge about the primate visual system, and activations in the networks' layers have, with some success, been used to explain neural activations (23). However, some vulnerabilities of CNNs have been noted. For example, CNNs appear to depend more on image texture than biological vision systems do (24), and have problems recognizing images based on the shape of objects (e.g., when texture is suppressed or removed). However, this vulnerability is reduced when the model's shape bias is increased through training on shape representations (25).
The LaMem train/test splits are a well-established benchmark for memorability prediction (12). The original MemNet (12), which is based on AlexNet (26), achieved a Spearman rank correlation of 0.64 on this benchmark. There have been several improvements on this benchmark, and the leading approaches utilize image captioning to enhance memorability predictions. That is, an image captioning model produces a textual description of the image, which provides additional high-level semantic information; this description is embedded into a semantic vector space before being combined with CNN image features in a multi-layer perceptron network. Squalli-Houssaini et al. (21) used this approach to reach a Spearman correlation of 0.72, with a mean squared error (MSE) of approximately 0.0092 (22). Leonardi et al. (22) used the captioning approach with dual ResNet50s and a soft attention mechanism to reach a rank correlation of 0.687 with an MSE of 0.0079.
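As an illustration of this family of caption-fusion approaches (a schematic sketch, not the exact architecture of (21) or (22); the feature dimensions and layer sizes are assumptions), such a model could look as follows:

import torch
import torch.nn as nn

class CaptionFusionMemorabilityNet(nn.Module):
    # Image features (e.g., pooled CNN activations) and caption embeddings
    # (e.g., sentence embeddings of a generated description) are projected
    # into a shared space, concatenated, and regressed onto memorability.
    def __init__(self, img_feat_dim=2048, text_feat_dim=768, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
        self.txt_proj = nn.Linear(text_feat_dim, hidden_dim)
        self.regressor = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # memorability scores lie in [0, 1]
        )

    def forward(self, img_feats, caption_embeds):
        fused = torch.cat([self.img_proj(img_feats), self.txt_proj(caption_embeds)], dim=1)
        return self.regressor(fused)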
The ResMem model (20), which is based on a CNN residual neural network architecture (ResNet), uses LaMem, but also takes advantage of a more recently published dataset named MemCat (11). This is a data set containing 10,000 images based on the categories animals, food, landscape, sports and vehicles. This data set also has a higher split-half correlation than LaMem. Needell and Bainbridge (20) argue that the LaMem dataset on its own is lacking in generalizability due to poor sampling of naturalistic images; that is, the images are intended more as artistic renderings designed to attract an online audience. Hence, combining MemCat with LaMem should potentially yield a more generalizable model. Moreover, the increased size of the combined dataset might help drive model performance further than previous models based on LaMem. The authors of ResMem also noted the importance of semantic information and structured their approach to utilize semantic representations from a ResNet model in order to improve predictions. An added benefit of ResMem is that it is shared on the Python Package Index, which makes it easily accessible to researchers in diverse fields.
Vision transformers. Vision transformers (ViT) have recently been shown to provide similar or better performance than CNNs in a variety of computer vision tasks (27). This architecture was first introduced in the natural language processing field (28) for capturing long-range dependencies in text, and it offers a superior speed/performance balance relative to ResNet architectures (29). Moreover, ViTs have been shown to produce errors that are more similar to human errors (30), suggesting that they could take similar information into account (see also (31)). A reason for this may be that ViTs are likely to take more of the global context into account and to be more dependent on the shape of objects rather than their texture (30). While it is not entirely clear why such properties should yield better predictions of image memorability, they could still help inform the discourse on which visual characteristics are relevant, as well as potentially yield a better model for predicting image memorability. Hence, we set out to investigate whether vision transformers can yield better predictions of memorability than the state-of-the-art in image memorability prediction. In particular, we aimed to (i) benchmark a model based on ViT against the well-established LaMem train/test splits (12), (ii) train a ViT on the combined LaMem and MemCat data sets (20) to benchmark against the ResMem model (20), (iii) train a final ViT model on a more diverse and deduplicated data set, (iv) validate the final ViT model against additional independent data sets, and (v) inspect semantic-level distributions of memorability scores for behavioral and predicted data.
Methods

As our model is based on ViT to predict memorability, we named it ViTMem. Because it has been shown that low-level visual features are less important for image memorability prediction, it seemed appropriate to use image augmentations in training our ViTMem model to reduce overfitting. This approach has also been used by others (22), although not to the extent done here. The augmentations used consisted of horizontal flipping, sharpen, blur, motion blur, random contrast, hue saturation value, CLAHE, shift scale rotate, perspective, optical distortion and grid distortion (32). For training all models we used PyTorch, the ADAM optimizer, and mean squared error (squared L2 norm) as the loss function. Images were input as batches of 32 in RGB and resized to 256x256 pixels before applying augmentations with a probability of 0.7 and center cropping to 224x224 pixels. For creating ViTMem we used transfer learning on a vision transformer (27) model pretrained on ImageNet-1k (vit_base_patch16_224_miil) (33). The final classification layer was reduced to a single output with a sigmoid activation function.
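For illustration, the core of this training setup can be sketched as follows, assuming the timm and albumentations libraries; the learning rate and the data-loading code are illustrative assumptions rather than the exact configuration used:

import albumentations as A
from albumentations.pytorch import ToTensorV2
import timm
import torch
import torch.nn as nn

# Augmentation pipeline approximating the one listed above; the inner block is
# applied with probability 0.7 after resizing, followed by center cropping.
# A dataset class would call transform(image=numpy_image)["image"].
transform = A.Compose([
    A.Resize(256, 256),
    A.Compose([
        A.HorizontalFlip(), A.Sharpen(), A.Blur(), A.MotionBlur(),
        A.RandomBrightnessContrast(), A.HueSaturationValue(), A.CLAHE(),
        A.ShiftScaleRotate(), A.Perspective(), A.OpticalDistortion(),
        A.GridDistortion(),
    ], p=0.7),
    A.CenterCrop(224, 224),
    A.Normalize(),
    ToTensorV2(),
])

# ViT backbone pretrained on ImageNet-1k, head replaced by a single output.
model = timm.create_model("vit_base_patch16_224_miil", pretrained=True, num_classes=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # learning rate is an assumption
criterion = nn.MSELoss()

def train_one_epoch(model, loader, device="cuda"):
    # loader yields batches of 32 RGB image tensors and memorability targets in [0, 1].
    model.to(device).train()
    for images, scores in loader:
        optimizer.zero_grad()
        preds = torch.sigmoid(model(images.to(device))).squeeze(1)
        loss = criterion(preds, scores.to(device).float())
        loss.backward()
        optimizer.step()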
As we aim to provide an accessible model to the research community, it is also necessary to compare against the publicly available ResMem model. Unfortunately, the authors of ResMem did not publish their held-out test set, hence it is difficult to make a balanced comparison between the currently published ResMem model and any competing models. We propose 10 train/test splits that can be used by future researchers (available at https://github.com/brainpriority/vitmem_data). Moreover, ResMem was not benchmarked on LaMem, hence a fair comparison can only be made on the combined LaMem and MemCat data set.
For the semantic-level analysis, we chose to use image captioning (34), as this provides an efficient method for deriving semantic properties from images at scale. Importantly, as the image captioning model was trained on human image descriptions, it is likely to extract content that humans find meaningful in images, and in particular objects and contexts that are relevant for conveying such meanings. Hence, nouns derived from such descriptions are likely to be representative of the content that conveys meaning to humans observing the images.
Data Sources. For the large-scale image memorability (LaMem) benchmark we used the LaMem dataset (12). The image set used by ResMem is a combination of the image sets LaMem (12) and MemCat (11). LaMem contains 58,741 images and MemCat 10,000 images, for a total of 68,741 images. ResMem is reported to have used a held-out test set with 5000 images, hence we randomly selected 5000 images as our test set for our 10 train/test splits for this combined data set. For our final model we aimed to clean up the data and combine more of the available data sets on image memorability. As the number of duplicated images within and between data sets is unknown, and duplicated images may interfere with performance measures, we aimed to deduplicate the data for this model. Duplicated images were identified by deriving embeddings from an off-the-shelf CNN model and then visually inspecting the most similar embeddings. Our analysis of the data sets LaMem and MemCat showed that LaMem has 229 duplicated images while MemCat has 4. Moreover, 295 of the images in LaMem are also in MemCat. We aimed to build a larger and more diverse data set by combining more sources, and for this we chose CVPR2011 (9) and FIGRIM (3). CVPR2011 had 6 internal duplicates, 651 duplicates against LaMem, 78 against MemCat and 9 against FIGRIM. FIGRIM had 20 duplicates against MemCat and 70 against LaMem. All identified duplicates were removed before merging the data sets. As the images from FIGRIM and CVPR2011 were cropped, we obtained the original images before including them in the data set. This resulted in a data set with 71,658 images. For this data set we performed a 10% split for the test set.
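For illustration, this type of duplicate screening can be sketched as follows, assuming a torchvision ResNet50 as the off-the-shelf embedding model; the backbone choice and similarity threshold are assumptions, and flagged pairs were verified visually rather than removed automatically:

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Off-the-shelf CNN used purely as a feature extractor (classification head removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return F.normalize(backbone(batch), dim=1)  # unit-norm embeddings

def candidate_duplicates(paths, threshold=0.95):
    # Return image pairs whose embedding cosine similarity exceeds the threshold.
    emb = embed(paths)
    sims = emb @ emb.T
    return [(paths[i], paths[j], float(sims[i, j]))
            for i in range(len(paths)) for j in range(i + 1, len(paths))
            if sims[i, j] > threshold]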
Results

Results on LaMem data set. On the LaMem data set the ViTMem model reached an average Spearman rank correlation of 0.711 and an MSE of 0.0076 (see Table 1). Here we compare our performance to measures obtained by MemNet (12), Squalli-Houssaini et al. (21) and Leonardi et al. (22).
Table 1. Comparison of model performance on the LaMem data set

Model                       MSE Loss ↓    Spearman ρ ↑
MemNet                      Unknown       0.640
Squalli-Houssaini et al.    0.0092        0.720
Leonardi et al.             0.0079        0.687
ViTMem                      0.0076        0.711
Results on the combined LaMem and MemCat data set. Training on 10 train/test splits on the combined data set, the results showed that ViTMem performed better than the ResMem model (see Table 2). The average across splits showed a Spearman rank correlation of 0.77 and an MSE of 0.005.
Table 2. Model performance on the combined LaMem and MemCat data set

Model     MSE Loss ↓    Spearman ρ ↑
ResMem    0.009         0.67
ViTMem    0.005         0.77
Results on combined and cleaned data set. To assess model performance on the larger and cleaned data set, we made a train/test split and then performed repeated k-fold cross-validation with 10 train/test splits on the training set. This resulted in a mean MSE loss of 0.006 and a mean Spearman rank correlation of 0.76 (see Table 3). In order to provide a model for the community we used the full data set to train the final model (ViTMem Final Model), which is published on the Python Package Index as version 1.0.0.
This was trained on the full training set and tested on its corresponding test set. The results showed a Spearman rank correlation of 0.77 and an MSE of 0.006 (see Table 3). The train/test splits are available on GitHub.
Table 3. Model performance on the combined and cleaned data set

Model                 MSE Loss ↓    Spearman ρ ↑
ViTMem                0.006         0.76
ViTMem Final Model    0.006         0.77
Validation on independent data sets. To further validate our model, we used memorability scores from an independent data set by Dubey and colleagues named PASCAL-S (7, 35), consisting of 852 images and cropped objects from the same images. ViTMem achieved a Spearman correlation of 0.44 on the images and 0.21 on the objects. In comparison, ResMem achieved a correlation of 0.36 on the images and 0.14 on the objects. Validating against the THINGS data set (15), which consists of 26,106 images with memorability scores, yielded a Spearman rank correlation of 0.30 for ViTMem and 0.22 for ResMem.
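Such a validation can be reproduced with a few lines of code using the published vitmem package; the CSV file and its columns below are hypothetical placeholders for an independent data set with behavioral scores:

import pandas as pd
from scipy.stats import spearmanr
from vitmem import ViTMem

model = ViTMem()

# Hypothetical file with columns "path" (image file) and "score" (behavioral memorability).
data = pd.read_csv("independent_memorability_scores.csv")
predictions = [model(path) for path in data["path"]]

rho, p_value = spearmanr(predictions, data["score"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")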
Semantic level analysis. In order to better understand how the model predictions relate to the semantic content of the images, we performed image captioning (34) on the combined LaMem and MemCat data set and on the Places205 data set (36). We extracted nouns from the resulting image descriptions and averaged behavioral or predicted memorability scores for each noun (37). That is, the memorability for each image was assigned to each noun derived from the image captioning procedure. For the combined LaMem and MemCat data set we averaged behavioral memorability scores over nouns (see Figure 1), while for the Places205 data set we averaged predicted memorability scores from the ViTMem model (see Figure 2). A general interpretation of the visualizations in Figures 1 and 2 is that they appear to reveal a dimension running from nouns usually observed outdoors to more indoor-related nouns, and ending with nouns related to animals and, in particular, humans. This would appear to reflect the distributions observed in previous work (9, 15), and hence helps to validate the model in terms of the image content it is sensitive to. To further investigate how similar the memorability associated with nouns was across the models, we selected nouns occurring more frequently than the 85th percentile in each set (654 nouns for LaMem and MemCat, 2179 nouns for Places205), which resulted in 633 matched nouns across sets. Analysis of these showed a Spearman rank correlation of 0.89 and an R2 of 0.79, p<0.001 (see Figure 3). This analysis indicates that nouns from image captioning are a strong predictor of image memorability and that the ViTMem model is able to generalize the importance of such aspects from the training set to a new set of images.
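For illustration, the noun-averaging step can be sketched as follows, assuming captions have already been generated and using TextBlob (37) for noun extraction; the example data frame is hypothetical:

import pandas as pd
from textblob import TextBlob  # POS tagging requires `python -m textblob.download_corpora`

# Hypothetical input: one row per image with its generated caption and its
# behavioral (or ViTMem-predicted) memorability score.
df = pd.DataFrame({
    "caption": ["a man holding a guitar on a stage", "a view of mountains under clouds"],
    "memorability": [0.88, 0.61],
})

rows = []
for caption, score in zip(df["caption"], df["memorability"]):
    # Assign the image's memorability score to every noun in its caption.
    for word, tag in TextBlob(caption).tags:
        if tag.startswith("NN"):
            rows.append({"noun": word.lower(), "memorability": score})

noun_scores = pd.DataFrame(rows).groupby("noun")["memorability"].mean().sort_values()
print(noun_scores)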
Discussion

Using vision transformers, we have improved on the state-of-the-art in image memorability prediction. Results showed that ViTMem performed equal to or better than state-of-the-art models on LaMem, and better than ResMem on the LaMem and MemCat hybrid data set. In addition, we assembled a new deduplicated hybrid data set and benchmarked the ViTMem model against this before training a final model. The model was further validated on additional data sets, and performed better than ResMem on these as well. Finally, we ran a semantic-level analysis by using image captioning on the hybrid data set. We ranked the behavioral memorability scores of the images, labeled with nouns extracted from the captioning procedure. The results revealed that images labeled by nouns related to landscapes, cities, buildings and the like were ranked lowest, whereas images labeled by nouns related to animate objects and food were ranked highest. This finding is consistent with known category effects on memorability (3, 7, 11, 12, 15) and suggests that the labels extracted from the captioning procedure are strongly related to factors that drive memorability for those images. Subsequently, we predicted memorability scores for images from a novel data set (Places205), ran the image captioning procedure, and ranked the predicted memorability scores of the images, labeled with nouns extracted from the captioning procedure. Visual inspection of the results revealed that the ranks were similar across samples and methods. This impression was confirmed by a strong correlation between matching pairs of nouns and 79% explained variance, suggesting that ViTMem captures the semantic content that drives memorability in images.
The use of image augmentations in training the ViTMem model, in combination with its state-of-the-art performance, suggests that such augmentations do not disrupt the ability of the model to predict image memorability, and hence may further support the importance of semantic-level properties in image memorability. That is, the augmentations modify a range of low-level image properties but mostly leave the semantic content intact.
In comparison with ResMem, which relies on a CNN-based residual neural network architecture, ViTMem is based on vision transformers, which integrate information in a more global manner (30). As images are compositions of several semantically identifiable objects or parts of objects, a more holistic approach may be more apt at delineating the relative relevance of objects given their context. That is, we speculate that a broader integration of image features allows for a more complete evaluation of an image's constituent features in relation to each other. Hence, if semantic content is important for predicting image memorability, the model may have weighed the importance of semantic components in relation to each other to a larger degree than models based on CNNs.
ViTMem code and train/test sets are shared on GitHub (https://github.com/brainpriority/), and a python package named vitmem is available on the Python Package Index (see Supplementary Note 1 for a tutorial). Researchers and interested parties can use the model to predict memorability in existing or novel stimuli and employ it in research or applied settings.
Fig. 1. Average behavioral image memorability scores for nouns that were extracted from images in the LaMem and MemCat data sets. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).
Fig. 2. Average ViTMem predicted image memorability scores for nouns that were extracted from images in the Places205 data set. The nouns shown are those that occurred most frequently or that are more frequent in the English language (38).
Fig. 3. Average memorability scores for images with matching nouns in different data sets. The y-axis (Memorability for Places205 Nouns, ViTMem) shows average predicted memorability scores from ViTMem on the Places205 data set. The x-axis (Memorability for LaMem & MemCat Nouns, Behavioral) shows average behavioral memorability scores on the combined LaMem and MemCat data set.
The ViTMem model will allow researchers to more precisely predict image memorability. The release of ViTMem follows up ResMem in providing an accessible method for predicting image memorability. This is important for studies aiming to control for how easily an image can be remembered. It will, for example, allow experimental psychologists and neuroscientists to better control their research. Similarly, educators, advertisers and visual designers can leverage the model to improve the memorability of their content.
Despite state-of-the-art performance in memorability prediction, improvements may still be possible to achieve. Previous works have shown benefits of pretraining their networks on data sets of places and objects prior to fine-tuning for memorability prediction (39). Moreover, ViTMem does not take image captioning into account, which has been done successfully with CNNs (21, 22). Hence, there is potentially more to be gained from incorporating image semantics and/or pretraining on data sets of objects and places. In addition, ViTMem is only based on the "base" configuration of the available ViT models. Model performance may still increase by adopting the "large" or "huge" configurations of the model.
We conclude that ViTMem can be used to predict memorability for images at a level that is equal to or better than state-of-the-art models, and we propose that vision transformers provide a new step forward in the computational prediction of image memorability.
References

1. Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva. What makes a photograph memorable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1469–1482, 2013.
2. Lore Goetschalckx, Pieter Moors, and Johan Wagemans. Image memorability across longer time intervals. Memory, 26(5):581–588, 2018.
3. Zoya Bylinskii, Phillip Isola, Constance Bainbridge, Antonio Torralba, and Aude Oliva. Intrinsic and extrinsic effects on image memorability. Vision Research, 116:165–178, 2015.
4. Lore Goetschalckx, Jade Moors, and Johan Wagemans. Incidental image memorability. Memory, 27(9):1273–1282, 2019.
5. Zoya Bylinskii, Lore Goetschalckx, Anelise Newman, and Aude Oliva. Memorability: An image-computable measure of information utility. In Human Perception of Visual Information, pages 207–239. Springer, 2022.
6. Nicole C Rust and Vahid Mehrpour. Understanding image memorability. Trends in Cognitive Sciences, 24(7):557–568, 2020.
7. Rachit Dubey, Joshua Peterson, Aditya Khosla, Ming-Hsuan Yang, and Bernard Ghanem. What makes an object memorable? In Proceedings of the IEEE International Conference on Computer Vision, pages 1089–1097, 2015.
8. Wilma A Bainbridge, Daniel D Dilks, and Aude Oliva. Memorability: A stimulus-driven perceptual neural signature distinctive from memory. NeuroImage, 149:141–152, 2017.
9. Phillip Isola, Devi Parikh, Antonio Torralba, and Aude Oliva. Understanding the intrinsic memorability of images. Advances in Neural Information Processing Systems, 24, 2011.
10. Timothy F Brady, Talia Konkle, and George A Alvarez. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5):4–4, 2011.
11. Lore Goetschalckx and Johan Wagemans. MemCat: a new category-based image set quantified on memorability. PeerJ, 7:e8169, 2019.
12. Aditya Khosla, Akhil S. Raju, Antonio Torralba, and Aude Oliva. Understanding and predicting image memorability at a large scale. In International Conference on Computer Vision (ICCV), 2015.
13. Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, and Luc Van Gool. The interestingness of images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1633–1640, 2013.
14. Wilma A Bainbridge. The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia, 141:107408, 2020.
15. Max A. Kramer, Martin N. Hebart, Chris I. Baker, and Wilma A. Bainbridge. The features underlying the memorability of objects. bioRxiv, 2022. doi: 10.1101/2022.04.29.490104.
16. Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8(3):382–439, 1976.
17. Douglas L Medin and John D Coley. Concepts and categorization. Perception and Cognition at Century's End: Handbook of Perception and Cognition, pages 403–439, 1998.
18. Erdem Akagunduz, Adrian G Bors, and Karla K Evans. Defining image memorability using the visual memory schema. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2165–2178, 2019.
19. Muxuan Lyu, Kyoung Whan Choe, Omid Kardan, Hiroki P Kotabe, John M Henderson, and Marc G Berman. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. Journal of Vision, 20(9):2–2, 2020.
20. Coen D Needell and Wilma A Bainbridge. Embracing new techniques in deep learning for estimating image memorability. Computational Brain & Behavior, pages 1–17, 2022.
21. Hammad Squalli-Houssaini, Ngoc QK Duong, Marquant Gwenaëlle, and Claire-Hélène Demarty. Deep learning for predicting image memorability. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2371–2375. IEEE, 2018.
22. Marco Leonardi, Luigi Celona, Paolo Napoletano, Simone Bianco, Raimondo Schettini, Franco Manessi, and Alessandro Rozza. Image memorability using diverse visual features and soft attention. In International Conference on Image Analysis and Processing, pages 171–180. Springer, 2019.
23. Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
24. Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J Kellman. Local features and global shape information in object classification by deep convolutional neural networks. Vision Research, 172:46–61, 2020.
25. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.
26. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
27. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
28. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
29. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
30. Shikhar Tuli, Ishita Dasgupta, Erin Grant, and Thomas L Griffiths. Are convolutional neural networks or transformers more like human vision? arXiv preprint arXiv:2105.07197, 2021.
31. Nicholas Baker and James H Elder. Deep learning models fail to capture the configural nature of human shape perception. iScience, 25(9):104913, 2022.
32. Alexander Buslaev, Vladimir I Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin, and Alexandr A Kalinin. Albumentations: fast and flexible image augmentations. Information, 11(2):125, 2020.
33. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. ImageNet-21K pretraining for the masses. arXiv preprint arXiv:2104.10972, 2021.
34. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. arXiv preprint arXiv:2202.03052, 2022.
35. Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille. The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 280–287, 2014.
36. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using Places database. Advances in Neural Information Processing Systems, 27, 2014.
37. Steven Loria et al. textblob v0.17.1, October 2021.
38. Robyn Speer. rspeer/wordfreq: v3.0, September 2022.
39. Shay Perera, Ayellet Tal, and Lihi Zelnik-Manor. Is image memorability prediction solved? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
Supplementary Note 1: How to use the vitmem python package

Python needs to be installed on a computer before pip can be used to install the vitmem package. To install vitmem from a command prompt, run:

pip install vitmem

To predict image memorability for an image named "image.jpg", run the following in a python interpreter:

from vitmem import ViTMem
model = ViTMem()
memorability = model("image.jpg")
print(f"Predicted memorability: {memorability}")