Dataset preview: `audio` (audio, duration in seconds) | `label` (class label, 37 classes)
Dataset Card for Affordances and Speech Dataset
This dataset was created for the affordances and speech experiments published in:
Language bootstrapping: learning word meanings from perception-action association.
Giampiero Salvi, Luis Montesano, Alexandre Bernardino, José Santos-Victor
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) (Volume: 42, Issue: 3, June 2012)
DOI: 10.1109/TSMCB.2011.2172420
http://ieeexplore.ieee.org/document/6082460/
Open access version: https://arxiv.org/abs/1711.09714
The goal of the experiments was to explore learning methods that ground the meanings of spoken words in the perception-action associations learned by a robot through experimentation.
The code for the experiments can be found at https://github.com/giampierosalvi/AffordancesAndSpeech
The recordings started in 2007, were later updated on 2010-01-28, and were uploaded to Hugging Face on 2025-09-22.
Dataset Details
Dataset Description
The main `audio` directory contains the audio recordings (`.wav`) with the corresponding orthographic transcriptions (`.lab`). The recordings are divided into:
- `words`: isolated words
- `sentences`: full sentences
- `instructions`: instructions to the robot

The files are further divided into directories corresponding to recording sessions, named `{f|m}xx{langid}{s|w|i}yy`, where:
- `{f|m}`: indicates the speaker's gender
- `xx`: is the speaker ID
- `langid`: is a two-letter code indicating the speaker's mother tongue
- `{s|w|i}`: indicates whether the recording contains a sentence, word, or instruction
- `yy`: is the recording session ID

For example, `f03pts00` is a sentence-recording session (session 00) by a female speaker (ID 03) whose mother tongue is Portuguese.
Information about the recording sessions is stored in the file `SESINFO.txt`.
Besides the session name described above, each recording's file name also contains the experiment ID, which can be used to recover the affordance parameters stored in `affordance_data.csv`.
For example, `m03its17_213.wav` contains the recording corresponding to experiment number 213, made by a male speaker with Italian as mother tongue in recording session number 17. The corresponding affordance parameters are, in this case:
- Action: touch
- Color: green2
- Shape: box
- ObjVel: slow
- ObjHandVel: slow
- HandVel: slow
- Contact: long
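The naming scheme above is regular enough to parse mechanically. Below is a minimal sketch (not part of the dataset's own tooling) that splits a session directory or recording file name into its fields; it assumes speaker and session IDs are always two digits and that the experiment suffix, when present, is `_<number>`:

```python
import re

# Pattern for {f|m}xx{langid}{s|w|i}yy with an optional _experiment suffix.
NAME_RE = re.compile(
    r"^(?P<gender>[fm])"          # f = female, m = male
    r"(?P<speaker>\d{2})"         # speaker ID
    r"(?P<lang>[a-z]{2})"         # mother-tongue code, e.g. pt, it
    r"(?P<kind>[swi])"            # sentence, word, or instruction
    r"(?P<session>\d{2})"         # recording session ID
    r"(?:_(?P<experiment>\d+))?"  # experiment ID (recording files only)
    r"(?:\.wav|\.lab)?$"
)

def parse_name(name):
    """Return the fields of a session directory or recording file name."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognised name: {name}")
    return m.groupdict()

print(parse_name("m03its17_213.wav"))
# {'gender': 'm', 'speaker': '03', 'lang': 'it', 'kind': 's',
#  'session': '17', 'experiment': '213'}
```

Session directory names such as `f03pts00` parse the same way, with `experiment` set to `None`.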
The `affordance_data.csv` file contains the affordance parameters for every experiment carried out by the robot.
In total there are 254 experiments.
Each experiment is narrated by more than one speaker, so the number of spoken descriptions is a multiple of 254.
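With the experiment ID extracted from a file name, the corresponding row of `affordance_data.csv` can be looked up. A minimal sketch, assuming the file has a header row and that the experiment-ID column is named `Experiment` (both are assumptions; check the actual header of the file):

```python
import csv

def affordances_for(csv_path, experiment):
    """Return the affordance parameters for one experiment, or None."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # "Experiment" is an assumed column name; adjust it to the
            # actual header of affordance_data.csv.
            if row["Experiment"] == str(experiment):
                return row
    return None
```

For example, `affordances_for("affordance_data.csv", 213)` should return the touch/green2/box row shown above.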
The file `full.dic` contains a pronunciation dictionary for every word in the material, expressed in a modified version of the SAMPA code for British English.
The list of symbols, together with their meanings and the TIPA codes to typeset them in LaTeX, is given in the file `sampa_latex_en.pdf`; the dictionary uses the symbols in the column HTK.
- Curated by: Giampiero Salvi
- Funded by [optional]: FCT (Portugal), Vetenskapsrådet (Sweden)
- Shared by [optional]: Giampiero Salvi
- Language(s) (NLP): English
- License: cc-by-nc-sa-4.0
Dataset Sources
- Repository: https://github.com/giampierosalvi/AffordancesAndSpeech
- Paper: G. Salvi, L. Montesano, A. Bernardino and J. Santos-Victor, "Language Bootstrapping: Learning Word Meanings From Perception-Action Association," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 42, no. 3, 2012. DOI: 10.1109/TSMCB.2011.2172420
Uses
The dataset can be used to reproduce the human-robot interaction experiments described in the publication above, or to extend them by proposing new learning methods.
Direct Use
Out-of-Scope Use
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
Recordings took place on two occasions:
- Sessions 00-16 were recorded at IST, Lisbon, Portugal. Each speaker was free to use their own computer and headset in their office. This produced recordings of poor quality that were difficult to use at the time, but they may still be useful today.
- Sessions 17-21 were recorded at KTH, Stockholm, Sweden, in a soundproof room, using a TASCAM US-122 sound card and a high-quality balanced close microphone. These were the sessions used in the publication mentioned at the beginning.
Who are the source data producers?
Giampiero Salvi
Annotations [optional]
Giampiero Salvi
Annotation process
[More Information Needed]
Who are the annotators?
Giampiero Salvi
Personal and Sensitive Information
Speakers have been anonymised and no personal information is included in the data.
Bias, Risks, and Limitations
There is a slight over-representation of male speakers in the data.
Recommendations
Citation [optional]
Language bootstrapping: learning word meanings from perception-action association.
Giampiero Salvi, Luis Montesano, Alexandre Bernardino, José Santos-Victor
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) (Volume: 42, Issue: 3, June 2012)
DOI: 10.1109/TSMCB.2011.2172420
http://ieeexplore.ieee.org/document/6082460/
Open access version: https://arxiv.org/abs/1711.09714
BibTeX:
@ARTICLE{6082460,
author={Salvi, Giampiero and Montesano, Luis and Bernardino, Alexandre and Santos-Victor, José},
journal={IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)},
title={Language Bootstrapping: Learning Word Meanings From Perception–Action Association},
year={2012},
volume={42},
number={3},
pages={660-671},
keywords={Speech;Robot sensing systems;Speech recognition;Context;Computational modeling;Humans;Affordances;automatic speech recognition;Bayesian networks;cognitive robotics;grasping;humanoid robots;language;unsupervised learning},
doi={10.1109/TSMCB.2011.2172420}
}
APA:
G. Salvi, L. Montesano, A. Bernardino and J. Santos-Victor, "Language Bootstrapping: Learning Word Meanings From Perception–Action Association," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 3, pp. 660-671, June 2012, doi: 10.1109/TSMCB.2011.2172420.
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
Giampiero Salvi
Dataset Card Contact
Giampiero Salvi