Add files using upload-large-folder tool
-9FQT4oBgHgl3EQfKjXJ/content/tmp_files/2301.13261v1.pdf.txt (added)
Published as a conference paper at ICLR 2023

EMERGENCE OF MAPS IN THE MEMORIES OF BLIND NAVIGATION AGENTS

Erik Wijmans1,2∗ Manolis Savva2,3 Irfan Essa1,4 Stefan Lee5 Ari S. Morcos2 Dhruv Batra1,2
1Georgia Institute of Technology 2FAIR, Meta AI 3Simon Fraser University 4Google Research Atlanta 5Oregon State University

ABSTRACT
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment. We ask if machines – specifically, artificial intelligence (AI) navigation agents – also build implicit (or 'mental') maps. A positive answer to this question would (a) explain the surprising phenomenon in recent literature of ostensibly map-free neural-networks achieving strong performance, and (b) strengthen the evidence of mapping as a fundamental mechanism for navigation by intelligent embodied agents, whether they be biological or artificial. Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms. Specifically, we train 'blind' agents – with sensing limited to only egomotion and no other sensing of any kind – to perform PointGoal navigation ('go to ∆x, ∆y') via reinforcement learning. Our agents are composed of navigation-agnostic components (fully-connected and recurrent neural networks), and our experimental setup provides no inductive bias towards mapping. Despite these harsh conditions, we find that blind agents are (1) surprisingly effective navigators in new environments (∼95% success); (2) they utilize memory over long horizons (remembering ∼1,000 steps of past experience in an episode); (3) this memory enables them to exhibit intelligent behavior (following walls, detecting collisions, taking shortcuts); (4) there is emergence of maps and collision-detection neurons in the representations of the environment built by a blind agent as it navigates; and (5) the emergent maps are selective and task dependent (e.g. the agent 'forgets' exploratory detours). Overall, this paper presents no new techniques for the AI audience, but a surprising finding, an insight, and an explanation.
1 INTRODUCTION
Decades of research into intelligent animal navigation posits that organisms build and maintain internal spatial representations (or maps)1 of their environment that enable the organism to determine and follow task-appropriate paths (Tolman, 1948; O'keefe & Nadel, 1978; Epstein et al., 2017). Hamsters, wolves, chimpanzees, and bats leverage prior exploration to determine and follow shortcuts they may never have taken before (Chapuis & Scardigli, 1993; Peters, 1976; Menzel, 1973; Toledo et al., 2020; Harten et al., 2020). Even blind mole rats and animals rendered situationally-blind in dark environments demonstrate shortcut behaviors (Avni et al., 2008; Kimchi et al., 2004; Maaswinkel & Whishaw, 1999). Ants forage for food along meandering paths but take near-optimal return trips (Müller & Wehner, 1988), though there is some controversy about whether insects like ants and bees are capable of forming maps (Cruse & Wehner, 2011; Cheung et al., 2014).

Analogously, mapping and localization techniques have long played a central role in enabling non-biological navigation agents (or robots) to exhibit intelligent behavior (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990). More recently, the machine learning community has produced a surprising phenomenon – neural-network models for navigation that curiously do not contain any explicit mapping modules but still achieve remarkably high performance (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). For instance, Wijmans et al. (2020) showed that a simple 'pixels-to-actions' architecture (using a CNN and RNN) can navigate to a given point in a novel environment with near-perfect accuracy; Partsey et al. (2022) further generalized this result to more realistic sensors and actuators. Reed et al. (2022) showed a similar general purpose architecture (a transformer) can perform a wide variety of embodied tasks, including navigation. The mechanisms explaining this ability remain unknown. Understanding them is both of scientific and practical importance due to safety considerations involved with deploying such systems.

In this work, we investigate the following question – is mapping an emergent phenomenon? Specifically, do artificial intelligence (AI) agents learn to build internal spatial representations (or 'mental' maps) of their environment as a natural consequence of learning to navigate?

∗Correspondence to [email protected].
1Throughout this work, we use 'maps' to refer to a spatial representation of the environment that enables intelligent navigation behavior like taking shortcuts. We provide a detailed discussion and contrast w.r.t. a 'cognitive map' as defined by O'keefe & Nadel (1978) in Apx. B.1.

arXiv:2301.13261v1 [cs.AI] 30 Jan 2023
The specific task we study is PointGoal navigation (Anderson et al., 2018), where an AI agent is introduced into a new (unexplored) environment and tasked with navigating to a relative location – 'go 5m north, 2m west relative to start'2. This is analogous to the direction and distance of foraging locations communicated by the waggle dance of honey bees (Von Frisch, 1967).

Unlike animal navigation studies, experiments with AI agents allow us to precisely isolate mapping from alternative mechanisms proposed for animal navigation – the use of visual landmarks (Von Frisch, 1967), orientation by the arrangement of stars (Lockley, 1967), gradients of olfaction or other senses (Ioalè et al., 1990). We achieve this isolation by judiciously designing the agent's perceptual system and the learning paradigm such that these alternative mechanisms are rendered implausible. Our agents are effectively 'blind'; they possess a minimal perceptual system capable of sensing only egomotion, i.e. change in the agent's location and orientation as it moves – no vision, no audio, no olfactory, no haptic, no magnetic, or any other sensing of any kind. This perceptual system is deliberately impoverished to isolate the contribution of memory, and is inspired by blind mole rats, who perform localization via path integration and use the Earth's magnetic field as a compass (Kimchi et al., 2004). Further still, our agents are composed of navigation-agnostic, generic, and ubiquitous architectural components (fully-connected layers and LSTM-based recurrent neural networks), and our experimental setup provides no inductive bias towards mapping – no map-like or spatial structural components in the agent, no mapping supervision, no auxiliary tasks, nothing other than a reward for making progress towards a goal.

Surprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent's non-spatial unstructured memory, enabling it to not only successfully navigate to the goal but also exhibit intelligent behavior (like taking shortcuts, following walls, detecting collisions) similar to aforementioned animal studies, and predict free-space in the environment. Essentially, we demonstrate an 'existence proof' or an ontogenetic developmental account for the emergence of mapping without any previous predisposition. Our results also explain the aforementioned surprising finding in recent literature – that ostensibly map-free neural networks achieve strong autonomous navigation performance – by demonstrating that these 'map-free' systems in fact learn to construct and maintain map-like representations of their environment.

Concretely, we ask and answer the following questions:
1) Is it possible to effectively navigate with just egomotion sensing? Yes. We find that our 'blind' agents are highly effective in navigating new environments – reaching the goal with 95.1%±1.3% success rate. And they traverse moderately efficient (though far from optimal) paths, reaching 62.9%±1.6% of optimal path efficiency. We stress that these are novel testing environments; the agent has not memorized paths within a training environment but has learned efficient navigation strategies that generalize to novel environments, such as emergent wall-following behavior.

2) What mechanism explains this strong performance by 'blind' agents? Memory. We find that memoryless agents completely fail at this task, achieving nearly 0% success. More importantly, we find that agents with memory utilize information stored over a long temporal and spatial horizon and that collision-detection neurons emerge within this memory. Navigation performance as a function of the number of past actions/observations encoded in the agent's memory does not saturate till one thousand steps (corresponding to the agent traversing 89.1±0.66 meters), suggesting that the agent 'remembers' a long history of the episode.

2The description in English is purely for explanatory purposes; the agent receives relative goal coordinates.
3) What information does the memory encode about the environment? Implicit maps. We perform an AI rendition of Menzel (1973)'s experiments, where a chimpanzee is carried by a human and shown the location of food hidden in the environment. When the animal is set free to collect the food, it does not retrace the demonstrator's steps but takes shortcuts to collect the food faster. Analogously, we train a blind agent to navigate from a source location (S) to a target location (T). After it has finished navigating, we transplant its constructed episodic memory into a second 'probe'-agent (which is also blind). We find that this implanted-memory probe-agent performs dramatically better in navigating from S to T (and T to S) than it would without the memory transplant. Similar to the chimpanzee, the probe agent takes shortcuts, typically cutting out backtracks or excursions that the memory-creator had undertaken as it tried to work its way around the obstacles. These experiments provide compelling evidence that blind agents learn to build and use implicit map-like representations of their environment solely through learning to navigate. Intriguingly further still, we find that surprisingly detailed metric occupancy maps of the environment (indicating free-space) can be explicitly decoded from the agent's memory.

4) Are maps task-dependent? Yes. We find that the emergent maps are a function of the navigation goal. Agents 'forget' excursions and detours, i.e. their episodic memory only preserves the features of the environment relevant to navigating to their goal. This, in part, explains why transplanting episodic memory from one agent to another leads it to take shortcuts – because the excursions and detours are simply forgotten.

Overall, our experiments and analyses demonstrate that 'blind' agents solve PointGoalNav by combining information over long time horizons to build detailed maps of their environment, solely through the learning signals imposed by goal-driven navigation. In biological systems, convergent evolution of analogous structures that cannot be attributed to a common ancestor (e.g. eyes in vertebrates and jellyfish (Kozmik et al., 2008)) is often an indicator that the structure is a natural response to the ecological niche and selection pressures. Analogously, our results suggest that mapping may be a natural solution to the problem of navigation by intelligent embodied agents, whether they be biological or artificial. We now describe our findings for each question in detail.

2 BLIND AGENTS ARE EFFECTIVE NAVIGATORS
We train navigation agents for PointGoalNav in virtualized 3D replicas of real houses utilizing the AI Habitat simulator (Savva et al., 2019; Szot et al., 2021) and the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. The agent is physically embodied as a cylinder with a diameter of 0.2m and height of 1.5m. In each episode, the agent is randomly initialized in the environment, which establishes an episodic agent-centric coordinate system. The goal location is specified in cartesian coordinates (xg, yg, zg) in this system. The agent has four actions – move forward (0.25 meters), turn left (10◦), turn right (10◦), and stop (to signal reaching the goal) – and is allowed a maximum of 2,000 steps to reach the specified goal. It is equipped with an egomotion sensor providing it relative position (∆x, ∆y, ∆z) and relative 'heading' (or yaw angle) ∆θ between successive steps, which is integrated to keep track of the agent's location and heading relative to start [xt, yt, zt, θt]. This is sometimes referred to as a 'GPS+Compass' sensor in this literature (Savva et al., 2019; Wijmans et al., 2020).
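The integration this sensor performs is straightforward dead-reckoning. The sketch below is a planar illustration under an assumed body-frame convention for (∆x, ∆y); it is not the simulator's actual implementation:

```python
import math

def integrate_egomotion(deltas):
    """Dead-reckon the pose [x, y, theta] relative to the episode start
    from per-step egomotion readings (dx, dy, dtheta).

    Assumes each (dx, dy) is expressed in the agent's frame at the start
    of the step (an illustrative convention, not necessarily Habitat's).
    """
    x = y = theta = 0.0
    for dx, dy, dtheta in deltas:
        # Rotate the body-frame displacement into the episode frame.
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta = (theta + dtheta) % (2 * math.pi)
    return x, y, theta
```

For example, moving forward 1m, turning 90°, then moving forward 1m again yields a pose of (1, 1) with a heading of π/2.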
We use two task-performance dependent metrics: i) Success, defined as whether or not the agent predicted the stop action within 0.2 meters of the target, and ii) Success weighted by inverse Path Length (SPL) (Anderson et al., 2018), defined as success weighted by the efficiency of the agent's path compared to the oracle path (the shortest path). Given the high success rates we observe, SPL can be roughly interpreted as the efficiency of the path taken compared to the oracle path – e.g. an SPL of 95% means the agent took a path 95% as efficient as the oracle path, while an SPL of 50% means the agent took a path 50% as efficient. Note that performance is evaluated in previously unseen environments to evaluate whether agents can generalize, not just memorize.
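SPL has a closed form (Anderson et al., 2018): the per-episode success indicator weighted by the ratio of the shortest-path length to the maximum of the taken and shortest path lengths, averaged over episodes. A minimal sketch:

```python
def spl(episodes):
    """Success weighted by (normalized inverse) Path Length, following
    Anderson et al. (2018). Each episode is a tuple
    (success, shortest_path_length, agent_path_length), lengths in meters.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        # A failed episode contributes 0; a successful one contributes the
        # ratio of the oracle path to the (at least as long) taken path.
        total += float(success) * shortest / max(taken, shortest)
    return total / len(episodes)
```

A successful episode whose path is twice the oracle length scores 0.5; averaging it with a failure gives an SPL of 0.25.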
The agent's policy is instantiated as a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrent neural network – formally, given current observations ot = [xg, yg, zg, xt, yt, zt, θt], (ht, ct) = LSTM(ot, (ht−1, ct−1)). We refer to this (ht, ct) as the agent's internal memory representation. Note that it only contains information gathered during the current navigation episode. We train our agents for this task using a reinforcement learning (Sutton & Barto, 1992) algorithm called DD-PPO (Wijmans et al., 2020). The reward has a term for making progress towards the goal and for successfully reaching it. Neither the training procedure nor agent architecture contain explicit inductive biases towards mapping or planning relative to a map. Apx. A.1 describes training details.

[Figure 1 graphic; legend entries recovered from the figure: Forward — Collided; Forward — No Collision; Turn — No Collision; Agent; Bug — Always Right; Bug — Always Left; Clairvoyant Bug.]
Figure 1: (A) PointGoal navigation. An agent is initialized in a novel environment (blue square) and tasked with navigating to a point specified relative to the start location (red square). We study 'blind' agents, equipped with just an egomotion sensor (called GPS+Compass in this literature). (B) 'Blind' agent vs. bug. Our learned 'blind' agent compared to 2 variants and an oracle-equipped variant of the Bug algorithm (Lumelsky & Stepanov, 1987). The Bug algorithm initially orients itself towards the goal and then proceeds towards the goal. Upon hitting a wall, it follows along the wall until it reaches the other side. The oracle version is told whether wall-following left or right is optimal, providing an upper-bound on Bug algorithm performance. (C) t-SNE of the agent's internal representation for collisions. We find 4 overall clusters corresponding to the previous action taken and whether or not that action led to a collision.
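The recurrence (ht, ct) = LSTM(ot, (ht−1, ct−1)) can be written out from the standard LSTM cell equations (Hochreiter & Schmidhuber, 1997). The pure-Python sketch below uses toy dimensions purely for illustration; the actual observation encoding and hidden sizes are in the paper's Apx. A.1:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(o, state, W, U, b):
    """One update of the agent's episodic memory,
    (h_t, c_t) = LSTM(o_t, (h_{t-1}, c_{t-1})).

    W, U, b map each gate name -- input 'i', forget 'f', output 'o',
    candidate 'g' -- to its input weights, recurrent weights, and bias.
    """
    h_prev, c_prev = state

    def affine(gate):
        Wg, Ug, bg = W[gate], U[gate], b[gate]
        return [sum(Wg[k][j] * o[j] for j in range(len(o)))
                + sum(Ug[k][j] * h_prev[j] for j in range(len(h_prev)))
                + bg[k]
                for k in range(len(bg))]

    i = [sigmoid(x) for x in affine('i')]      # what to write
    f = [sigmoid(x) for x in affine('f')]      # what to keep
    out = [sigmoid(x) for x in affine('o')]    # what to expose
    g = [math.tanh(x) for x in affine('g')]    # candidate content
    c = [f[k] * c_prev[k] + i[k] * g[k] for k in range(len(c_prev))]
    h = [out[k] * math.tanh(c[k]) for k in range(len(c))]
    return h, c
```

With all parameters zero, every gate sits at 0.5, so the cell state simply halves each step; this makes the roles of the forget and output gates easy to check by hand.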
Agent | Success | SPL
1 Blind | 95.1±1.3 | 62.9±1.6
2 Clairvoyant Bug | 100±0.0 | 46.0
3 Sighted (Depth) (Ramakrishnan et al., 2021) | 94.0 | 83.0

Table 1: Agent performance on PointGoalNav. We find that blind agents are surprisingly effective (success) though not efficient (SPL) navigators. They have similar success as an agent equipped with a Depth camera and higher SPL than a clairvoyant version of the 'Bug' algorithm.
Surprisingly, we find that agents trained under this impoverished sensing regime are able to navigate with near-perfect efficacy – reaching the goal with 95.1%±1.3% success rate (Table 1), even in situations where the agent must take hundreds of actions and traverse over 25m. This performance is similar in success rate (95.1 vs 94.0)3 to a sighted agent (equipped with a depth camera) trained on a larger dataset (HM3D) (Ramakrishnan et al., 2021). The paths taken by the blind agent are moderately efficient but (as one might expect) far less so than a sighted agent (62.9 vs 83.0 SPL).

At this point, it might be tempting to believe that this is an easy navigation problem, but we urge the reader to fight hindsight bias. We contend that the SPL of this blind agent is surprisingly high given the impoverished sensor suite. To put this SPL in context, we compare it with 'Bug algorithms' (Lumelsky & Stepanov, 1987), which are motion planning algorithms inspired by insect navigation, involving an agent equipped with only a localization sensor. In these algorithms, the agent first orients itself towards the goal and then travels directly towards it until it encounters a wall, in which case it follows along the wall along one of two directions of travel. The primary challenge for Bug algorithms is determining whether to go left or right upon reaching a wall. To provide an upper bound on performance, we implement a 'clairvoyant' Bug algorithm agent with an oracle that tells it whether left or right is optimal. Even with the additional privileged information, the 'clairvoyant' Bug agent achieves an SPL of 46%, which is considerably less efficient than the 'blind' agent. Fig. 1b shows an example of the path our blind agent takes compared to 3 variants of the Bug algorithm. This shows that blind navigation agents trained with reinforcement learning are highly efficient at navigating in previously unseen environments given their sensor suite.
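As a rough illustration of this control flow, the toy grid-world sketch below moves greedily toward the goal and sidesteps when blocked. It is illustrative of the 'go toward goal, slide along obstacles' idea only, not the continuous algorithm of Lumelsky & Stepanov (1987):

```python
def bug_navigate(grid, start, goal, max_steps=200):
    """Toy grid rendition of a Bug-style strategy: step greedily toward
    the goal; when the greedy move is blocked, try sidesteps (a crude
    stand-in for wall-following). grid[r][c] == 1 marks a wall.
    Returns the list of visited cells, or None if no path is found.
    """
    def free(p):
        r, c = p
        return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0

    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        r, c = pos
        dr = (goal[0] > r) - (goal[0] < r)
        dc = (goal[1] > c) - (goal[1] < c)
        # Prefer greedy moves toward the goal, then fall back to sidesteps;
        # avoid immediately undoing the previous move.
        candidates = [(r + dr, c), (r, c + dc),
                      (r, c + 1), (r, c - 1), (r + 1, c), (r - 1, c)]
        for nxt in candidates:
            if free(nxt) and nxt != pos and (len(path) < 2 or nxt != path[-2]):
                pos = nxt
                path.append(pos)
                break
        else:
            return None
    return None
```

On a 3x3 grid with a single wall directly between start and goal, the agent detours around the obstacle and still reaches the goal.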
2.1 EMERGENCE OF WALL-FOLLOWING BEHAVIOR AND COLLISION-DETECTION NEURONS
Fig. 1b shows the blind agent exhibiting wall-following behavior (also see blue paths in Fig. A6 and videos in supplement). This behavior is remarkably consistent; the agent spends the majority of an episode near a wall. This is surprising because it is trained to navigate to the target location as quickly as possible; thus, it would be rewarded for traveling in straighter paths (that avoid walls). We hypothesize that this strategy emerges due to two factors. 1) The agent is blind; it has no way to determine where the obstacles are in the environment besides 'bumping' into them. 2) The environment is unknown to the agent. While this is clearly true for testing environments, it is also functionally true for training environments because the coordinate system is episodic; every episode uses a randomly-instantiated coordinate system based on how the agent was spawned, and since the agent is blind, it cannot perform visual localization.

We test both hypotheses. To test (2), we provide an experiment in Apx. C.1 showing that when the agent is trained in a single environment with a consistent global coordinate system, it learns to memorize the shortest paths in this environment and wall-following does not emerge. Consequently, this agent is unable to navigate in new environments, achieving 100% success on train and 0% on test.

To test (1), we analyze whether the agent is capable of detecting collisions. Note that the agent is not equipped with a collision sensor. In principle, the agent can infer whether it collided – if it tries to move forward and the resulting egomotion is atypical, then it is likely that a collision happened. This leads us to ask – does the agent's memory contain information about collisions? We train a linear classifier that uses the (frozen) internal representation (ht+1, ct+1) to predict if action at resulted in a collision (details in Apx. A.5). The classifier achieves 98% accuracy on held-out data. As comparison, random guessing on this 2-class problem would achieve 50%. This shows that the agent's memory not only predicts its collisions, but also that collision-vs-not are linearly separable in internal-representation space, which strongly suggests that the agent has learned a collision sensor.

3It may seem like the blind agent outperforms the sighted agent, but the mean performance of Ramakrishnan et al. (2021) is within our error bars.
284 |
+
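The probe-training recipe above can be sketched as follows. This is a minimal, self-contained illustration in which synthetic features stand in for the frozen (ht+1, ct+1) vectors; the hand-rolled logistic-regression probe, the dimensions, and all variable names are assumptions for illustration, not the paper's implementation (see Apx. A.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen internal representations (h_{t+1}, c_{t+1}):
# 3072-dim in the paper, 64-dim here to keep the sketch fast. Collisions are
# assumed to be linearly separable in this space, as the paper's probe found.
D, N = 64, 2000
w_true = rng.normal(size=D)
X = rng.normal(size=(N, D))            # frozen (h, c) features
y = (X @ w_true > 0).astype(float)     # 1 = action caused a collision

def train_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe; only the probe weights (w, b) are
    updated, so the 'agent' features X are never modified."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_linear_probe(X[:1500], y[:1500])
acc = (((X[1500:] @ w + b) > 0) == (y[1500:] > 0.5)).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Because only the probe's parameters are optimized, this mirrors the frozen-representation setup: no gradient ever reaches the representation being probed.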
Next, we examine how collisions are structured in the agent’s internal representation by identifying the subspace that is used for collisions. Specifically, we re-train the linear classifier with an ℓ1 weight penalty to encourage sparsity. We then select the top 10 neurons (from 3072) with the largest weight magnitude; this reduces dimensionality by 99.7% while still achieving 96% collision-vs-not accuracy. We use t-SNE (Van der Maaten & Hinton, 2008) and the techniques in Kobak & Berens (2019) to create a 2-dimension visualization of the resulting 10-dimension space. We find 4 distinct semantically-meaningful clusters (Fig. 1c). One cluster always fires for collisions, one for forward actions that did not result in a collision, and the other two correspond to turning actions. Notice that this exceedingly small number of dimensions and neurons essentially predicts all collisions and movement of the agent. We include videos in the supplementary materials.
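The sparsification step can be sketched similarly: an ℓ1-penalized probe is trained (here via a proximal soft-threshold step), the largest-magnitude weights are kept, and accuracy is re-checked on that small subspace. The data, dimensions, and penalty strength below are synthetic stand-ins, and the t-SNE visualization step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic features where only a few dimensions carry the collision signal,
# mimicking the sparse collision subspace found in the agent's memory.
D, N, K = 128, 3000, 10
w_true = np.zeros(D)
w_true[:5] = rng.normal(size=5) * 3.0    # signal lives in 5 of 128 dims
X = rng.normal(size=(N, D))
y = (X @ w_true + 0.1 * rng.normal(size=N) > 0).astype(float)

def train_l1_probe(X, y, lam=1e-3, lr=0.1, steps=800):
    """Logistic regression with an l1 penalty via a proximal
    (soft-threshold) gradient step, which drives weights to exact zero."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w = train_l1_probe(X[:2000], y[:2000])
top = np.argsort(-np.abs(w))[:K]         # keep the K largest-|w| neurons
acc = (((X[2000:, top] @ w[top]) > 0) == (y[2000:] > 0.5)).mean()
print(f"accuracy using {K}/{D} dims: {acc:.2f}")
```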
3 MEMORY IS USED OVER LONG HORIZONS
[Figure 2 plot: SPL and Success vs. memory length (log-scale x-axis, 10^0 to 10^3; y-axis 0 to 100).]
Figure 2: Navigation performance vs. memory length. Agent performance does not saturate until memory can contain information from hundreds of steps. A memory of 10^3 steps is half the maximum episode length.
Next, we examine how memory is utilized by asking if the agent uses memory solely to remember short-term information (e.g. did it collide in the last step?) or whether it also includes long-range information (e.g. did it collide hundreds of steps ago?). To answer this question, we restrict the memory capacity of our agent. Specifically, let k denote the memory budget. At each time t, we take the previous k observations, [ot−k+1, . . . , ot], and construct the internal representation (ht, ct) via the recurrence (hi, ci) = LSTM(oi, (hi−1, ci−1)) for t − k < i ≤ t, where (ht−k, ct−k) = (0, 0).
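This truncated-memory construction can be sketched with a toy LSTM cell: to evaluate budget k at time t, the state is rebuilt from only the last k observations, starting from a zeroed (h, c). The random weights and sizes below are stand-ins for the agent's trained LSTM:

```python
import numpy as np

rng = np.random.default_rng(2)
H, O = 8, 4                              # hidden size, observation size

# Random LSTM-cell weights, a stand-in for the agent's trained LSTM.
Wx = rng.normal(size=(4 * H, O)) * 0.1
Wh = rng.normal(size=(4 * H, H)) * 0.1
bias = np.zeros(4 * H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(o, h, c):
    """One standard LSTM-cell update."""
    z = Wx @ o + Wh @ h + bias
    i, f, g, out = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(out) * np.tanh(c)
    return h, c

def truncated_state(obs, t, k):
    """Rebuild (h_t, c_t) from only the last k observations,
    with the zeroed starting state (h_{t-k}, c_{t-k}) = (0, 0)."""
    h, c = np.zeros(H), np.zeros(H)
    for i in range(max(0, t - k + 1), t + 1):
        h, c = lstm_step(obs[i], h, c)
    return h, c

obs = rng.normal(size=(50, O))           # an episode of observations
h_full, _ = truncated_state(obs, t=49, k=50)   # unbounded memory
h_k1, _ = truncated_state(obs, t=49, k=1)      # memoryless (k = 1)
print(np.allclose(h_full, h_k1))
```

A budget of k equal to (or larger than) the episode length reproduces the unbounded state exactly, while small k discards everything but the recent past.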
If the agent is only leveraging its memory for short-term storage, we would expect performance to saturate at a small value of k. Instead, Fig. 2 shows that the agent leverages its memory for significantly longer-term storage. When memoryless (k = 1), the agent completely fails at the task, achieving nearly 0% success. Navigation performance as a function of the memory budget (k) does not saturate until one thousand steps. Recall that the agent can move forward 0.25 meters or turn 10◦ at each step. The average distance traveled in 1000 steps is 89.1±0.66 meters, indicating that the agent remembers information over long temporal and spatial horizons. In Apx. C.6 we train agents to operate at a specific memory budget. We find that a budget of k = 256, the largest we are able to train, is not sufficient to achieve the performance of the unbounded-memory agent.
[Figure 3A diagram: the agent network (blue LSTM) consumes observations oA and actions aA while navigating from S to T, producing hidden states hA; its final state hA_T initializes the probe network (purple LSTM), which consumes its own observations oP and actions aP. A stop-gradient sits between the probe and agent networks.]

(B)
Probe Type               SecondNav(S→T)          SecondNav(T→S)
                         Success      SPL        Success      SPL
1 AllZeroMemory          91.6±0.40    71.1±0.27  91.0±0.40    70.8±0.25
2 UntrainedAgentMemory   92.4±0.28    72.0±0.19  91.2±0.54    72.2±0.35
3 TrainedAgentMemory     96.2±0.23    85.0±0.16  96.0±0.16    84.8±0.22

Figure 3: (A) Probe experiment. First, an agent navigates (blue path, blue LSTM) from start (green sphere) to target (red sphere). After the agent navigates, we task a probe (purple LSTM) with performing the same navigation episode with the additional information encapsulated in the agent’s internal representation (or memory), hA_T. The probe is able to navigate more efficiently by taking shortcuts (purple path). As denoted by the dashed line between the probe and agent networks, the probe does not influence what the agent stores in its internal representation. Environment in the image from the Replica Dataset (Straub et al., 2019). (B) Agent memory transplant increases probe efficiency (SPL). Results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent’s path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence interval reported over 5 agent-probe pairs.
4 MEMORY ENABLES SHORTCUTS
To investigate what information is encoded in the memory of our blind agents, we develop an experimental paradigm based on ‘probe’ agents. A probe is a secondary navigation agent4 that is structurally identical to the original (sensing, architecture, etc.), but parametrically augmented with the primary agent’s constructed episodic memory representation (hT , cT ). The probe has no influence on the agent, i.e. no gradients (or rewards) flow from probe to agent (please see training details in Apx. A.2). We use this paradigm to examine whether the agent’s final internal representation contains sufficient information for taking shortcuts in the environment.
As illustrated in Fig. 3A, the agent first navigates from source (S) to target (T). After the agent reaches T, a probe is initialized5 at S, its memory initialized with the agent’s final memory representation, i.e. (h0, c0)probe = (hT , cT )agent, and tasked with navigating to T. We refer to this probe task as SecondNav(S→T). All evaluations are conducted in environments not used for training either the agent or the probe. Thus, any environmental information in the agent’s memory must have been gathered during its trajectory (and not during any past exposure during learning). Similarly, all initial knowledge the probe has of the environment must come from the agent’s memory (hT , cT )agent.
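The memory transplant itself is a simple state-initialization step, sketched below for the three probe configurations compared later in Fig. 3B. The state vectors and function name here are illustrative stand-ins, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(3)
H = 16

# Stand-ins for the trained agent's final state after navigating S -> T.
agent_h_T = rng.normal(size=H)
agent_c_T = rng.normal(size=H)

def init_probe_state(variant, agent_state=None):
    """Initial (h_0, c_0) for the probe under the three configurations
    compared in Fig. 3B."""
    if variant == "AllZeroMemory":
        return np.zeros(H), np.zeros(H)
    if variant in ("TrainedAgentMemory", "UntrainedAgentMemory"):
        h, c = agent_state
        # Stop-gradient: the probe reads the state but never updates the
        # agent; copying (rather than sharing) the arrays models that here.
        return h.copy(), c.copy()
    raise ValueError(variant)

h0, c0 = init_probe_state("TrainedAgentMemory", (agent_h_T, agent_c_T))
print(np.allclose(h0, agent_h_T))
```

The only difference between configurations is this initial state; architecture, reward, and training environments are otherwise identical.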
Our hypothesis is that the agent’s memory contains a spatial representation of the environment, which the probe can leverage. If the hypothesis is true, we would expect the probe to navigate SecondNav(S→T) more efficiently than the agent (e.g. by taking shortcuts and cutting out exploratory excursions taken by the agent). If not, we would expect the probe to perform on par with the agent, since the probe is being trained on essentially the same task as the agent6. In our experiments, we find that the probe is significantly more efficient than the agent – SPL of 62.9%±1.6% (agent) vs. 85.0%±1.6% (probe). It is worth stressing how remarkable the performance of the probe is – in a new environment, a blind probe navigating without a map traverses a path that is within 15% of the shortest path on the map. The best known sighted agents (equipped with an RGB camera, Depth sensor, and egomotion sensor) achieve an SPL of 84% on this task (Ramakrishnan et al., 2021). Essentially, the memories of a blind agent are as valuable as having vision!
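For reference, the SPL numbers above follow the standard Success weighted by Path Length metric of Anderson et al. (2018), cited in the references: per-episode success weighted by the ratio of shortest-path length to the maximum of the taken and shortest path lengths, averaged over episodes. A minimal sketch with made-up episode numbers:

```python
def spl(successes, shortest_lengths, path_lengths):
    """Success weighted by Path Length (Anderson et al., 2018):
    mean over episodes of S_i * l_i / max(p_i, l_i)."""
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, path_lengths):
        total += s * l / max(p, l)
    return total / len(successes)

# A probe taking near-shortest paths scores higher SPL than an agent that
# succeeds equally often but wall-follows (longer paths). Numbers invented.
agent = spl([1, 1, 1, 0], [10, 8, 12, 9], [16, 13, 19, 30])
probe = spl([1, 1, 1, 0], [10, 8, 12, 9], [11, 8.5, 13, 30])
print(agent, probe)
```

Note that failed episodes contribute zero regardless of path length, so SPL is bounded above by the success rate.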
Fig. 3A shows the difference in paths between the agent and probe (and videos showing more examples are available in the supplement). While the agent exhibits wall-following behavior, the probe instead takes more direct paths and rarely performs wall following. Recall that the only difference between the agent and probe is the contents of the initial hidden state – reward is identical (and available only during training), training environments are identical (although the episodes are different), and evaluation episodes are identical – meaning that the environmental representation in the agent’s episodic memory is what enables the probe to navigate more efficiently.

4To avoid confusion, we refer to this probe agent as ‘probe’ and the primary agent as ‘agent’ from this point.
5The probe’s heading at S is set to the agent’s final heading upon reaching T.
6We note that an argument can be made that if the agent’s memory is useless to the probe, then the probe is being trained on a harder task since it must learn to navigate and ignore the agent’s memory. But this argument would predict the probe’s performance to be lower, not higher, than the agent’s.

[Figure 4: (left) kernel-density IoU distributions with means 12.4% and 32.4%; (right) example ground-truth and predicted occupancy maps for episodes (A) and (B), showing navigable and non-navigable regions.]
Figure 4: Learning navigation improves map prediction from memory. (Left) Accuracy (Intersection over Union) distributions (via kernel density estimation) and means (dashed lines); TrainedAgentMemory has a higher mean than UntrainedAgentMemory with p-value ≤ 10^-5 (via Wilcoxon signed-rank test (Wilcoxon, 1992)). (Right) Example ground truth and predicted occupancy maps using TrainedAgentMemory (corresponding to (A) and (B) IoU points). Light grey is non-navigable and dark grey is navigable. The agent path is drawn in light blue and navigates from start (green) to target (red). We can see that when the agent travels close to one wall, the map decoder predicts another wall parallel to it, indicating a corridor.
We further compare this result (which we denote as TrainedAgentMemory) with two control groups: 1) AllZeroMemory: an empty (all zeros) episodic memory to test for any systematic biases in the probe tasks; this probe contains identical information at the start of an episode as the agent (i.e. no information). 2) UntrainedAgentMemory: episodic memory generated by an untrained agent (i.e. with a random setting of neural network parameters) as it is walked along the trajectory of the trained agent. This disentangles the agent’s structure from its parameters and tests whether simply being encoded by an LSTM (even one with random parameters) provides an inductive bias towards building good environmental representations (Wieting & Kiela, 2019).
We find no evidence for this inductive bias – UntrainedAgentMemory performs no better than AllZeroMemory (Fig. 3B, row 1 vs. 2). Furthermore, TrainedAgentMemory significantly outperforms both controls by +13 points SPL and +4 points Success (Fig. 3B, row 3 vs. 1 and 2). Taken together, these two results indicate that the ability to construct useful spatial representations of the environment from a trajectory is decidedly a learned behavior.
Next, we examine if there is any directional preference in the episodic memory constructed by the agent. Our claim is that even though the agent navigates from S to T, if its memory indeed contains map-like spatial representations, it should also support probes for the reverse task SecondNav(T→S). Indeed, we find that the TrainedAgentMemory probe performs the same (within margin of error) on both SecondNav(S→T) and SecondNav(T→S) (Fig. 3B, right columns) – indicating that the memory is equally useful in both directions. In Apx. C.2 we demonstrate that the probe removes excursions from the agent’s path and takes shortcuts through previously unseen parts of the environment. Overall, these results provide compelling evidence that blind agents learn to build and use implicit map-like representations that enable shortcuts and reasoning about previously untraversed locations in the environment, solely through learning to navigate between two points.
5 LEARNING NAVIGATION IMPROVES METRIC MAP DECODING
Next, we tackle the question ‘Does the agent build episodic representations capable of decoding metric maps (occupancy grids) of the environment?’. Formally, given the final representation (hT , cT )agent, we train a separate decoding network to predict an allocentric top-down occupancy grid (free-space vs. not) of the environment. As with the probes, no gradients are propagated from the decoder to the agent’s internal representation. We constrain the network to make predictions for a location only if the agent reached within 2.5 meters of it (refer to Apx. A.3 for details). Note that since the agents are ‘blind’, predictions about any unvisited location require reasoning about unseen space. As before, we compare the internal representation produced by TrainedAgentMemory to the internal representation produced by an agent with random parameters, UntrainedAgentMemory.

[Figure 5: (A) qualitative decoder-confidence example along an agent path; (B) relative decoding error vs. steps into the past (5 to 100) for the three categories described in the caption.]
Figure 5: (A) Excursion prediction example. Qualitative example of the previously-visited location decoder making systematic errors when decoding an excursion. Blue represents the confidence of the decoder that the agent was previously at a given location; we can see that it is lower in the path interval marked in red (excursion) than the rest. (B) Remembrance of excursions. Performance of decoders when predicting previous agent locations broken down into three categories. ‘Non-excursion’ is all predictions where the current location of the agent and the prediction time step are not part of an excursion. ‘Excursion’ is when the prediction time step is part of an excursion. ‘Exit’ is when the prediction time step is part of the last 10% of the excursion. The x-axis is the distance into the past and the y-axis is the relative error between the true and predicted locations.
Fig. 4 shows the distribution of map-prediction accuracy, measured as intersection-over-union (IoU) with the true occupancy grid. We find that TrainedAgentMemory enables uniformly more accurate predictions than UntrainedAgentMemory – 32.5% vs. 12.5% average IoU. The qualitative examples show that the predictor is commonly able to make accurate predictions about unvisited locations, e.g. when the agent travels close to one wall, the decoder predicts another parallel to it, indicating a corridor. These results show that the internal representation contains the information necessary to decode accurate occupancy maps, even for unseen locations. We note that the environment’s structural priors are also necessary to predict unseen locations; thus, agent memory is necessary but not sufficient.
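The IoU accuracy used here can be computed directly on boolean occupancy grids; a small synthetic example (grid contents invented for illustration):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union between two boolean occupancy grids:
    |pred AND gt| / |pred OR gt|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True                      # ground-truth navigable region
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:7] = True                    # decoder over-predicts one column
print(f"IoU = {iou(pred, gt):.2f}")      # prints "IoU = 0.80"
```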
In Apx. C.4, we conduct this analysis on ‘sighted’ navigation agents (equipped with a Depth camera and egomotion sensor). Perhaps counter-intuitively, we do not find conclusive evidence that metric maps can be decoded from the memory of sighted agents (despite their sensing suite being a strict superset of the blind agents’). Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit ‘trivial’ solutions such as the ones deep reinforcement learning is known to latch onto (Baker et al., 2020; Lehman et al., 2020; Kadian et al., 2020). We believe that the minimal perception system used in our work served to create a challenging learning problem, which in turn limited the possible ‘trivial’ solutions, thus inducing map-building.
6 MAPPING IS TASK-DEPENDENT: AGENT FORGETS EXCURSIONS
Given that the agent is memory-limited, it stands to reason that it might need to choose what information to preserve and what to ‘forget’. To examine this, we attempt to decode the agent’s past positions from its memory. Formally, given the internal state at time t, (ht, ct), we train a prediction network fk(·) to predict the agent’s location k steps into the past, i.e. ŝt−k = fk(ht, ct) + st, k ∈ [1, 256]. Given the ground truth location st−k, we evaluate the decoder via the relative L2 error ||ŝt−k − st−k|| / ||st−k − st|| (refer to Apx. A.4 for details). Qualitative analysis of past prediction results shows that the agent forgets excursions7, i.e. excursions are harder to decode (see Fig. 5a). To quantify this, we manually labelled excursions in 216 randomly sampled episodes in evaluation environments. Fig. 5b shows that excursions are harder to decode than non-excursions, indicating that the agent does indeed forget excursions. Interestingly, we find that the exit of the excursion is considerably easier to decode, indicating that the end of the excursion performs a similar function to landmarks in animal and human navigation (Chan et al., 2012).

7We define an excursion as a sub-path that approximately forms a loop.
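The relative L2 error above normalizes the decoder's mistake by how far the agent has moved since the decoded time step, so errors on distant past positions are judged on a comparable scale. A small worked example with made-up positions:

```python
import numpy as np

def relative_l2_error(s_hat_past, s_past, s_now):
    """Relative error between predicted and true past location,
    normalized by the displacement since then:
    ||s_hat_{t-k} - s_{t-k}|| / ||s_{t-k} - s_t||."""
    return np.linalg.norm(s_hat_past - s_past) / np.linalg.norm(s_past - s_now)

s_now = np.array([0.0, 0.0])
s_past = np.array([3.0, 4.0])            # agent was 5 m away k steps ago
s_hat = np.array([3.5, 4.0])             # decoder is off by 0.5 m
print(relative_l2_error(s_hat, s_past, s_now))   # prints 0.1
```

An error of 1.0 thus means the prediction is off by as much as the agent has moved, roughly the level of an uninformed guess at the current position.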
In the appendix, we study several additional questions that could not be accommodated in the main paper. In Apx. C.2 we further examine the probe’s performance. In Apx. C.3 we examine predicting future agent locations. In Apx. C.5 we use the agent’s hidden state as a world model.
7 RELATED WORK
Characterizing spatial representations. Prior work has shown that LSTMs build grid-cell (O’keefe & Nadel, 1978) representations of an environment when trained directly for path integration within that environment (Banino et al., 2018; Cueva & Wei, 2018; Sorscher et al., 2020). In contrast, our work provides no direct supervision for path integration, localization, or mapping. Banino et al. (2018) demonstrated that these maps aid in navigation by training a navigation agent that utilizes this cognitive map. In contrast, we show that LSTMs trained for navigation learn to build spatial representations in novel environments. Whether or not LSTMs trained under this setting also utilize grid-cells is a question for future work. Bruce et al. (2018) demonstrated that LSTMs learn localization when trained for navigation in a single environment. We show that they learn mapping when given location and trained in many environments. Huynh et al. (2020) proposed a spatial memory architecture and demonstrated that a spatial representation emerges when trained on a localization task. We show that spatial representations emerge in non-spatial neural networks trained for navigation. Dwivedi et al. (2022) examined what navigation agents learn about their environments. We provide a detailed account of emergent mapping in larger environments, over longer time horizons, and show the emergence of intelligent behavior and mapping in blind agents, which was not the focus of prior work.
‘Map-free’ navigation agents. Learned agents that navigate without an explicit mapping module (called ‘map-free’ or ‘pixels-to-actions’) have shown strong performance on a variety of tasks (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). In this work, we do not provide any novel techniques nor make any experimental advancement in the efficacy of such (sighted) agents. However, we make two key findings. First, blind agents are highly effective navigators for PointGoalNav, exhibiting similar efficacy as sighted agents. Second, we begin to explain how ‘map-free’ navigation agents perform their task: they build implicit maps in their memory, although the story is a bit nuanced due to the results in Apx. C.4; we suspect this understanding might be extended in future work.
8 OUTLOOK: LIMITATIONS, REPRODUCIBILITY
In this work, we have shown that ‘blind’ AI navigation agents – agents with similar perception as blind mole rats – are capable of performing goal-driven navigation to a high degree of performance. We then showed that these AI navigation agents learn to build map-like representations (supporting the ability to take shortcuts, follow walls, and predict free-space and collisions) of their environment solely through learning goal-driven navigation. Our agents and training regime have no added inductive bias towards map-building, be it explicit or implicit, implying that cognitive maps may be a natural solution to the inductive biases imposed by navigation by intelligent embodied agents, whether they be biological or artificial. In a similar manner, convergent evolution (Kozmik et al., 2008), where two unrelated intelligent systems independently arrive at similar mechanisms, suggests that the mechanism is a natural response of having to adapt to the environment and the task.
Our results also provide an explanation of the surprising success of map-free neural-network navigation agents by showing that these agents in fact learn to build map-like internal representations with no learning signal other than goal-driven navigation. This result establishes a link between how ‘map-free’ systems navigate and analytic mapping-and-planning techniques (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990).
Our results and analyses also point towards future directions in AI navigation research. Specifically, imbuing AI navigation agents with explicit (e.g. architectural design) or implicit (e.g. training regime or auxiliary objectives) priors that bias agents towards learning an internal representation with the features found here may improve their performance. Further, it may better equip them to learn more challenging tasks, such as rearrangement of an environment by moving objects (Batra et al., 2020).
We see several limitations and areas for future work. First, we examined ground-based navigation agents operating in digitizations of real houses. This limits the agent to a 2D manifold and induces strong structural priors on environment layout. As such, it is unclear how our results generalize to a drone flying through a large forest. Second, we examined agents with a minimal perceptual system. In the supplementary text, we attempted to decode occupancy grids (metric maps) from Depth-sensor-equipped agents and did not find convincing evidence. Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit ‘trivial’ solutions. We believe that the minimal perception system used in our work also served to create such a challenging learning problem. Third, our experiments do not study the effects of actuation noise, which is an important consideration in both robot navigation systems and path integration in biological systems. Fourth, we examine an implicit map-building mechanism (an LSTM); a similar set of experiments could be performed for agents with a differentiable read/write map but no direct mapping supervision. Fifth, our agents only explore their environment for a short period of time (an episode) before their memory is reset. Animals and robots at deployment experience their environment for significantly longer periods of time. Finally, we do not provide a complete mechanistic account of how the agent learns to build its map or what else it stores in its memory.
Acknowledgements: We thank Abhishek Kadian for his help in implementing the first version of the SecondNav(T→S) probe experiment. We thank Jitendra Malik for his feedback on the draft and guidance. EW is supported in part by an ARCS fellowship. The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The Oregon State effort is supported in part by the DARPA Machine Common Sense program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
Reproducibility Statement: Implementation details of our analyses are provided in the appendix. Our work builds on datasets and code that are already open-sourced, and our analysis code will be open-sourced.
REFERENCES
|
653 |
+
Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta,
|
654 |
+
Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and
|
655 |
+
Amir Roshan Zamir. On evaluation of embodied navigation agents. CoRR, abs/1807.06757,
|
656 |
+
2018. URL http://arxiv.org/abs/1807.06757.
|
657 |
+
Reut Avni, Yael Tzvaigrach, and David Eilam. Exploration and navigation in the blind mole rat
|
658 |
+
(spalax ehrenbergi): global calibration as a primer of spatial representation. Journal of Experi-
|
659 |
+
mental Biology, 211(17):2817–2826, 2008.
|
660 |
+
Nicholas Ayache and Olivier D Faugeras. Building, registrating, and fusing noisy visual maps. The
|
661 |
+
International Journal of Robotics Research, 7(6):45–65, 1988.
|
662 |
+
Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor
|
663 |
+
Mordatch. Emergent tool use from multi-agent autocurricula. In Proceedings of the International
|
664 |
+
Conference on Learning Representations (ICLR), 2020.
|
665 |
+
Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski,
|
666 |
+
Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert
|
667 |
+
Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beat-
|
668 |
+
tie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hass-
|
669 |
+
abis, Raia Hadsell, and Dharshan Kumaran. Vector-based navigation using grid-like representa-
|
670 |
+
tions in artificial agents. Nature, 557(7705):429–433, 2018. doi: 10.1038/s41586-018-0102-6.
|
671 |
+
URL https://doi.org/10.1038/s41586-018-0102-6.
|
672 |
+
Dhruv Batra, Angel X Chang, Sonia Chernova, Andrew J Davison, Jia Deng, Vladlen Koltun, Sergey
|
673 |
+
Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, and Hao Su. Rear-
|
674 |
+
rangement: A challenge for embodied ai. In arXiv preprint arXiv:2011.01975, 2020.
|
675 |
+
Jake Bruce, Niko S¨underhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. Learning deploy-
|
676 |
+
able navigation policies at kilometer scale from a single traversal. Conference on Robot Learning
|
677 |
+
(CoRL), 2018.
|
678 |
+
Edgar Chan, Oliver Baumann, Mark A Bellgrove, and Jason B Mattingley. From objects to land-
|
679 |
+
marks: the function of visual location information in spatial navigation. Frontiers in psychology,
|
680 |
+
3:304, 2012.
|
681 |
+
10
|
682 |
+
|
683 |
+
Published as a conference paper at ICLR 2023
|
684 |
+
Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva,
|
685 |
+
Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor
|
686 |
+
environments. In International Conference on 3D Vision (3DV), 2017. License: http://kaldir.
|
687 |
+
vc.in.tum.de/matterport/MP TOS.pdf.
|
688 |
+
Nicole Chapuis and Patricia Scardigli. Shortcut ability in hamsters (Mesocricetus auratus): The role of environmental and kinesthetic information. Animal Learning & Behavior, 21(3):255–265, 1993.

Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Ani Kembhavi. RobustNav: Towards benchmarking robustness in embodied navigation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Allen Cheung, Matthew Collett, Thomas S. Collett, Alex Dewar, Fred Dyer, Paul Graham, Michael Mangan, Ajay Narendra, Andrew Philippides, Wolfgang Stürzl, Barbara Webb, Antoine Wystrach, and Jochen Zeil. Still no convincing evidence for cognitive map use by honeybees. Proceedings of the National Academy of Sciences, 111(42):E4396–E4397, 2014. ISSN 0027-8424. doi: 10.1073/pnas.1413581111. URL https://www.pnas.org/content/111/42/E4396.

Holk Cruse and Rüdiger Wehner. No need for a cognitive map: Decentralized memory for insect navigation. PLOS Computational Biology, 7(3):1–10, 03 2011. doi: 10.1371/journal.pcbi.1002009. URL https://doi.org/10.1371/journal.pcbi.1002009.

Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. In Proceedings of the International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=B17JTOe0-.

Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, and Roozbeh Mottaghi. What do navigation agents learn about their environment? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10276–10285, 2022.

Russell Epstein, E Z Patai, Joshua Julian, and Hugo Spiers. The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20:1504–1513, 10 2017. doi: 10.1038/nn.4656.

Charles R. Gallistel. The organization of learning. Learning, development, and conceptual change. The MIT Press, 1990.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.

Lee Harten, Amitay Katz, Aya Goldshtein, Michal Handel, and Yossi Yovel. The ontogeny of a mammalian cognitive map in the real world. Science, 369(6500):194–197, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Peter J Huber. Robust estimation of a location parameter. In The Annals of Mathematical Statistics, pp. 73–101. JSTOR, 1964.

Tri Huynh, Michael Maire, and Matthew R. Walter. Multigrid neural memory. In Proceedings of the International Conference on Machine Learning (ICML), pp. 4561–4571. PMLR, 2020.

Stanford Research Institute. Shakey: An experiment in robot planning and learning, 1972.

P Ioalè, M Nozzolini, and F Papi. Homing pigeons do extract directional information from olfactory stimuli. Behavioral Ecology and Sociobiology, 26(5):301–305, 1990.
Published as a conference paper at ICLR 2023
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), 2015.

Lucia F Jacobs. The evolution of the cognitive map. Brain, Behavior and Evolution, 62(2):128–139, 2003.

Abhishek Kadian, Joanne Truong, Aaron Gokaslan, Alexander Clegg, Erik Wijmans, Stefan Lee, Manolis Savva, Sonia Chernova, and Dhruv Batra. Are we making real progress in simulated environments? Measuring the sim2real gap in embodied visual navigation. In IEEE Robotics and Automation Letters (RA-L), 2020.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effective: CLIP embeddings for embodied AI. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14829–14838, 2022.

Tali Kimchi, Ariane S Etienne, and Joseph Terkel. A subterranean mammal uses the magnetic compass for path integration. Proceedings of the National Academy of Sciences, 101(4):1105–1109, 2004.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Dmitry Kobak and Philipp Berens. The art of using t-SNE for single-cell transcriptomics. Nature Communications, 10(1):1–14, 2019.

Zbynek Kozmik, Jana Ruzickova, Kristyna Jonasova, Yoshifumi Matsumoto, Pavel Vopalensky, Iryna Kozmikova, Hynek Strnad, Shoji Kawamura, Joram Piatigorsky, Vaclav Paces, et al. Assembly of the cnidarian camera-type eye from vertebrate-like components. Proceedings of the National Academy of Sciences, 105(26):8989–8993, 2008.

Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. Artificial Life, 26(2):274–306, 2020.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988, 2017.

Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018.

Ronald Mathias Lockley. Animal navigation. Pan Books, 1967.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Vladimir J Lumelsky and Alexander A Stepanov. Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape. Algorithmica, 2(1-4):403–430, 1987.

Hans Maaswinkel and Ian Q Whishaw. Homing with locale, taxon, and dead reckoning strategies by foraging rats: Sensory hierarchy in spatial navigation. Behavioural Brain Research, 99(2):143–152, 1999.

Emil W Menzel. Chimpanzee spatial memory organization. Science, 182(4115):943–945, 1973.
Martin Müller and Rüdiger Wehner. Path integration in desert ants, Cataglyphis fortis. Proceedings of the National Academy of Sciences, 85(14):5287–5290, 1988.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the International Conference on Machine Learning (ICML), 2010.

John O'keefe and Lynn Nadel. The hippocampus as a cognitive map. Oxford: Clarendon Press, 1978.

Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, and Oleksandr Maksymets. Is mapping necessary for realistic PointGoal navigation? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17232–17241, 2022.

R. Peters. Cognitive maps in wolves and men. Environmental Design Research, 2:247–253, 1976.

Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al. Habitat-Matterport 3D dataset (HM3D): 1000 large-scale 3D environments for embodied AI. Neural Information Processing Systems – Benchmarks and Datasets, 2021.

Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.

Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A platform for embodied AI research. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2019.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.

Randall Smith, Matthew Self, and Peter Cheeseman. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles, pp. 167–193. Springer, 1990.

Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. In bioRxiv preprint bioRxiv:2020.12.29.424583, 2020. doi: 10.1101/2020.12.29.424583.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.

Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard A. Newcombe. The Replica dataset: A digital replica of indoor spaces. CoRR, abs/1906.05797, 2019. URL http://arxiv.org/abs/1906.05797.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 1992.

Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. Habitat 2.0: Training home assistants to rearrange their habitat. Advances in Neural Information Processing Systems (NeurIPS), 2021.
Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics (Intelligent Robotics and Autonomous Agents), 2005.

Sivan Toledo, David Shohami, Ingo Schiffner, Emmanuel Lourie, Yotam Orchan, Yoav Bartan, and Ran Nathan. Cognitive map–based navigation in wild bats revealed by a new high-throughput tracking system. Science, 369(6500):188–193, 2020.

Edward C. Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189–208, 1948. doi: 10.1037/h0061626.

Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 648–656, 2015.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.

Karl Von Frisch. The dance language and orientation of bees. Harvard University Press, 1967.

John Wieting and Douwe Kiela. No training required: Exploring random encoders for sentence classification. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. DD-PPO: Learning near-perfect PointGoal navigators from 2.5 billion frames. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Frank Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in Statistics, pp. 196–202. Springer, 1992.

Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson Env: Real-world perception for embodied agents. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. License: https://storage.googleapis.com/gibson_material/Agreement%20GDS%2006-04-18.pdf.
A METHODS AND MATERIALS

A.1 POINTGOAL NAVIGATION TRAINING

Task. In PointGoal Navigation, the agent is tasked with navigating to a point specified relative to its initial location, i.e. an input of (δx, δy) corresponds to going δx meters forward and δy meters to the right. The agent succeeds if it predicts the stop action within 0.2 meters of the specified point. The agent has access to 4 low-level actions – move forward (0.25 meters), turn left (10°), turn right (10°), and stop. There is no noise in the agent's actuations.

Sensors. The agent has access solely to an idealized GPS+Compass sensor that provides its heading and position relative to the starting orientation and location at each time step. There is no noise in the agent's sensors.
Architecture. The agent is parameterized by a 3-layer LSTM (Hochreiter & Schmidhuber, 1997) with a 512-d hidden dimension. At each time step, the agent receives observations g (the location of the goal relative to start), GPS (its current position relative to start), and compass (its current heading relative to start). We also explicitly give the agent an indicator of whether it is close to the goal in the form of min(||g − GPS||, 0.5), as we find the agent does not learn robust stopping logic otherwise. All 4 inputs are projected to 32-d using separate fully-connected layers. These are then concatenated with a learned 32-d embedding of the previous action taken to form a 160-d input that is then given to the LSTM. The output of the LSTM is then processed by a fully-connected layer to produce a softmax distribution over the action space and an estimate of the value function.
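For concreteness, the architecture above can be sketched in PyTorch as follows. This is our illustrative reconstruction, not the authors' code; the class and attribute names are ours, and the observation widths (2-d goal and GPS, 1-d compass and closeness indicator) are assumptions consistent with the description.

```python
import torch
import torch.nn as nn

class BlindPointGoalPolicy(nn.Module):
    """Illustrative sketch of the agent described above (naming is ours)."""

    def __init__(self, num_actions=4, hidden_size=512):
        super().__init__()
        # Four observations, each projected to 32-d: goal vector g, GPS,
        # compass heading, and the min(||g - GPS||, 0.5) closeness indicator.
        self.goal_fc = nn.Linear(2, 32)
        self.gps_fc = nn.Linear(2, 32)
        self.compass_fc = nn.Linear(1, 32)
        self.dist_fc = nn.Linear(1, 32)
        # Learned embedding of the previous action (extra slot for "none").
        self.prev_action_embed = nn.Embedding(num_actions + 1, 32)
        # 5 x 32 = 160-d input to a 3-layer LSTM with a 512-d hidden state.
        self.lstm = nn.LSTM(160, hidden_size, num_layers=3)
        self.action_head = nn.Linear(hidden_size, num_actions)  # softmax logits
        self.value_head = nn.Linear(hidden_size, 1)  # value-function estimate

    def forward(self, goal, gps, compass, dist, prev_action, hidden=None):
        x = torch.cat(
            [
                self.goal_fc(goal),
                self.gps_fc(gps),
                self.compass_fc(compass),
                self.dist_fc(dist),
                self.prev_action_embed(prev_action),
            ],
            dim=-1,
        ).unsqueeze(0)  # one step at a time: (seq=1, batch, 160)
        out, hidden = self.lstm(x, hidden)
        out = out.squeeze(0)
        return self.action_head(out), self.value_head(out), hidden
```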
Training Data. We construct our training data based on the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. We train on 411 scenes from Gibson and 72 from Matterport3D.
Training Procedure. We train our agents using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with Generalized Advantage Estimation (GAE) (Schulman et al., 2016). We use Decentralized Distributed PPO (DD-PPO) (Wijmans et al., 2020) to train on 16 GPUs. Each GPU/worker collects 256 steps of experience from 16 agents (each in different scenes) and then performs 2 epochs of PPO with 2 mini-batches per epoch. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 2.5 × 10−4. We set the discount factor γ to 0.99, the PPO clip to 0.2, and the GAE hyper-parameter τ to 0.95. We train until convergence (around 2 billion steps of experience).

At every timestep t, the agent is in state st, takes action at, and transitions to state st+1. It receives a shaped reward of the form:

    rt = 2.5 · Success                         if at is stop
         −∆geo_dist(st, st+1) − λ              otherwise        (1)

where ∆geo_dist(st, st+1) is the change in geodesic (shortest path) distance to goal between st and st+1, and λ = 0.001 is a slack penalty encouraging shorter episodes.
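The shaped reward of Eq. (1) is straightforward to compute per step; the sketch below is ours, with a hypothetical function name and signature:

```python
def shaped_reward(geo_dist_prev, geo_dist_curr, action, success,
                  success_reward=2.5, slack=0.001):
    """Shaped reward of Eq. (1); the function name and signature are ours.

    On stop, the agent receives the terminal success bonus; otherwise it
    receives the negative change in geodesic distance to the goal minus a
    slack penalty that encourages shorter episodes.
    """
    if action == "stop":
        return success_reward * float(success)
    return -(geo_dist_curr - geo_dist_prev) - slack
```

Note that moving closer to the goal (a negative change in geodesic distance) yields positive reward.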
Evaluation Procedure. We evaluate the agent in the 18 scenes from the Matterport3D test set. We use the episodes from Savva et al. (2019), which consist of 56 episodes per scene (1008 in total). Episodes range in distance from 1.2 to 30 meters. The ratio of geodesic distance to euclidean distance between start and goal is restricted to be greater than or equal to 1.1, ensuring that episodes are not simple straight lines. Note that reward is not available during evaluation.

The agent is evaluated under two metrics: Success, whether or not the agent called the stop action within 0.2 meters of the goal, and Success weighted by normalized inverse Path Length (SPL) (Anderson et al., 2018). SPL is calculated as follows: given the agent's path [s1, . . . , sT] and the initial geodesic distance to goal di for episode i, we first compute the length of the agent's path

    li = Σ_{t=2}^{T} ||st − st−1||_2        (2)

then SPL for episode i as

    SPLi = Successi · di / max(di, li)        (3)

We then report SPL as the average of SPLi across all episodes.
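Equations (2) and (3) can be combined into a small helper; this is an illustrative sketch (names are ours), not the official evaluation code:

```python
import numpy as np

def episode_spl(path, geodesic_dist, success):
    """SPL for one episode, per Eqs. (2)-(3); illustrative helper.

    `path` is a (T, 2) or (T, 3) array of agent positions, `geodesic_dist`
    is the initial shortest-path distance to the goal, and `success` is
    whether the agent stopped within the success radius.
    """
    path = np.asarray(path, dtype=float)
    # Eq. (2): path length is the sum of consecutive step distances.
    path_length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    # Eq. (3): the max() guard keeps SPL in [0, 1].
    return float(success) * geodesic_dist / max(geodesic_dist, path_length)
```

A perfectly efficient successful episode scores 1.0; any detour reduces the score proportionally.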
A.2 PROBE TRAINING

Task. The probe task is to either navigate from start to goal again (SecondNav(S→T)) or navigate from goal to start (SecondNav(T→S)). For SecondNav(S→T), the probe is initialized at the starting location but with the agent's final heading. For SecondNav(T→S), the probe is initialized with the agent's final heading and position. In both cases, the probe and the agent share the same coordinate system – i.e. in SecondNav(T→S), the initial GPS and Compass readings for the probe are identical to the final GPS and Compass readings for the agent. When the agent does not successfully reach the goal, the probe task is necessarily undefined and we do not instantiate a probe.

Sensors, Architecture, Training Procedure, Training Data. The probe uses the same sensor suite, architecture, training procedure, and training data as the agent, described in Section A.1.

Note that no gradients (or rewards) flow from probe to agent. From the agent's perspective, the probe does not exist. From the probe's perspective, the agent provides a dataset of initial locations (or goals) and initial hidden states.

Evaluation Procedure. We evaluate the probe in a similar manner to the agent, except that any episode which the agent is unable to complete (5%) is removed, as the probe task is undefined when the agent is unable to complete the task. The agent reaches the goal 95% of the time, thus only 50 out of 1008 possible probe evaluation episodes are invalidated. The control probe type accounts for this. We ignore the agent's trajectory when computing SPL for the probe.
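The hand-off from agent to probe can be sketched as follows: in PyTorch, blocking gradient (and reward) flow back into the agent amounts to detaching the agent's final LSTM state before handing it to the probe. This is an illustrative sketch (the function name is ours), not the authors' code.

```python
import torch

def init_probe_state(agent_hidden):
    """Hand the agent's final LSTM state (h, c) to the probe.

    Detaching the tensors severs them from the agent's computation graph,
    so no gradient can flow from the probe back into the agent.
    """
    h, c = agent_hidden
    return h.detach().clone(), c.detach().clone()
```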
A.3 OCCUPANCY MAP DECODING

Task. We train a decoding network to predict the top-down occupancy map of the environment from the final internal state of the agent (ht, ct). We limit the decoder to only predict within 2.5 meters of any location the agent visited.

Architecture. The map-decoder is constructed as follows: First, the internal state (ht, ct) is concatenated into a 512×6-d vector. The vector is then passed to a 2-layer MLP with a hidden dimension of 512-d that produces a 4608-d vector. This 4608-d vector is then reshaped into a [128, 6, 6] feature map. The feature map is processed by a series of Coordinate Convolution (CoordConv) (Liu et al., 2018) and Coordinate Up-Convolution (CoordUpConv) layers that decrease the channel-depth and increase spatial resolution to [16, 96, 96]. Specifically, after an initial CoordConv with an output channel-depth of 128, we use a series of 4 CoordUpConv-CoordConv layers where each CoordUpConv doubles the spatial dimensions (quadruples spatial resolution) and each CoordConv reduces channel-depth by half. We then use a final 1x1-Convolution to create a [2, 96, 96] tensor representing the non-normalized log-probabilities of whether a given location is navigable.

Each CoordConv has kernel size 3, padding 1, and stride 1. Each CoordUpConv has kernel size 3, padding 0, and stride 2. Before all CoordConv and CoordUpConv layers, we use 2D Dropout (Srivastava et al., 2014; Tompson et al., 2015) with a zero-out probability of 0.05. We use Batch Normalization layers (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the terminal layer.
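A CoordConv layer augments the input feature map with normalized coordinate channels before a standard convolution, letting the decoder reason about absolute position. A minimal sketch (our code, following Liu et al. (2018)):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Minimal CoordConv sketch: append two channels holding each cell's
    normalized (x, y) coordinate in [-1, 1], then apply a standard Conv2d."""

    def __init__(self, in_channels, out_channels, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **conv_kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))
```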
Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training dataset. Note that while evaluation is done utilizing the final hidden state, we construct our training dataset by taking 30 time steps (evenly spaced) from the trajectory and ensuring the final step is included.

Training Procedure. We train on 8 GPUs with a batch size of 128 per GPU (total batch size of 1024). We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with an initial learning rate of 10−3, linearly scale the learning rate to 1.6 × 10−2 over the first 5 epochs (Goyal et al., 2017), and use a weight-decay of 10−5. We use the validation dataset to perform early-stopping. We use Focal Loss (Lin et al., 2017) (a weighted version of Cross Entropy Loss) with γ = 2.0, αNotNavigable = 0.75, and αNavigable = 0.25 to handle the class imbalance.

Evaluation Data and Procedure. We construct our evaluation data using the validation dataset. Note that the scenes in evaluation are novel to both the agent and the decoder. We evaluate the predicted occupancy map from the final hidden state/final time step. We collect a total of 5,000 episodes.
A.4 PAST AND FUTURE POSITION PREDICTION

Task. We train a decoder to predict the change in agent location given the internal state at time t, (ht, ct). Specifically, let st be the agent's position at time t, where the coordinate system is defined by the agent's starting location (i.e. s0 = 0), and let st+k be its position k steps into the future/past; the decoder is then trained to model f((ht, ct)) = st+k − st.

Architecture. The decoder is a 3-layer MLP with hidden sizes of 256 and 128 that produces a 3-dimensional output. We use Batch Normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the last.

Training Data. The training data is collected by executing a trained agent on episodes from the training set. For each episode, we collect all possible pairs of st, st+k for a given value of k.

Training Procedure. We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with a learning rate of 10−3, a weight decay of 10−4, and a batch size of 256. We use a Smooth L1 Loss/Huber Loss (Huber, 1964) between the ground-truth change in position and the predicted change in position. We use the validation set to perform early stopping.
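The decoder described above can be sketched as follows. The factory-function name and the exact input width are our assumptions; we take 512 × 6 = 3072, i.e. the concatenated (ht, ct) of a 3-layer, 512-d LSTM, matching the occupancy decoder's input.

```python
import torch
import torch.nn as nn

def make_position_decoder(state_dim=512 * 6):
    """Sketch of the position decoder: a 3-layer MLP with hidden sizes 256
    and 128, BatchNorm + ReLU after all layers except the last. It is
    trained against the displacement s_{t+k} - s_t with nn.SmoothL1Loss()."""
    return nn.Sequential(
        nn.Linear(state_dim, 256), nn.BatchNorm1d(256), nn.ReLU(),
        nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
        nn.Linear(128, 3),  # predicted 3-d displacement
    )
```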
Evaluation Procedure. We evaluate the trained decoder on held-out scenes. Note that the held-out scenes are novel to both the agent and the decoder.

Visualization of Predictions. To visualize the predictions of past visitation, we found it easier to train a second decoder that predicts, on a 2D top-down map, all locations the agent visited previously given the internal state (ht, ct). This decoder shares the exact same architecture and training procedure as the occupancy grid decoder. The decoder removes the temporal aspect from the prediction, so it is ill-suited for any time-dependent analysis, but produces clearer visualizations.

Excursion Calibrated Analysis. To perform the excursion forgetting analysis, we use the excursion-labeled episodes. We mark the end of the excursion as the last 10% of the steps that are part of the excursion. For a given point in time t, we classify that point into one of {Non-Excursion, Excursion, Exit}. We then examine how well this point is remembered by calculating the error of predicting the point t from t + k, i.e. how well t can be predicted when it is k steps into the past. When t is part of an excursion (both the excursion and the exit), we limit t + k to either be part of the same excursion or not part of an excursion. When t is not part of an excursion, t + k must also not be part of an excursion, nor can there be any excursion in the range [t, t + k].
A.5 COLLISION PREDICTION LINEAR PROBE

Task. The task of this probe is to predict whether the previous action taken led to a collision given the current hidden state. Specifically, it seeks to learn a function Collidedt = f((ht, ct)), where (ht, ct) is the internal state at time t and Collidedt is whether or not the previous action, at−1, led to a collision.

Architecture. The architecture is a logistic classifier that takes the concatenation of the internal state and produces the log-probability of Collidedt.

Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training set. We collect a total of 10 million samples and then randomly select 1 million for training. We then normalize each dimension independently by computing its mean and standard deviation, subtracting the mean, and dividing by the standard deviation. This ensures that all dimensions have the same average magnitude.

Training Procedure. We train on 1 GPU with a batch size of 256. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5 × 10−4. We train for 20 epochs.

Evaluation Data and Procedure. We construct our evaluation data using the same procedure as the training data, but on the validation dataset, and collect 200,000 samples (which are then subsampled to 20,000).
Important Dimension Selection. To select which dimensions are important for predicting collisions, we re-train our probe with various L1 penalties. We sweep from 0 to 1000 and then select the penalty that results in the lowest number of significant dimensions without substantially reducing accuracy. We determine the number of significant dimensions by first ordering all dimensions by the L1 norm of the corresponding weight and then finding the smallest number of dimensions we can keep while maintaining 99% of the performance of keeping all dimensions for that classifier.
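The L1-penalized re-training can be sketched with scikit-learn on toy data (illustrative only, not the authors' code). An L1 penalty drives the weights of uninformative dimensions to exactly zero; in scikit-learn's parameterization, a smaller C means a stronger penalty.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the hidden-state features: only dimensions 0 and 1
# actually carry the "collision" signal in this toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# L1-penalized logistic probe; sweeping C trades sparsity against accuracy.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
probe.fit(X, y)
n_significant = np.count_nonzero(probe.coef_)
```

With a strong enough penalty, most of the 50 weights are exactly zero while the informative dimensions survive.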
The t-SNE manifold is computed using 20,000 samples. This is then randomly subsampled to 1,500 for visualization.
A.6 DATA AND MATERIALS AVAILABILITY

The Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets can be acquired from their respective distributors. Habitat (Savva et al., 2019) is open source. Code to reproduce experiments will be made available.
B ADDITIONAL DISCUSSIONS

B.1 RELATIONSHIP TO COGNITIVE MAPS

Throughout the text, we use the term 'map' to mean a spatial representation that supports intelligent behaviors like taking shortcuts. Whether or not this term is distinct from the specific concept of a 'cognitive map' is debated.

Cognitive maps, as defined by O'keefe & Nadel (1978), imply a set of properties and are generally attached to a specific mechanism. The existence of a cognitive map requires that the agent be able to reach a desired goal in the environment from any starting location without being given that starting location, i.e. be able to navigate against a map. Further, cognitive maps refer to a specific mechanism – place cells and grid cells being present in the hippocampus. Other works have also studied 'cognitive maps' without placing such restrictions on the definition (Gallistel, 1990; Tolman, 1948); however, these broader definitions have been debated (Jacobs, 2003).

Our work shows that the spatial information contained within the agent's hidden state enables map-like properties – it allows a secondary agent to take shortcuts through previously unexplored free space – and supports the decoding of a metric map. However, these do not fully cover the properties of O'keefe & Nadel (1978)'s definition, nor do we make a mechanistic claim about how this information is stored in the neural network, though we do find the emergence of collision-detection neurons.
C ADDITIONAL EXPERIMENTS

C.1 BLIND SHORTEST PATH NAVIGATION WITH TRUE STATE

In the main text, we posited that blind agents learn wall-following because it is an effective strategy for blind navigation in unknown environments. We posit that this is because the agent does not have access to true state (it does not know the current environment nor where it is in global coordinates). In this experiment we show that blind agents learn to take shortest paths, as opposed to wall-following, when trained in a single environment (implicitly informing the agent of the current environment) and when using the global coordinate system.8

We use an identical agent architecture and training procedure as outlined for PointGoal navigation training in the Materials and Methods, with two differences: 1) a single training and test environment, and 2) usage of the global coordinates within the environment for both goal specification and the agent's GPS+Compass sensor. We perform this experiment on 3 scenes, 1 from the Gibson val dataset and 2 from the Matterport3D val dataset. The average SPL during training is 99±0.1, showing that the blind agent learns shortest-path navigation, not wall-following. Figure A6 shows examples of an agent trained in a single scene with global coordinates and an agent trained in many scenes with episodic coordinates.

These two settings, i) where the agent uses an episodic coordinate system and navigates in unknown environments, and ii) where the agent uses global coordinates and navigates in a known environment, can be seen as the difference between a partially observable Markov decision process (POMDP) and a Markov decision process (MDP). In the POMDP case, the agent must learn a generalizable policy, while it can overfit in the MDP case.
C.2
|
1092 |
+
FURTHER ANALYSIS OF THE PROBE’S PERFORMANCE
|
1093 |
+
In the main text, we showed that the probe is indeed much more efficient than the agent, but how is this gain achieved? Our hypothesis is that the probe improves upon the agent's path by taking shortcuts and eliminating excursions (an 'out and back'). We define an excursion as a sub-path that approximately forms a loop. To quantify excursions, we manually annotate them in 216 randomly sampled episodes in evaluation environments. Of the labeled episodes, 62% have at least 1 excursion. On average, an episode has 0.95 excursions, and excursions have an average length of 101 steps (corresponding to 8.23 meters). Since excursions represent unnecessary portions of the trajectory, this indicates that the probe should be able to improve upon the agent's path by removing these excursions.
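The annotation of excursions was done manually. Purely as an illustration, a loop-closure heuristic in the spirit of this definition ('a sub-path that approximately forms a loop') can be sketched in a few lines; the thresholds below are hypothetical, not the annotation protocol:

```python
import numpy as np

def find_excursions(path, closure_radius=0.5, min_length=3.0):
    """Flag loop-like sub-paths: endpoints within `closure_radius` metres
    of each other while the distance walked between them exceeds
    `min_length` metres. Returns (start_idx, end_idx) pairs."""
    path = np.asarray(path, dtype=float)
    # cumulative distance walked up to each point
    steps = np.linalg.norm(np.diff(path, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(steps)])
    excursions = []
    i = 0
    while i < len(path) - 1:
        found = None
        for j in range(len(path) - 1, i, -1):  # prefer the longest loop
            if (np.linalg.norm(path[j] - path[i]) < closure_radius
                    and cum[j] - cum[i] > min_length):
                found = j
                break
        if found is not None:
            excursions.append((i, found))
            i = found  # resume after the loop closes
        else:
            i += 1
    return excursions

# A path that goes 3m out and comes back before continuing: one excursion.
loop_path = [(0, 0), (1, 0), (2, 0), (3, 0), (2, 0), (1, 0), (0.1, 0), (0, 1)]
print(find_excursions(loop_path))  # [(0, 6)]
```

A production version would use geodesic rather than straight-line distance for the closure test, since two points can be close in space but far apart through the environment.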
We quantify this excursion removal via the normalized Chamfer distance between the agent's path and the probe's path. Formally, given the agent's path Agent = [s_1^(agent), ..., s_T^(agent)] and the probe's path Probe = [s_1^(probe), ..., s_N^(probe)], where s ∈ R^3 is a point in the environment:

    PathDiff(Agent, Probe) = (1/T) Σ_{i=1}^{T} min_{1≤j≤N} GeoDist(s_i^(agent), s_j^(probe)),    (4)

where GeoDist(·, ·) indicates the geodesic distance (shortest traversable path length).
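A minimal sketch of the one-directional Chamfer distance in Eq. (4), with Euclidean distance standing in for GeoDist (which in our setting would be computed over the navigable space of the scene):

```python
import numpy as np

def path_diff(path_a, path_b):
    """One-directional (normalized) Chamfer distance of Eq. (4):
    average, over points of path_a, of the distance to the closest
    point of path_b. Euclidean distance stands in for GeoDist."""
    a = np.asarray(path_a, dtype=float)
    b = np.asarray(path_b, dtype=float)
    # pairwise distance matrix, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean()

agent = [(0, 0), (1, 0), (2, 0), (2, 1)]   # agent path with a detour
probe = [(0, 0), (2, 1)]                   # probe shortcut
print(path_diff(agent, probe))  # 0.5: detour points are far from the probe
print(path_diff(probe, agent))  # 0.0: every probe point lies on the agent path
```

The asymmetry visible in the toy example is exactly what the Excursion Removal gap below exploits.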
Note that the Chamfer distance is not symmetric. PathDiff(Probe, Agent) measures the average distance of a point on the probe path s_j^(probe) from the closest point on the agent path. A large PathDiff(Probe, Agent) indicates that the probe travels through novel parts of the environment (compared to the agent). Conversely, PathDiff(Agent, Probe) measures the average distance of a point on the agent path s_i^(agent) from the closest point on the probe path. A large gap, PathDiff(Agent, Probe) − PathDiff(Probe, Agent), indicates that the agent path contains excursions while the probe path does not; thus,
8 Recall that in the episodic coordinate system the origin is defined by the agent's starting position and orientation. In the global coordinate system the origin is an arbitrary but consistent location (we simply use the origin for a given scene defined in the dataset). Thus, in the global coordinate system the goal is specified as 'Go to (x, y)' where x and y are specified in the global coordinate system, not with respect to the agent's current location.
Published as a conference paper at ICLR 2023
we refer to this gap as Excursion Removal. To visually understand why this is the case, consider the example agent and probe paths in Fig. A7. Point (C) lies on an excursion in the agent path. It contributes a term to PathDiff(Agent, Probe) but not to PathDiff(Probe, Agent) because (D) is closer to (E) than to (C).
On both SecondNav(S→T) and SecondNav(T→S), we find that as the efficiency of a probe increases, Excursion Removal also increases (Table A2, row 1 vs. 2, 2 vs. 3), confirming that the TrainedAgentMemory probe is more efficient because it removes excursions.
We next consider whether the TrainedAgentMemory probe also travels through previously unexplored space in addition to removing excursions. To quantify this, we report PathDiff(Probe, Agent) on episodes where agent SPL is less than average (less than 62.9%).9 If probes took the same path as the agent, we would expect this metric to be zero. If, however, probes travel through previously unexplored space to minimize travel distance, we would expect this metric to be significantly non-zero. Indeed, on SecondNav(S→T), we find the TrainedAgentMemory probe is 0.32 meters away on average from the closest point on the agent's path (a 99% empirical bootstrap of the mean gives a range of (0.299, 0.341)). See Fig. A7 for a visual example. On SecondNav(T→S), this effect is slightly more pronounced: the TrainedAgentMemory probe is 0.55 meters away on average (a 99% empirical bootstrap of the mean gives a range of (0.52, 0.588)). Taken holistically, these results show that the probe is both more efficient than the agent and consistently travels through new parts of the environment (that the agent did not travel through). Thus, the spatial representation in the agent's memory is not simply a 'literal' episodic summarization, but also contains anticipatory inferences about previously unexplored spaces being navigable (e.g., traveling along the hypotenuse instead of the sides of a room).
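The empirical bootstrap interval for a mean, as reported above, can be sketched generically as follows; the resampling count and seed are arbitrary choices, not values from the analysis:

```python
import numpy as np

def bootstrap_mean_ci(samples, level=0.99, n_resamples=10_000, seed=0):
    """Empirical (percentile) bootstrap confidence interval for the mean:
    resample the data with replacement many times and take quantiles of
    the resampled means."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    idx = rng.integers(0, len(samples), size=(n_resamples, len(samples)))
    means = samples[idx].mean(axis=1)  # mean of each resampled dataset
    alpha = (1.0 - level) / 2.0
    return np.quantile(means, alpha), np.quantile(means, 1.0 - alpha)

rng = np.random.default_rng(1)
data = rng.normal(loc=0.32, scale=0.2, size=500)  # toy PathDiff values
lo, hi = bootstrap_mean_ci(data)
print(f"mean={data.mean():.3f}, 99% CI=({lo:.3f}, {hi:.3f})")
```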
In the text above we reported free space inference only on episodes where the agent gets a below-average SPL. In Fig. A12 we provide a plot of Free Space Inference vs. Agent SPL to show the impact of other cutoff points. In Fig. A13 we provide a similar plot of Excursion Removal vs. Agent SPL. In both cases, as agent SPL increases, the probe is able to infer less free space and remove fewer excursions.
C.3 FUTURE VISITATION PREDICTION
In the main text we examined what types of systematic errors are made when decoding past agent locations. Here we provide additional analysis and look at predicting future observations, as that will reveal whether there are any idiosyncrasies between what can be predicted about the future and what will actually happen in the future.
Given the ground truth location s_{t+k}, we evaluate the decoder via i) absolute L2 error ||ŝ_{t+k} − s_{t+k}|| and ii) relative L2 error ||ŝ_{t+k} − s_{t+k}|| / ||s_{t+k} − s_t||. To determine baseline (or chance) performance, we train a second set of decoders where, instead of using the correct internal state (h_t, c_t) as the input, we randomly select an internal state from a different trajectory. This evaluates whether there are any inherent biases in the task.
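A small sketch of the two error metrics. Note that the normalization in the relative error means a decoder that always predicts 'the agent stays at s_t' scores a relative error of exactly 1.0, a useful reference point:

```python
import numpy as np

def l2_errors(pred, true_future, current):
    """Absolute and relative L2 error of a predicted position.
    Relative error normalizes by how far the agent actually moved."""
    pred, true_future, current = map(np.asarray, (pred, true_future, current))
    abs_err = np.linalg.norm(pred - true_future)
    rel_err = abs_err / np.linalg.norm(true_future - current)
    return abs_err, rel_err

s_t, s_future = np.array([0.0, 0.0]), np.array([3.0, 4.0])
abs_e, rel_e = l2_errors(s_t, s_future, s_t)
print(abs_e, rel_e)  # 5.0 1.0 (the 'no movement' guess)
abs_e, rel_e = l2_errors([3.0, 3.0], s_future, s_t)
print(abs_e, rel_e)  # 1.0 0.2
```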
In Fig. A8, we find that the decoder is able to accurately predict where the agent has been, even for long time horizons – e.g., at 100 time steps in the past, relative error is 0.55 and absolute error is 1.0m, compared to a relative error of 1.0 and absolute error of 3.2m for the chance baseline. For short time horizons the decoder is also able to accurately predict where the agent will be in the future – e.g., at 10 time steps into the future, relative and absolute error are below chance. Interestingly, we see that for longer-range future predictions, the decoder is worse than chance in relative error but on par in absolute error. This apparent contradiction arises because the decoders make (relatively) large systematic errors when the agent backtracks. In order for the decoder to predict backtracking, the agent would need to already know that its future trajectory will be sub-optimal (i.e., lead to backtracking) but still take that trajectory. This contradicts the objective the agent is trained for, to reach the goal as quickly as possible, and thus the agent would not take a given path if it knew it would lead to backtracking.
9 We restrict to a subset where the agent has relatively low SPL to improve dynamic range. When the agent has high SPL, there won't be excursions to remove and this metric will naturally be low. In the supplementary text we provide plots of this metric vs. agent SPL.
C.4 EXTENSION TO SIGHTED NAVIGATION AGENTS
In the main text we analyzed how 'blind' agents, those with limited perceptual systems, utilize their memory, and found evidence that they build cognitive maps. Here, we extend our analysis to agents with rich perceptual systems, those equipped with a Depth camera and an egomotion sensor. Our primary experimental paradigm relies on showing that a probe is able to take shortcuts when given the agent's memory, i.e., that the probe can take a shorter path than the agent. Navigation agents with vision can perform PointNav near-perfectly (Wijmans et al., 2020), and thus there is no room for improvement, rendering this experiment infeasible. As a supplement to this experiment, we also show that a metric map (top-down occupancy grid) can be decoded from the agent's memory. This procedure can also be applied to sighted agents.
We use the ResNet50 (He et al., 2016) Gibson-2plus (Xia et al., 2018) pre-trained model from Wijmans et al. (2020) and train an occupancy grid decoder using the same procedure as in the main text. Note, however, that we use only Gibson for training and the Gibson validation scenes as held-out data instead of Matterport3D, as this agent was trained only on Gibson. As before, we compare performance from TrainedAgentMemory with UntrainedAgentMemory.
We find mixed results. When measuring performance with Intersection-over-Union (IoU), UntrainedAgentMemory outperforms TrainedAgentMemory (40.1% vs. 42.9%). However, when measuring performance with average class balanced accuracy, TrainedAgentMemory outperforms UntrainedAgentMemory (61.8% vs. 53.1%). Fig. A9 and Fig. A10 show the corresponding distribution plots.
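The two metrics can rank predictors differently because IoU (computed here for the navigable class) is dominated by the majority class, while class balanced accuracy averages per-class recall. A toy sketch of how the rankings can flip (illustrative grids, not our data):

```python
import numpy as np

def iou(pred, truth):
    """IoU of the positive (navigable) class."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def class_balanced_accuracy(pred, truth):
    """Average of per-class recall (navigable and non-navigable)."""
    pos = pred[truth == 1].mean()        # recall on navigable cells
    neg = (1 - pred)[truth == 0].mean()  # recall on non-navigable cells
    return (pos + neg) / 2

truth = np.zeros((10, 10), dtype=int)
truth[:, :8] = 1                      # 80% of the map is navigable
all_navigable = np.ones_like(truth)   # predictor A: says everything is free
misses_some = truth.copy()
misses_some[:5, :4] = 0               # predictor B: misses some free space

for name, pred in [("A", all_navigable), ("B", misses_some)]:
    print(name, round(iou(pred, truth), 3),
          round(class_balanced_accuracy(pred, truth), 3))
# A scores higher IoU (0.8 vs 0.75); B scores higher balanced accuracy
# (0.875 vs 0.5), because A has zero recall on the non-navigable class.
```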
Overall, this experiment does not provide convincing evidence either way as to whether vision-equipped agents build metric maps in their memory. However, it does show that vision-equipped agents, if they do maintain a map of their environment, create one that is considerably more challenging to decode. Further, we note this does not necessarily imply similarly mixed results as to whether vision agents maintain a sparser but still spatial representation, such as a topological graph, as their rich perception can fill in the details in the moment.
C.5 NAVIGATION FROM MEMORY ALONE
In the main text we showed that agents learn to build map-like representations. A map-like representation of the environment should, to a degree, support navigation with no external information, i.e., by dead reckoning. Given that the actions are deterministic, the probe should be able to perform either task using only the agent's internal representation and the previously taken action, without external inputs. The localization performed by the probe in this setting is similar to path integration; however, it must also be able to handle any collisions that occur when navigating.
Fig. A11 shows performance vs. episode length for SecondNav(S→T) and SecondNav(T→S). There are two primary trends. For short navigation episodes (≤5m), the agent is often able to complete the task. We also find that under this setting, SecondNav(T→S) is the easier task. This is due to the information conveyed to the probe by its initial heading. In SecondNav(T→S), the probe can make progress by simply turning around and going forward, while in SecondNav(S→T), the final heading of the agent is not informative of which way the probe should navigate initially. Overall, these results show that the representation built by the agent is sufficient to navigate short distances with no external information.
Experiment procedure. This experiment mirrors the probe experiment described in the Materials and Methods with three differences: 1) The input from the GPS+Compass sensor is zeroed out. 2) The change-in-distance-to-goal shaping in the reward is normalized by the distance from the initial state to the goal; we find that the prediction of the value function suffers considerably otherwise. 3) An additional reward signal is added for whether the last action taken decreased the angle between the probe's current heading and the direction along the shortest path to the goal; without it, we find the probe has trouble learning to turn around on the SecondNav(T→S) task (as it almost always starts facing 180° in the wrong direction).
Let h_t^{gt} be the heading along the shortest path to goal from the probe's current position s_t, and let h_t be the probe's current heading; then AngularDistance(h_t^{gt}, h_t) is the error in the probe's heading. The full reward for this probe is then

    r_t(s_t, a_t, s_{t+1}) =
        2.5 · Success,                                                                                  if a_t is Stop
        −10.0 · ∆geo_dist(s_t, s_{t+1}) / GeoDist(s_0, g) − 0.25 · ∆HeadingError(s_t, s_{t+1}) − λ,     otherwise    (5)
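A sketch of the reward in Eq. (5), assuming ∆ denotes the change between consecutive steps; the slack value λ and the straight-line stand-in for the geodesic distance are illustrative assumptions, not values from our setup:

```python
import numpy as np

SLACK = 0.01  # per-step slack penalty lambda (value assumed for illustration)

def probe_reward(s_t, s_t1, s_0, goal, action, success,
                 heading_err_t, heading_err_t1, geo_dist):
    """Shaped reward of Eq. (5) for the memory-only probe.
    `geo_dist` is a user-supplied distance function; in the real setup it
    would be computed on the scene's navigation mesh."""
    if action == "stop":
        return 2.5 * float(success)
    # change in geodesic distance to goal, normalized by the episode scale
    delta_geo = geo_dist(s_t1, goal) - geo_dist(s_t, goal)
    delta_heading = heading_err_t1 - heading_err_t
    return (-10.0 * delta_geo / geo_dist(s_0, goal)
            - 0.25 * delta_heading
            - SLACK)

# Toy check with straight-line distance standing in for geodesic distance:
euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
r = probe_reward(s_t=(0, 0), s_t1=(1, 0), s_0=(0, 0), goal=(10, 0),
                 action="forward", success=False,
                 heading_err_t=0.2, heading_err_t1=0.1, geo_dist=euclid)
print(round(r, 3))  # 1.015: moving toward the goal and reducing heading error
```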
C.6 MEMORY LENGTH
The method presented in the main text to examine memory length is a post-hoc analysis performed on the 'blind' PointGoal navigation agents, and thus the agent is operating out-of-distribution. From the agent's view, it is still performing a valid PointGoal navigation episode, just with a different starting location, but the agent may not have taken the same sequence of actions had it started from that location. While we would still expect performance to saturate with a small k if the memory length is indeed short, this method is imprecise for measuring the exact memory length of the agent and does not answer what memory budget is required to perform the task.
Here we examine training agents with a fixed-memory-length LSTM. Fig. A14 shows similar trends to those described in the main paper – performance increases as the memory budget increases – however, performance is higher when the agent is trained for a given memory budget. Due to the increased compute needed to train the model (e.g., training a model with a memory length of 128 is 128× more computationally costly), we were unable to train for a memory budget longer than 256. We also note the non-monotonicity in Fig. A14. We conjecture that this is a consequence of inducing the negative effects of large-batch optimization (Keskar et al., 2017) – training with a memory budget of k effectively increases the batch size by a factor of k. Keeping the batch size constant has its own drawbacks: reducing the number of parallel environments will harm data diversity and result in overfitting, while reducing the rollout length increases the bias of the return estimate and makes credit assignment harder. Thus we kept the number of environments and the rollout length constant.
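The k× training cost comes from rebuilding the recurrent state over a sliding window of the last k observations at every step, so the state can depend on at most k inputs. A toy sketch with a stand-in recurrent cell (not our LSTM):

```python
def run_with_memory_budget(step_fn, init_state, observations, k):
    """Produce an output per timestep while letting the recurrent state
    depend on only the last `k` observations: at each step the state is
    rebuilt from scratch over a sliding window, which is why a memory
    budget of k costs roughly k times more compute than plain unrolling."""
    outputs = []
    for t in range(len(observations)):
        state = init_state
        for obs in observations[max(0, t - k + 1): t + 1]:
            out, state = step_fn(obs, state)
        outputs.append(out)
    return outputs

# Toy recurrent cell: state is a running sum of inputs (stands in for an LSTM).
step = lambda obs, state: (state + obs, state + obs)
obs = [1, 2, 3, 4, 5]
print(run_with_memory_budget(step, 0, obs, k=2))    # [1, 3, 5, 7, 9]
print(run_with_memory_budget(step, 0, obs, k=100))  # [1, 3, 6, 10, 15]
```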
D SUPPLEMENTARY VIDEOS
Movies S1-3: Videos showing blind agent navigation with the location of the hidden state in the collision t-SNE space. Notice that the hidden state stays within a cluster throughout a series of actions.
                           SecondNav(S→T)       SecondNav(T→S)
  Probe Type               Excursion Removal    Excursion Removal
1 AllZeroMemory            0.21±0.017           0.21±0.004
2 UntrainedAgentMemory     0.23±0.009           0.25±0.009
3 TrainedAgentMemory       0.52±0.014           0.51±0.011

Table A2: Excursion removal results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), the representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence intervals reported over 5 agent-probe pairs.
[Figure A6: two panels, 'Novel Scene, Episodic Coordinates' and 'Known Scene, Global Coordinates'; legend: Navigable, Not Navigable, Agent Path]

Figure A6: True state trajectory comparison. Example trajectories of an agent with true state (trained for a specific environment and using global coordinates), green line, compared to an agent trained for many environments and using episodic coordinates, blue line. The latter is what we examine in this work. Notice that the agent with true state takes shortest-path trajectories while the agent without true state instead exhibits strong wall-following behavior.
[Figure A7: schematic with labeled points A–E on the agent and probe paths; annotations: Probe Path, Agent Path, PathDiff(P,A), PathDiff(A,P) − PathDiff(P,A), Excursion Removal, Free Space Inference]

Figure A7: Two categories of probe shortcut. 'Excursion Removal' is when the probe removes excursions from the agent's path; the dashed line shows the distance between the points in the excursion and the closest point in the probe's path. 'Free Space Inference' occurs when the probe travels through previously unvisited locations in the environment; the dashed lines show the distance between points in the probe's path and the closest point in the agent's path.
[Figure A8: two panels, 'Absolute L2 Error' and 'Relative L2 Error', plotting error vs. time offset (−200 to 200) for 'Actual' and 'Chance' decoders]

Figure A8: Past and future prediction. Performance of decoders trained to predict where the agent was in the past/will be in the future. The x-axis is how far into the past or future the decoder is predicting (positive values are future predictions and negative values are past predictions). The y-axis is either absolute or relative L2 error between the predicted location of the agent and the true location.
[Figure A9: distribution plot of Map Prediction Accuracy (IoU), 0.0–1.0, for UntrainedAgentMemory vs. TrainedAgentMemory]

Figure A9: Map prediction accuracy (Intersection over Union) for Depth-sensor-equipped agents.
[Figure A10: distribution plot of Map Prediction Accuracy (Class Balanced Accuracy), 0.2–0.9, for UntrainedAgentMemory vs. TrainedAgentMemory]

Figure A10: Map prediction accuracy (class balanced accuracy) for Depth-sensor-equipped agents.
[Figure A11: two panels, SecondNav(S→T) and SecondNav(T→S), plotting Performance (SPL; higher is better) vs. GeodesicDistance(Start, Goal) from 5 to 30]

Figure A11: Memory-only probe performance. Performance (in SPL; higher is better) as a function of geodesic distance from start to goal for the TrainedAgentMemory probe without inputs on SecondNav(S→T) and SecondNav(T→S). More information can be found under the 'Navigation from memory alone' header.
[Figure A12: two panels, SecondNav(S→T) and SecondNav(T→S), plotting Free Space Inference vs. Agent Performance (SPL; higher is better)]

Figure A12: Free Space Inference for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, the probe is able to take paths that infer more free space.
[Figure A13: two panels, SecondNav(S→T) and SecondNav(T→S), plotting Excursion Removal vs. Agent Performance (SPL; higher is better)]

Figure A13: Excursion Removal for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, excursion removal increases, since the probe is able to remove additional excursions.
[Figure A14: Performance (higher is better) vs. Memory Length from 0 to 250, for the metrics SPL and Success]

Figure A14: Performance vs. memory length for agents trained under a given memory length. Note that longer memory lengths are challenging to train for under this methodology, as it induces the negative effects of large-batch optimization and is computationally expensive.
[Figure A15: panels A–D of Ground Truth vs. Prediction occupancy maps (Navigable / Non-navigable); values 12.4% and 32.4% shown]

Figure A15: Map prediction with poor examples. In the main text we showed qualitative examples of the average prediction and a good prediction. Here we show two additional examples: A, a very poor quality prediction, showing that the decoder sometimes does make large mistakes; B, the average prediction for the UntrainedAgentMemory decoder, showing the qualitative difference between the average UntrainedAgentMemory and TrainedAgentMemory prediction.
-9FQT4oBgHgl3EQfKjXJ/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
-dE0T4oBgHgl3EQfxAFs/content/tmp_files/2301.02640v1.pdf.txt
ADDED
@@ -0,0 +1,627 @@
3D dose prediction for Gamma Knife radiosurgery using deep learning and data modification

Binghao Zhang1, Aaron Babier1, Timothy C.Y. Chan1, Mark Ruschin2

1 Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
2 Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Canada

E-mail: [email protected]
Abstract
Purpose: To develop a machine learning-based, 3D dose prediction methodology for Gamma Knife (GK) radiosurgery. The methodology accounts for cases involving targets of any number, size, and shape.
Methods: Data from 322 GK treatment plans was modified by isolating and cropping the contoured MRI and clinical dose distributions based on tumor location, then scaling the resulting tumor spaces to a standard size. An accompanying 3D tensor was created for each instance to account for tumor size. The modified dataset for 272 patients was used to train both a generative adversarial network (GAN-GK) and a 3D U-Net model (U-Net-GK). Unmodified data was used to train equivalent baseline models. All models were used to predict the dose distribution of 50 out-of-sample patients. Prediction accuracy was evaluated using gamma, with criteria of 4%/2mm, 3%/3mm, 3%/1mm and 1%/1mm. Prediction quality was assessed using coverage, selectivity, and conformity indices.
Results: The predictions resulting from GAN-GK and U-Net-GK were similar to their clinical counterparts, with average gamma (4%/2mm) passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively. In contrast, the gamma passing rates of the baseline models were significantly worse than those of their respective GK-specific models (p < 0.001) at all criterion levels. The quality of GK-specific predictions was also similar to that of clinical plans.
Conclusion: Deep learning models can use GK-specific data modification to predict 3D dose distributions for GKRS plans with a large range in size, shape, or number of targets. Standard deep learning models applied to unmodified GK data generated poorer predictions.

Keywords: 3D-dose prediction, Gamma Knife, automated planning, knowledge-based planning
1. Introduction
Gamma Knife (GK) radiosurgery (GKRS) is a form of radiotherapy that precisely treats abnormalities within the brain using narrow beams of radiation. GKRS is an effective treatment for a wide array of diseases including benign tumors, malignant tumors, vascular abnormalities, and functional disorders [1]. Conventional processes to generate GKRS treatment plans are time-consuming for clinicians, which has motivated several studies to explore new approaches like inverse planning [2,3]. However, a major limitation of inverse planning is that it requires human intervention to tune parameters and personalize the resulting treatment plans.
There exist automated planning methods for other modalities that can generate patient-specific parameters for inverse planning [4,5]. An integral part of these approaches is a machine learning (ML) method that produces dose predictions using patient images. There is also a small set of models that incorporate additional patient features (e.g., age, histology) to account for patient outcomes [4,5]. In general, automated planning approaches that use predicted dose distributions are called knowledge-based planning (KBP) pipelines. A KBP pipeline is typically presented as a two-stage process that leverages information from previous treatment plans to produce high-quality treatment plans for new patients without human intervention. The first stage is a dose prediction model that learns the relationship between dose and delineated medical images from previous plans. The second stage is an optimization model that generates a treatment plan from the predicted dose distribution.
Many recent advances in KBP have focused on 3D dose prediction using neural networks [4,5]. These approaches have primarily been developed and tested for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) [6-9]. However, GKRS presents three unique challenges that necessitate a new approach for dose prediction. First, there is a large range in treatment target size. Many large targets (e.g., post-operative metastases or benign tumors) are up to 25 times the diameter of small targets (e.g., small intact brain metastases) [10]. This variation in target size requires a prediction model that can adequately accommodate both the smallest and largest targets. Second, GKRS cases can have a relatively large number of targets (e.g., more than 30) with multiple dose prescription levels. As a result, the impact of dose to one target on another can vary drastically between patients. Third, targets are often separated by large amounts of healthy brain tissue. A standard ML approach that considers the whole treatment volume would require a low spatial resolution (i.e., large voxel volumes) to accommodate the computational memory limits associated with large neural networks, which would be inadequate for GKRS because it must be planned with a high spatial resolution (i.e., small voxel volumes). These factors further increase both the complexity and spatial resolution requirements of the model.
In this paper, we develop a novel GKRS dose prediction approach. This is an important first step towards creating an automated GKRS planning pipeline, since the quality of plans produced by such a pipeline is positively correlated with the quality of the dose predictions [11]. Our approach accommodates any size, number, and shape of targets without compromising the spatial resolution of the predicted dose. The proposed approach involves a novel GKRS-specific data modification method, an upscaling step, and construction of a distance tensor to relate each target back to its size. We demonstrate accuracy on a series of historically treated patient cases. Our high-quality predictions could be used to estimate parameters for inverse optimization models that generate high-quality treatment plans [6].
2. Methods
Our methods consisted of five main steps: (2.1) extracting clinical treatment plan data, (2.2) modifying plan image data, (2.3) tailoring existing neural network models for GKRS, (2.4) training dose prediction models, and (2.5) evaluating model dose predictions.

2.1 Data Extraction
This research ethics board-approved study involved retrospective access to radiotherapy plans for 322 patients who were treated at Sunnybrook Health Sciences Centre. From each plan, we extracted the MRI images, 3D dose distributions, and target contours. All target contours were delineated for treatment by a radiation oncologist on high-resolution MRIs. To visualize the heterogeneity of our dataset, we plotted the distributions of target size, number of isocenters, number of targets, and prescription dose as histograms.

2.2 Data Processing
The data was processed for our GKRS dose prediction in four major ways, which are summarized in Figure 1 and explained in the remainder of this section. Patient data was first processed into a format amenable to computer vision models (e.g., consistent nomenclature, data aligned on a voxel grid). Most notably, we converted each target contour into a mask that labelled voxels in healthy tissue with 0 and voxels in targets with their prescription dose (e.g., 25 Gy). These masks enabled our dose prediction models to handle plans with the wide range of dose prescription levels that are common in GKRS. This standard pre-processing was applied to all our data, and the resulting dataset was used to train and test our baseline models. We developed three additional pre-processing techniques for our GKRS-specific approach.

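As a minimal sketch of this masking step (the function name, array shapes, and toy labels are illustrative assumptions, not the authors' code), a volume of integer target labels can be converted into a prescription-dose mask with a simple lookup:

```python
import numpy as np

def build_target_mask(target_ids, prescriptions):
    """Convert a volume of integer target labels (0 = healthy tissue)
    into a mask holding each target's prescription dose in Gy."""
    mask = np.zeros_like(target_ids, dtype=float)
    for tid, dose_gy in prescriptions.items():
        mask[target_ids == tid] = dose_gy
    return mask

# Toy 1x2x2 volume: label 0 is healthy tissue, target 1 is prescribed
# 25 Gy and target 2 is prescribed 18 Gy (hypothetical values).
labels = np.array([[[0, 1], [2, 1]]])
mask = build_target_mask(labels, {1: 25.0, 2: 18.0})
```

Encoding the prescription directly in the mask is what lets one model handle plans with different prescription levels.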
Figure 1: An overview of our workflow and the data modification techniques used in this study. Our GK-specific data modification includes transforming patient data with a novel tumor space transformation and an upscaling method. Then we create a new feature that we call a distance tensor to quantify the distance between tissue and targets.

First, we developed tumor spaces, which were engineered to isolate small volumes surrounding targets. Specifically, each tumor space was the smallest bounding box that contained at least one target surrounded by 1 cm of padding. To ensure that the dosimetric interactions between close targets were captured, any targets within 1 cm of each other were taken together in one tumor space, as shown by the example in Figure 1. We sampled these tumor spaces from the MRI, dose distribution, and target masks of each case to create a training set of 628 tumor spaces from 272 plans. Similarly, we created a testing set of 129 tumor spaces from the 50 plans in the test set.

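The bounding-box step can be sketched as follows (a hedged illustration with assumed names; the padding of 2 voxels stands in for the paper's 1 cm once converted through the voxel spacing, and overlap of padded boxes is used here as a proxy for "targets within 1 cm of each other"):

```python
import numpy as np

def padded_bbox(mask, pad_vox):
    """Smallest bounding box around the nonzero voxels of `mask`,
    expanded by `pad_vox` voxels and clipped to the volume."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - pad_vox, 0)
    hi = np.minimum(idx.max(axis=0) + pad_vox + 1, mask.shape)
    return lo, hi

def boxes_overlap(a, b):
    """True when two (lo, hi) boxes intersect, i.e. the padded regions
    of two targets touch and should share one tumor space."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

vol = np.zeros((20, 20, 20), dtype=bool)
vol[5:7, 5:7, 5:7] = True            # one small toy target
lo, hi = padded_bbox(vol, pad_vox=2)
```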
Second, we developed an upscaling technique to ensure consistent dimensionality across tumor spaces. Inconsistent dimensions normally present a challenge for computer vision models because the models are initialized to expect data with predefined dimensions. To accommodate the range of tumor space dimensions, all data was upscaled using spline interpolation to fit into a 128 x 128 x 64 voxel tensor. This tensor size was chosen to balance image detail and training time. The final upscaled tensors included the cropped MRI images, dose distributions, and target masks within each respective tumor space.

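A sketch of this resampling step, assuming SciPy is available (scipy.ndimage.zoom with order=3 performs cubic spline interpolation; the function and shape names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_to(tensor, target_shape=(128, 128, 64), order=3):
    """Resample a cropped tumor-space volume to a fixed shape using
    spline interpolation of the given order (3 = cubic)."""
    factors = [t / s for t, s in zip(target_shape, tensor.shape)]
    return zoom(tensor, factors, order=order)

crop = np.random.rand(40, 52, 30)   # a toy cropped tumor space
fixed = upscale_to(crop)            # now 128 x 128 x 64
```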
Third, for each tumor space we engineered distance tensors, which were designed to account for the distance between each voxel and its nearest target. Each element in the distance tensor represented a voxel and had a value equal to the Euclidean distance $d$ between that voxel $v$ and its nearest target centroid $t$. The measure was calculated with respect to all the target centroids $t \in T$ within the patient and evaluated over all three spatial dimensions, indexed by $i$. Specifically, the value of each element in the distance tensor was calculated as

$$d = \min_{t \in T} \sqrt{\sum_{i=1}^{3} (v_i - t_i)^2}.$$

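The formula above vectorizes directly in numpy; a small sketch (function and variable names are assumptions) that evaluates the minimum centroid distance at every voxel of a grid:

```python
import numpy as np

def distance_tensor(shape, centroids):
    """For every voxel v, the Euclidean distance to the nearest target
    centroid, i.e. d = min over t in T of sqrt(sum_i (v_i - t_i)^2)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1)   # (X, Y, Z, 3)
    cents = np.asarray(centroids, dtype=float)             # (|T|, 3)
    diffs = grid[..., None, :] - cents                     # (X, Y, Z, |T|, 3)
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=-1)

# Toy 4x4x4 grid with two target centroids at opposite corners.
dt = distance_tensor((4, 4, 4), [(0, 0, 0), (3, 3, 3)])
```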
2.3 Model Architectures
Our approach builds on the success of existing neural network models from the IMRT and VMAT literature [6,7,12,13]. Specifically, we adapted the architectures used in previous dose prediction approaches to fit the data size and structure of GKRS. Full details of the model architectures are presented in the accompanying supplement. We implemented two types of models in this study: a U-Net and a generative adversarial network (GAN). The U-Net used a standard 3D architecture to generate a 3D dose distribution from contoured MRI images [14], and a mean squared error loss function was used to train it. The GAN used a pix2pix architecture [14] to combine the same architecture as our U-Net model with a discriminator, a second neural network within the GAN that predicted the likelihood that a dose distribution came from a clinical plan rather than being generated by the U-Net. Both neural networks within the GAN were trained simultaneously, such that predictions from the discriminator were used to improve the dose produced by the U-Net model within the GAN via a typical GAN loss function. A binary cross-entropy loss function was used for the discriminator model.

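For concreteness, the two loss functions named above can be written in a few lines of numpy (a hedged illustration of the standard definitions, not the authors' TensorFlow training code):

```python
import numpy as np

def mse_loss(pred_dose, true_dose):
    """Mean squared error, used to train the U-Net generator."""
    return float(np.mean((pred_dose - true_dose) ** 2))

def bce_loss(p_real, labels, eps=1e-7):
    """Binary cross-entropy, used to train the discriminator, where
    p_real is its predicted probability that a dose is clinical
    (label 1) rather than generated (label 0)."""
    p = np.clip(p_real, eps, 1 - eps)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

pred = np.array([0.5, 0.5])
true = np.array([1.0, 0.0])
gen_loss = mse_loss(pred, true)
disc_loss = bce_loss(np.array([0.5]), np.array([1.0]))
```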
2.4 Model Training and Prediction
The modified MRI images, target masks, 3D dose distributions, and distance tensors were used to train two GKRS-specific dose prediction models, one with a GAN architecture (GAN-GK) and another with only a 3D U-Net architecture (U-Net-GK). To accommodate different prescription doses between cases, each clinical dose distribution was normalized relative to its plan's nominal prescription dose prior to training. Baseline models for the GAN (GAN-Baseline) and 3D U-Net (U-Net-Baseline) were trained on patient data without GKRS-specific processing. The networks were developed in Python 3.7 using TensorFlow 1.12.3.

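The normalization step amounts to expressing each dose in units of its plan's prescription (a trivial but load-bearing transform; names are illustrative):

```python
import numpy as np

def normalize_dose(dose, prescription_gy):
    """Express a clinical dose distribution relative to the plan's
    nominal prescription so plans with different prescription levels
    become comparable training targets."""
    return dose / prescription_gy

dose = np.array([12.5, 25.0, 50.0])   # Gy, toy values
rel = normalize_dose(dose, 25.0)      # fractions of the prescription
```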
All models were trained using the same 272 plans in our training dataset. Each model was trained for 200 epochs on an Nvidia 1080 Ti GPU with 12 GB of memory, which took approximately 6.5 days for the GAN models and 3 days for the U-Net models. All optimization was done via gradient descent using the Adam optimizer with momentum parameters β1 = 0.5, β2 = 0.999, and a learning rate of 0.0002. These hyperparameters were selected because they have been effective for a variety of other applications, and additional tuning was computationally expensive [14]. The models were trained with a batch size of eight, which was the largest size we could use due to computational limitations.

Predicted 3D dose distributions for the 50 test plans were generated using each model. Dose predictions generated by GAN-GK and U-Net-GK were scaled back to their original target size and prescription dose, and the predictions for all tumor spaces in the patient were combined to recreate a full 3D dose distribution. A dose of zero was assigned to all voxels excluded from every tumor space, and the average dose was used for voxels covered by overlapping tumor spaces.

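The recombination rule above (zero outside all tumor spaces, average where tumor spaces overlap) can be sketched with a sum-and-count accumulator; the box representation is an assumption for illustration:

```python
import numpy as np

def assemble_full_dose(volume_shape, tumor_space_doses):
    """Recombine per-tumor-space predictions into one volume. Voxels in
    no tumor space get zero; overlapping voxels get the average dose.
    `tumor_space_doses` is a list of ((lo, hi) box, dose block) pairs."""
    total = np.zeros(volume_shape)
    count = np.zeros(volume_shape)
    for (lo, hi), block in tumor_space_doses:
        sl = tuple(slice(l, h) for l, h in zip(lo, hi))
        total[sl] += block
        count[sl] += 1
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

full = assemble_full_dose(
    (4, 4, 4),
    [(((0, 0, 0), (2, 2, 2)), np.full((2, 2, 2), 10.0)),
     (((1, 1, 1), (3, 3, 3)), np.full((2, 2, 2), 20.0))],
)
```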
2.5 Analysis
To evaluate the accuracy of the predicted dose distributions relative to the clinically delivered dose, a global 3D gamma analysis was used [15,16]. For this analysis, we used four agreement criteria that have been used in other GKRS evaluations (4%/2 mm, 3%/3 mm, 3%/1 mm, and 1%/1 mm) [17-19]. A low-dose threshold equal to 5% of the maximum dose was used to compute the gamma passing rate for each patient. A two-tailed Wilcoxon signed-rank test was used to compare the gamma passing rates of the predictions made with and without data modification, with p < 0.05 considered significant.

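As a simplified illustration of the global gamma test [15,16] (a brute-force sketch for small grids with isotropic spacing; production analyses use optimized implementations, and all names here are assumptions):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dose_pct, dta_mm, low_cut=0.05):
    """Brute-force global 3D gamma passing rate. The dose criterion is a
    percentage of the reference maximum (global normalization); reference
    voxels below `low_cut` of the maximum dose are excluded."""
    ref_max = ref.max()
    d_tol = dose_pct / 100.0 * ref_max
    coords = [np.arange(s) * spacing for s in ref.shape]
    X, Y, Z = np.meshgrid(*coords, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    ev_flat = ev.reshape(-1)
    passed = total = 0
    for i, r in enumerate(ref.reshape(-1)):
        if r < low_cut * ref_max:
            continue
        total += 1
        dist2 = ((pts - pts[i]) ** 2).sum(axis=1) / dta_mm ** 2
        dose2 = (ev_flat - r) ** 2 / d_tol ** 2
        if np.min(dist2 + dose2) <= 1.0:   # gamma <= 1 means this voxel passes
            passed += 1
    return passed / total

ref = np.full((3, 3, 3), 10.0)
ev = ref * 1.02                  # a uniform 2% overshoot
rate = gamma_pass_rate(ref, ev, spacing=1.0, dose_pct=4.0, dta_mm=2.0)
```

With a uniform 2% dose error, every voxel passes a 4%/2 mm criterion.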
Further analysis using the 4%/2 mm gamma passing rate was done to explore where the GKRS-specific predictions were most successful and to identify where future improvements are needed. For this analysis, each target was divided into three regions: i) the inside, which included all the voxels in the target mask; ii) the periphery, which included all voxels within a two-voxel ring around each target; and iii) the outside, which included the remaining voxels in the tumor space.

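The two-voxel periphery ring can be built by dilating the target mask and subtracting it; a dependency-free sketch using axis shifts (this uses a 6-connected structuring element and assumes the target does not touch the volume edge, where np.roll would wrap; the paper does not specify its exact construction):

```python
import numpy as np

def two_voxel_ring(target, width=2):
    """Dilate a boolean target mask by `width` voxels via axis shifts,
    then remove the target itself, leaving the periphery ring."""
    grown = target.copy()
    for _ in range(width):
        out = grown.copy()
        for axis in range(grown.ndim):
            for shift in (-1, 1):
                out |= np.roll(grown, shift, axis=axis)
        grown = out
    return grown & ~target

target = np.zeros((7, 7, 7), dtype=bool)
target[3, 3, 3] = True            # a single-voxel toy target
ring = two_voxel_ring(target)
```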
To evaluate prediction quality, the coverage, selectivity, and conformity indices [20] were calculated for each target and compared to the same indices for the clinical doses. To compare the difference in quality between the GKRS-specific predictions and their baseline counterparts, the absolute conformity index difference between predicted and clinical plans was calculated and compared using a two-tailed Wilcoxon signed-rank test, with a significance level of 0.05.

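Using the standard definitions from the radiosurgery literature [20] (coverage = TV_PIV/TV, selectivity = TV_PIV/PIV, and the Paddick conformity index = coverage x selectivity), these indices can be computed directly from the masks; the function and toy volumes below are illustrative assumptions:

```python
import numpy as np

def plan_indices(target, dose, rx_gy):
    """Coverage, selectivity, and Paddick conformity index for one target.
    TV: target volume; PIV: prescription isodose volume; TV_PIV: overlap."""
    piv_mask = dose >= rx_gy
    tv = target.sum()
    piv = piv_mask.sum()
    tv_piv = (target & piv_mask).sum()
    coverage = tv_piv / tv
    selectivity = tv_piv / piv
    return coverage, selectivity, coverage * selectivity

target = np.zeros((4, 4, 4), dtype=bool)
target[:2, :2, :2] = True          # 8-voxel toy target
dose = np.zeros((4, 4, 4))
dose[:2, :2, :3] = 25.0            # 12-voxel prescription isodose volume
cov, sel, ci = plan_indices(target, dose, rx_gy=25.0)
```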
3. Results
3.1 Summary of Clinical Plan Data
Figure 2 summarizes the dataset that was used to train and test the models. There was a large range in the size of the targets, number of isocenters per target, and prescription dose. The number of targets per patient ranged from 1 to 26, and the types of targets included brain metastases (treated in 1 to 5 fractions) and acoustic neuromas (treated in 1 fraction). There was a large range in target volumes (34 to 184,750 voxels; 0.0085 cc to 46.1875 cc), number of isocenters (1 to 57), and target dose prescriptions (4 to 27.5 Gy). Over 37% of all targets had diameters exceeding 2 cm, and over 5% had diameters exceeding 4 cm.

Figure 2: Characteristics of the dataset used to train and test the model.

3.2 Accuracy of Predicted GK-specific 3D Dose Distributions
Figure 3 shows the distribution of the gamma passing rates of the predictions, with respect to the clinical dose, for various gamma criteria. Across all criteria, both GAN-GK and U-Net-GK achieved gamma passing rates that were significantly higher (i.e., better) than those of GAN-Baseline (Z = -7.37, p < 0.001) and U-Net-Baseline (Z = -7.33, p < 0.001). This result indicates that the GKRS-specific approaches produce dose that is more similar to clinical dose than standard baseline approaches. We also found that the performance of the two GKRS-specific approaches was comparable. For example, compared to the clinical dose using the 4%/2 mm gamma criterion, GAN-GK and U-Net-GK achieved average gamma passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively; with the much stricter 1%/1 mm gamma criterion, GAN-GK and U-Net-GK achieved much lower average passing rates of 25.2 ± 11.6% and 24.4 ± 11.3%, respectively.

Figure 3: The distribution of gamma passing rates for all models at four gamma criterion levels.

With regard to the GKRS-specific predictions, the sub-analysis of the gamma passing rates of both models showed that the inside of the target performed slightly better than the periphery on average, with 82.2 ± 19.5% of voxels passing compared to 79.8 ± 16.4%. The voxels outside of the target performed the best, with an average passing rate of 91.6 ± 10.7%.

3.3 Quality of Predicted GK-specific 3D Dose Distributions
Table 1 shows the mean and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity difference for the predictions with respect to the clinical dose. Overall, the GKRS-specific approaches dominated their baseline alternatives in terms of the coverage, selectivity, and conformity indices. Both GAN-GK and U-Net-GK predicted doses with coverage, selectivity, and conformity indices that were within 8% of the clinical doses. This result implies that the predictions were very similar to the clinical doses in quality, with an average absolute conformity difference of 0.086 ± 0.11 and 0.092 ± 0.11 for GAN-GK and U-Net-GK, respectively. In contrast, the average conformity of the baseline predictions was significantly worse than that of their corresponding clinical plans, with an average absolute conformity difference of 0.177 ± 0.16 and 0.189 ± 0.17 for GAN-Baseline and U-Net-Baseline, respectively.

Index                                  Clinical       GAN-GK         U-Net-GK       GAN-Baseline   U-Net-Baseline
Coverage index                         0.979 ± 0.02   0.952 ± 0.11   0.968 ± 0.12   0.863 ± 0.21   0.861 ± 0.22
Selectivity index                      0.554 ± 0.22   0.597 ± 0.22   0.539 ± 0.21   0.527 ± 0.21   0.542 ± 0.18
Conformity index                       0.546 ± 0.22   0.560 ± 0.20   0.513 ± 0.20   0.452 ± 0.22   0.474 ± 0.23
Absolute conformity index difference   N/A            0.086 ± 0.11   0.092 ± 0.11   0.177 ± 0.16   0.189 ± 0.17

Table 1: Average and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity index difference (compared to clinical) for the 3D dose predictions of 50 out-of-sample patients.

3.4 Visual Comparison of GK-specific Predictions to Baseline Predictions
Figure 4 shows examples of predictions made using the GK-specific models compared to predictions made using the baseline models. Two sample patients (one in each row) showcase model performance in different situations. The examples highlight the impact of the data modification pipeline, which enables high-resolution dose predictions. In addition, the baseline models often produced predictions with unrealistically low dose to small targets, as seen in Figure 4f.

Figure 4: a-b) Clinical dose distributions. c) U-Net-GK dose prediction. d) GAN-GK dose prediction. e) U-Net-Baseline dose prediction. f) GAN-Baseline dose prediction. As can be seen, predictions made using baseline models are of much lower resolution and sometimes result in low- or no-dose predictions.

4. Discussion
In this study, we present novel data modification techniques to facilitate 3D dose prediction for GKRS. We demonstrated that separating the prediction of a full dose distribution into several smaller predictions enables deep learning models to produce more accurate and reliable predictions than those obtained from off-the-shelf methods. Of note, our novel methodology was effective on a heterogeneous patient population with a large range of target shapes and sizes. This approach serves as a necessary first step towards developing a KBP pipeline for GKRS that can be adapted for use in any GKRS clinic.

Using the modified data, predictions from GAN-GK and U-Net-GK achieved gamma passing rates similar to or better than those achieved by comparable models in other disease sites [6-8]. For example, a recent study that developed approaches to predict 3D dose distributions for rectal cancer IMRT plans achieved gamma passing rates between 81-90% with a gamma criterion of 3%/5 mm [7], which is comparable to our GK-specific approaches that achieved gamma passing rates of 83-85% with a gamma criterion of 4%/2 mm. The similarity of the predictions from GAN-GK and U-Net-GK to their clinical counterparts is encouraging given the ranges in target size, shape, and quantity among the GKRS plans in our dataset.

While the predictions perform well under looser criteria, when the dose difference and distance-to-agreement are restricted to 1%/1 mm the predictions are relatively poor, with average gamma passing rates of 25.2 ± 11.6% and 24.4 ± 11.3% for GAN-GK and U-Net-GK, respectively. However, the primary factor in this drop in passing rate appears to be the stricter dose difference criterion. When the distance-to-agreement criterion is lowered from 3 mm to 1 mm, with a dose difference of 3%, the passing rate only dropped by an average of 10.3% and 8.7% for GAN-GK and U-Net-GK, respectively. These results indicate that the methodology can produce predictions that are similar in shape to their clinical counterparts. This is good for GKRS, where spatial resolution has relatively high clinical relevance due to steep dose gradients and small targets. In contrast, the predictions appear less likely to match the dose intensity on a voxel-by-voxel basis, likely due to the small voxel volumes coupled with steep dose gradients; however, achieving more accurate dose agreement is less clinically important because dose is often prescribed to an isodose line in the 50-60% range.

We included several gamma criteria to complement similar studies in the GKRS literature that compare the similarity of new dose distributions to their clinical counterparts. Our gamma analysis quantified the dosimetric accuracy of the predictions at different spatial resolutions by varying the spatial portion of the gamma criteria between 1 mm and 3 mm and the dose portion between 1% and 4%. Across all gamma criteria, the predictions made using GAN-GK and U-Net-GK performed significantly better than the baseline predictions. The lower standard deviations of the gamma passing rates of GAN-GK and U-Net-GK predictions also indicate greater consistency. Since better dose predictions are more likely to lead to higher quality plans [11], the presented prediction methodology would serve well as the first stage of a two-stage GKRS KBP pipeline.

Our novel approach to dose prediction is centred around GKRS-specific data modification. This focus differs from many previous studies that concentrate on developing new architectures [6,7,9,12,13]. Since our contributions are focused on the data modification process, we did not fully explore other factors that could improve the predictions, such as hyperparameter tuning, tensor sizes, and training duration. The results of this study demonstrate that existing dose prediction models can be tailored for GKRS by data modification alone. This enables us to leverage approaches from the rich dose prediction literature covering other sites and modalities [6,7,13,21-23]. Most of those studies used a GAN or U-Net architecture. While our GAN model (i.e., GAN-GK) produced marginally better predictions than the U-Net model (i.e., U-Net-GK), a result similar to previous studies [13], it also required more than double the training time of the U-Net model (6.5 days versus 3). As such, training and cross-validation of a U-Net model is more practical for future GKRS datasets.

There are several benefits to leveraging data modification techniques in the training process. First, the training data can use all the voxels stored in the native treatment image without exceeding computational memory constraints. This facilitates models that generate high-resolution dose predictions, as seen in Figure 4. Second, using tumor spaces generates more unique data points for the training set. In our case, tumor spaces transformed our training dataset of 272 plans into a set of 628 tumor spaces that were used to train our GK-specific models. We conjecture that increasing the number of data points in the training set enabled the models to generalize better and produce higher-quality predictions. Lastly, data modification provides flexibility in the shape of plan image data. Specifically, our approach eschews the need for consistent input dimensions because we crop and resize the data to consistent dimensions using interpolation, which makes the approach adaptable to variations in data dimensions.

We opted to use a global gamma analysis to evaluate our models in addition to traditional plan quality metrics (e.g., tumor coverage, dose conformity) since the predicted 3D dose distribution is not limited to the targets. Furthermore, in GKRS, metrics like coverage and conformity break down for small targets in particular, as there are only a few voxels, making the metrics sensitive to small perturbations. Since large dose fall-off is common in GKRS plans, global gamma was chosen instead of local gamma as it is less likely to exaggerate errors in regions with high gradients [24]. As seen in the sub-analysis, our models perform best at predicting dose to voxels outside of the target area and worst on the periphery of the target, as one would expect given the sharpness of the gradients there. While the predictions within the tumor were only marginally better than on the periphery, the variation of dose within the tumor is usually not considered when evaluating treatment plans with traditional plan quality metrics [25]. On the other hand, the results of the sub-analysis indicate that additional tuning of the models should be done to improve the predicted periphery dose, which would likely improve the coverage, selectivity, and conformity of the predicted doses.

This approach has three notable limitations. First, we used a heterogeneous dataset comprised of clinical plans with a range of target sizes, prescription doses, numbers of isocenters, and numbers of targets (see Figure 2). For example, only 3.7% of the tumor spaces in the dataset contained more than one target. As a result, the model may be less effective for patients with uncommon characteristics (e.g., patients with multiple nearby targets). Second, organs-at-risk were not considered in the models. Including organ-at-risk contours in the future would likely improve prediction quality by directing more of the model's attention towards important healthy tissue. Finally, all of our training and testing data was modified via spline interpolation, which makes model quality dependent on the size of the interpolation errors. As a result, poorly interpolated data could have adverse effects that limit model performance in both the training and testing processes.

5. Conclusion
In this study, we developed a novel KBP method for GKRS, supported by a data modification pipeline that transforms and upscales GKRS patient data for use in machine learning-based 3D dose prediction. We demonstrated that utilizing the modified data enables standard neural network models to produce high-quality dose predictions for GKRS patients that are superior to those of existing state-of-the-art techniques. The resulting predictions have the potential to support the development of high-quality treatment plans as part of an automated KBP pipeline.

6. Acknowledgements
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
References

[1] Faramand A, Lunsford DL. Gamma Knife radiosurgery: a review of epidemiology and clinical practice. 2020.

[2] Levivier M, Carrillo RE, Charrier R, Martin A, Thiran J-P. A real-time optimal inverse planning for Gamma Knife radiosurgery by convex optimization: description of the system and first dosimetry data. J Neurosurg. 2018;129(Suppl 1):111-7. https://doi.org/10.3171/2018.7.GKS181572

[3] Sjölund J, Riad S, Hennix M, Nordström H. A linear programming approach to inverse planning in Gamma Knife radiosurgery. Med Phys. 2019;46(4):1533-44. https://doi.org/10.1002/mp.13440

[4] Momin S, Fu Y, Lei Y, Roper J, Bradley J, Curran W, Liu T, Yang X. Knowledge-based radiation treatment planning: a data-driven method survey. J Appl Clin Med Phys. 2021;22(8):16-44. https://doi.org/10.1002/acm2.13337

[5] Ge Y, Wu QJ. Knowledge-based planning for intensity-modulated radiation therapy: a review of data-driven approaches. Med Phys. 2019;46(6):2760-75. https://doi.org/10.1002/mp.13526

[6] Mahmood R, Babier A, McNiven A, Diamant A, Chan TCY. Automated treatment planning in radiation therapy using generative adversarial networks. Proc Mach Learn Res. 2018;85:1-14. http://arxiv.org/abs/1807.06489

[7] Zhou J, Peng Z, Song Y, Chang Y, Pei X, Sheng L, Xu G. A method of using deep learning to predict three-dimensional dose distributions for intensity-modulated radiotherapy of rectal cancer. J Appl Clin Med Phys. 2020;21(5):26-37. https://doi.org/10.1002/acm2.12849

[8] Chen X, Men K, Li Y, Yi J, Dai J. A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning. Med Phys. 2019;46(1):56-64. https://doi.org/10.1002/mp.13262

[9] Qi M, Li Y, Wu A, Jia Q, Guo F, Lu X, et al. Region-specific three-dimensional dose distribution prediction: a feasibility study on prostate VMAT cases. J Radiat Res Appl Sci. 2020;13(1):485-95. https://doi.org/10.1080/16878507.2020.1756185

[10] Nanda A, Bir S, Ambekar S, Bollam P. Long-term outcome of gamma knife radiosurgery for metastatic brain tumors originating from lung cancer. Surg Neurol Int. 2014;5(9):396. https://doi.org/10.4103/2152-7806.140197

[11] Babier A, Mahmood R, Zhang B, Alves V, Barragán-Montero A, Beaudry J, et al. OpenKBP-Opt: an international and reproducible evaluation of 76 knowledge-based planning pipelines. Phys Med Biol. 2022;67(18). https://doi.org/10.1088/1361-6560/ac8044

[12] Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46(1):370-81. https://doi.org/10.1002/mp.13271

[13] Babier A, Mahmood R, McNiven AL, Diamant A, Chan TCY. Knowledge-based automated planning with three-dimensional generative adversarial networks. Med Phys. 2020;47(2):297-306. https://doi.org/10.1002/mp.13896

[14] Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Proc 30th IEEE Conf Comput Vis Pattern Recognition (CVPR). 2017:5967-76. https://doi.org/10.1109/CVPR.2017.632

[15] Low DA, Dempsey JF. Evaluation of the gamma dose distribution comparison method. Med Phys. 2003;30(9):2455-64. https://doi.org/10.1118/1.1598711

[16] Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25(5):656-61. https://doi.org/10.1118/1.598248

[17] Gopishankar N, Wanatabe Y, Subbiah V. MRI-based polymer gel dosimetry for validating plans with multiple matrices in Gamma Knife stereotactic radiosurgery. J Appl Clin Med Phys. 2011;12(2):133-45. https://doi.org/10.1120/jacmp.v12i2.3333

[18] Chung H, Park J, Chun K. Verification of dose profiles generated by the convolution algorithm of the gamma knife radiosurgery planning system. Med Phys. 2017;44(9):4880-9. https://doi.org/10.1002/mp.12347

[19] Park J, Han J, Kim C, Oh C, Lee D, Suh T, Gyu D, Chung H. Application of the gamma evaluation method in Gamma Knife film dosimetry. Med Phys. 2011;38(10):5778-87. https://doi.org/10.1118/1.3641644

[20] Torrens M, Chung C, Chung HT, Hanssens P, Jaffray D, Kemeny A, et al. Standardization of terminology in stereotactic radiosurgery: report from the Standardization Committee of the International Leksell Gamma Knife Society: special topic. J Neurosurg. 2014;121(December):2-15. https://doi.org/10.3171/2014.7.gks141199

[21] Nguyen D, Jia X, Sher D, Lin M, Iqbal Z, Liu H, Jiang S. 3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture. Phys Med Biol. 2019;64(6). https://doi.org/10.1088/1361-6560/ab039b

[22] Lee MS, Hwang D, Kim JH, Lee JS. Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep. 2019;9(1):1-9. https://doi.org/10.1038/s41598-019-46620-y

[23] Kearney V, Chan JW, Wang T, Perry A, Descovich M, Morin O, et al. DoseGAN: a generative adversarial network for synthetic dose prediction using attention-gated discrimination and generation. Sci Rep. 2020;10(1):1-8. https://doi.org/10.1038/s41598-020-68062-7

[24] Hussein M, Clark CH, Nisbet A. Challenges in calculation of the gamma index in radiotherapy - towards good practice. Phys Med. 2017;36:1-11. https://doi.org/10.1016/j.ejmp.2017.03.001

[25] Menon SV, Paramu R, Bhasi S, Nair RK. Evaluation of plan quality metrics in stereotactic radiosurgery/radiotherapy in the treatment plans of arteriovenous malformations. J Med Phys. 2018;43(4):214. https://doi.org/10.4103/JMP.JMP_25_18
-dE0T4oBgHgl3EQfxAFs/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
-dE1T4oBgHgl3EQfUgPr/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
1 + version https://git-lfs.github.com/spec/v1
2 + oid sha256:6c030f1c741a3b1a71b02af37b7335819567d0c0fb2991d31616f05b433736a1
3 + size 124527
.gitattributes
CHANGED
@@ -8522,3 +8522,72 @@ u9E0T4oBgHgl3EQfsQGm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
8522   eNFIT4oBgHgl3EQfoyuB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8523   ZtE5T4oBgHgl3EQfeA-9/content/2301.05616v1.pdf filter=lfs diff=lfs merge=lfs -text
8524   s9E3T4oBgHgl3EQfNAmL/content/2301.04379v1.pdf filter=lfs diff=lfs merge=lfs -text
8525 + UdE5T4oBgHgl3EQfbQ_H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8526 + d9E4T4oBgHgl3EQfQgxo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8527 + JdFJT4oBgHgl3EQfwS0f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8528 + bNE4T4oBgHgl3EQfPQwx/content/2301.04971v1.pdf filter=lfs diff=lfs merge=lfs -text
8529 + XdFJT4oBgHgl3EQf5i00/content/2301.11670v1.pdf filter=lfs diff=lfs merge=lfs -text
8530 + ONE3T4oBgHgl3EQfZgo4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8531 + atE2T4oBgHgl3EQfFAam/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8532 + 7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf filter=lfs diff=lfs merge=lfs -text
8533 + r9FKT4oBgHgl3EQfJS1Z/content/2301.11737v1.pdf filter=lfs diff=lfs merge=lfs -text
8534 + 2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8535 + xtAzT4oBgHgl3EQfQvsC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8536 + MdE1T4oBgHgl3EQfZQR8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8537 + cNFRT4oBgHgl3EQfSjci/content/2301.13529v1.pdf filter=lfs diff=lfs merge=lfs -text
8538 + bNE4T4oBgHgl3EQfPQwx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8539 + f9A0T4oBgHgl3EQfHv-A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8540 + B9E1T4oBgHgl3EQfVwR_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8541 + 2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8542 + etAyT4oBgHgl3EQfw_lx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8543 + B9E1T4oBgHgl3EQfVwR_/content/2301.03106v1.pdf filter=lfs diff=lfs merge=lfs -text
8544 + sdE5T4oBgHgl3EQfKQ7c/content/2301.05465v1.pdf filter=lfs diff=lfs merge=lfs -text
8545 + r9FKT4oBgHgl3EQfJS1Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8546 + n9FQT4oBgHgl3EQfqTb3/content/2301.13380v1.pdf filter=lfs diff=lfs merge=lfs -text
8547 + xdFKT4oBgHgl3EQfLC2B/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8548 + pNFIT4oBgHgl3EQfvyuP/content/2301.11349v1.pdf filter=lfs diff=lfs merge=lfs -text
8549 + n9FQT4oBgHgl3EQfqTb3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8550 + adAzT4oBgHgl3EQfZPz1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8551 + 39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf filter=lfs diff=lfs merge=lfs -text
8552 + TdE3T4oBgHgl3EQfEAmL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8553 + 0tFST4oBgHgl3EQfVziF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8554 + xdFKT4oBgHgl3EQfLC2B/content/2301.11744v1.pdf filter=lfs diff=lfs merge=lfs -text
8555 + a9AyT4oBgHgl3EQfiviR/content/2301.00402v1.pdf filter=lfs diff=lfs merge=lfs -text
8556 + LdE2T4oBgHgl3EQfqAhW/content/2301.04034v1.pdf filter=lfs diff=lfs merge=lfs -text
8557 + x9FAT4oBgHgl3EQfAhwk/content/2301.08398v1.pdf filter=lfs diff=lfs merge=lfs -text
8558 + ktE3T4oBgHgl3EQf5wsL/content/2301.04783v1.pdf filter=lfs diff=lfs merge=lfs -text
8559 + O9FOT4oBgHgl3EQf4DQT/content/2301.12948v1.pdf filter=lfs diff=lfs merge=lfs -text
8560 + wdE2T4oBgHgl3EQfLgZh/content/2301.03714v1.pdf filter=lfs diff=lfs merge=lfs -text
8561 + v9E4T4oBgHgl3EQfXQwv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8562 + O9FOT4oBgHgl3EQf4DQT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8563 + stE0T4oBgHgl3EQfrgHc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8564 + G9FLT4oBgHgl3EQfHS_T/content/2301.11996v1.pdf filter=lfs diff=lfs merge=lfs -text
8565 + ktFST4oBgHgl3EQfIjgl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8566 + mtFLT4oBgHgl3EQffC-V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8567 + G9FJT4oBgHgl3EQfty2e/content/2301.11619v1.pdf filter=lfs diff=lfs merge=lfs -text
8568 + fNFAT4oBgHgl3EQf7x4q/content/2301.08746v1.pdf filter=lfs diff=lfs merge=lfs -text
8569 + ONE3T4oBgHgl3EQfZgo4/content/2301.04497v1.pdf filter=lfs diff=lfs merge=lfs -text
8570 + wdE2T4oBgHgl3EQfLgZh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8571 + 5NE4T4oBgHgl3EQfbwwx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8572 + GtFKT4oBgHgl3EQfcC5D/content/2301.11814v1.pdf filter=lfs diff=lfs merge=lfs -text
8573 + PNE0T4oBgHgl3EQf1AIP/content/2301.02692v1.pdf filter=lfs diff=lfs merge=lfs -text
8574 + eNFKT4oBgHgl3EQfrS7d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8575 + etE1T4oBgHgl3EQfyQXE/content/2301.03432v1.pdf filter=lfs diff=lfs merge=lfs -text
8576 + fNFAT4oBgHgl3EQf7x4q/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8577 + etE1T4oBgHgl3EQfyQXE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8578 + BtFIT4oBgHgl3EQf_yzx/content/2301.11417v1.pdf filter=lfs diff=lfs merge=lfs -text
8579 + 6NAyT4oBgHgl3EQf2fnD/content/2301.00753v1.pdf filter=lfs diff=lfs merge=lfs -text
8580 + hNE2T4oBgHgl3EQfcQei/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8581 + R9E0T4oBgHgl3EQfUgAs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8582 + mdE2T4oBgHgl3EQfzAi5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8583 + QdE2T4oBgHgl3EQfsAgY/content/2301.04054v1.pdf filter=lfs diff=lfs merge=lfs -text
8584 + ItE0T4oBgHgl3EQfiAFl/content/2301.02439v1.pdf filter=lfs diff=lfs merge=lfs -text
8585 + T9AyT4oBgHgl3EQf8fpt/content/2301.00857v1.pdf filter=lfs diff=lfs merge=lfs -text
8586 + lNE2T4oBgHgl3EQfIwaA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8587 + XdFJT4oBgHgl3EQf5i00/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8588 + ltE0T4oBgHgl3EQfZAA9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8589 + G9FLT4oBgHgl3EQfHS_T/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8590 + HdE1T4oBgHgl3EQfXgSr/content/2301.03128v1.pdf filter=lfs diff=lfs merge=lfs -text
8591 + adFPT4oBgHgl3EQfvjWp/content/2301.13160v1.pdf filter=lfs diff=lfs merge=lfs -text
8592 + UdE0T4oBgHgl3EQf2gJy/content/2301.02713v1.pdf filter=lfs diff=lfs merge=lfs -text
8593 + T9AyT4oBgHgl3EQf8fpt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
09AyT4oBgHgl3EQf1PmO/content/tmp_files/2301.00732v1.pdf.txt
ADDED
@@ -0,0 +1,925 @@
arXiv:2301.00732v1 [cs.CC] 2 Jan 2023

Improved NP-Hardness of Approximation for Orthogonality Dimension and Minrank

Dror Chawin*    Ishay Haviv*

Abstract

The orthogonality dimension of a graph G over R is the smallest integer k for which one can assign a nonzero k-dimensional real vector to each vertex of G, such that every two adjacent vertices receive orthogonal vectors. We prove that for every sufficiently large integer k, it is NP-hard to decide whether the orthogonality dimension of a given graph over R is at most k or at least 2^((1−o(1))·k/2). We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is NP-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than 3/2. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.

1 Introduction

A graph G is said to be k-colorable if its vertices can be colored by k colors such that every two adjacent vertices receive distinct colors. The chromatic number of G, denoted by χ(G), is the smallest integer k for which G is k-colorable. As a fundamental and popular graph quantity, the chromatic number has received a considerable amount of attention in the literature from a computational perspective, as described below.

The problem of deciding whether a graph G satisfies χ(G) ≤ 3 is one of the classical twenty-one NP-complete problems presented by Karp [26] in 1972. Khanna, Linial, and Safra [28] proved that it is NP-hard to distinguish graphs G that satisfy χ(G) ≤ 3 from those satisfying χ(G) ≥ 5. This result, combined with the approach of Garey and Johnson [15] and with a result of Stahl [39], implies that for every k ≥ 6, it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ 2k − 2. Brakensiek and Guruswami [6] proved that for every k ≥ 3, it is NP-hard to distinguish between the cases χ(G) ≤ k and χ(G) ≥ 2k − 1, and the 2k − 1 bound was further improved to 2k by Barto, Bulín, Krokhin, and Opršal [4]. For large values of k, it was shown by Khot [29] that it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ k^(Ω(log k)), and the latter condition was strengthened to χ(G) ≥ 2^(k^(1/3)) by Huang [24]. A substantial improvement was recently obtained by Wrochna and Živný [40], who proved that for every k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ (k choose ⌊k/2⌋). The proof of this result combined the hardness result of [24] with the construction of line digraphs [20] and with a result of Poljak and Rödl [36]. Note that under certain variants of the Unique Games Conjecture, stronger hardness results are known to hold, namely, hardness of deciding whether a given graph G satisfies χ(G) ≤ k1 or χ(G) ≥ k2 for all integers k2 > k1 ≥ 3 [10] (see also [11]).

*School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel. Research supported by the Israel Science Foundation (grant No. 1218/20).
The present paper studies the computational complexity of algebraic variants of the chromatic number of graphs. A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector u_v ∈ F^k with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V, such that for every two adjacent vertices v and v′ it holds that ⟨u_v, u_v′⟩ = 0. Here, for two vectors x, y ∈ F^k, we consider the standard inner product defined by ⟨x, y⟩ = ∑_{i=1}^k x_i y_i with operations over F. The orthogonality dimension of G over F, denoted by ξ_F(G), is the smallest integer k for which G admits a k-dimensional orthogonal representation over F (see Remark 2.2). It can be easily seen that for every graph G and for every field F, it holds that ξ_F(G) ≤ χ(G). In addition, if F is a fixed finite field or the real field R, it further holds that ξ_F(G) ≥ Ω(log χ(G)). Both bounds are known to be tight in the worst case (see Claim 2.6 and [33, Chapter 10]). The study of orthogonal representations and orthogonality dimension was initiated in the seminal work of Lovász [32] on the ϑ-function and has found applications in various areas, e.g., information theory [32], graph theory [34], and quantum communication complexity [9, Chapter 8.5].
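To make the two conditions of an orthogonal representation concrete, here is a small Python sketch (ours, not part of the paper) that checks them over R; the toy graphs and vector assignments below are just assumed examples.

```python
import numpy as np

def is_orthogonal_representation(vectors, edges):
    """Check an orthogonal representation over R: every vector must be
    non-self-orthogonal (over R, equivalently nonzero), and the vectors of
    adjacent vertices must be orthogonal."""
    if any(np.dot(u, u) == 0 for u in vectors):
        return False
    return all(np.dot(vectors[i], vectors[j]) == 0 for i, j in edges)

# The 4-cycle C4 is 2-colorable, and its proper 2-coloring yields a
# 2-dimensional orthogonal representation: alternate e1, e2 around the cycle.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c4_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_orthogonal_representation([e1, e2, e1, e2], c4_edges)

# The triangle K3 needs three pairwise orthogonal vectors, so a 2-dimensional
# assignment fails, while the standard basis of R^3 works.
k3_edges = [(0, 1), (1, 2), (0, 2)]
assert not is_orthogonal_representation([e1, e2, e1], k3_edges)
assert is_orthogonal_representation(list(np.eye(3)), k3_edges)
```

The K3 example also illustrates the bound ξ_F(G) ≤ χ(G) mentioned above: the standard-basis assignment is exactly the representation induced by a proper coloring.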
The interest in the hardness of determining the orthogonality dimension of graphs dates back to a paper of Lovász, Saks, and Schrijver [34], where it was noted that the problem seems difficult. The aforementioned relations between the chromatic number and the orthogonality dimension yield that hardness of deciding whether a graph G satisfies χ(G) ≤ k1 or χ(G) ≥ k2 implies the hardness of deciding whether it satisfies ξ_F(G) ≤ k1 or ξ_F(G) ≥ Ω(log k2), provided that F is a finite field or R. It therefore follows from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a graph G satisfies ξ_F(G) ≤ k1 or ξ_F(G) ≥ k2 for all integers k2 > k1 ≥ 3. This reasoning, however, does not yield NP-hardness results for the orthogonality dimension (without additional complexity assumptions), even using the strongest known NP-hardness results for the chromatic number. Yet, a result of Peeters [35] implies that for every field F, it is NP-hard to decide if a given graph G satisfies ξ_F(G) ≤ 3, hence it is NP-hard to approximate the orthogonality dimension of a graph over F to within any factor smaller than 4/3. Over the reals, the hardness of approximation for the orthogonality dimension was recently extended in [16] to any factor smaller than 3/2.
Another algebraic quantity of graphs is the minrank parameter, which was introduced in 1981 by Haemers [19] in the study of the Shannon capacity of graphs. The minrank parameter was used in [18, 19] to answer questions of Lovász [32] and was later applied by Alon [1], with a different formulation, to disprove a conjecture of Shannon [38]. The minrank of a graph G over a field F, denoted by minrk_F(G), is closely related to the orthogonality dimension of the complement graph over F and satisfies minrk_F(G) ≤ ξ_F(G). The difference between the two quantities comes, roughly speaking, from the fact that the definition of minrank involves the notion of orthogonal bi-representations rather than orthogonal representations (for the precise definitions, see Section 2.1). The study of the minrank parameter is motivated by various applications in information theory and in theoretical computer science. A prominent one is the well-studied index coding problem, for which the minrank parameter perfectly characterizes the optimal length of its linear solutions, as was shown by Bar-Yossef, Birk, Jayram, and Kol [3] (see Section 2.2).

Similarly to the situation of the orthogonality dimension, it was proved in [35] that for every field F, it is NP-hard to decide if a given graph G satisfies minrk_F(G) ≤ 3. It was further shown by Dau, Skachek, and Chee [8] that it is NP-hard to decide whether a given digraph G satisfies minrk_{F_2}(G) ≤ 2. Note that for (undirected) graphs, the minrank over any field is at most 2 if and only if the complement graph is bipartite, a property that can be checked in polynomial time. Motivated by the computational aspects of the index coding problem, Langberg and Sprintson [30] related the minrank of a graph to the chromatic number of its complement and derived from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a given graph G satisfies minrk_F(G) ≤ k1 or minrk_F(G) ≥ k2, provided that k2 > k1 ≥ 3 and that F is a finite field. Similar hardness results were obtained in [30] for additional settings of the index coding problem, including the general (non-linear) index coding problem over a constant-size alphabet.
1.1 Our Contribution

This paper provides improved NP-hardness of approximation results for the orthogonality dimension and for the minrank parameter over various fields. We start with the following result, which is concerned with the orthogonality dimension over the reals.

Theorem 1.1. There exists a function f : N → N satisfying f(k) = 2^((1−o(1))·k/2) such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies ξ_R(G) ≤ k or ξ_R(G) ≥ f(k).

Theorem 1.1 implies that it is NP-hard to approximate the orthogonality dimension of a graph over the reals to within any constant factor. Previously, such an NP-hardness result was known to hold only for approximation factors smaller than 3/2 [16].

We proceed with the following result, which is concerned with the orthogonality dimension and the minrank parameter over finite fields.

Theorem 1.2. For every finite field F, there exists a function f : N → N satisfying f(k) = 2^((1−o(1))·k/2) such that for every sufficiently large integer k, the following holds.
1. It is NP-hard to decide whether a given graph G satisfies ξ_F(G) ≤ k or ξ_F(G) ≥ f(k).
2. It is NP-hard to decide whether a given graph G satisfies minrk_F(G) ≤ k or minrk_F(G) ≥ f(k).

Theorem 1.2 implies that over any finite field, it is NP-hard to approximate the orthogonality dimension and the minrank of a graph to within any constant factor. Let us stress that this hardness result relies solely on the assumption P ≠ NP rather than on stronger complexity assumptions and thus settles a question raised in [30]. Prior to this work, it was known that it is NP-hard to approximate the minrank of graphs to within any factor smaller than 4/3 [35] and the minrank of digraphs over F_2 to within any factor smaller than 3/2 [8].

A central component of the proofs of Theorems 1.1 and 1.2 is the notion of line digraphs, introduced in [20], which was first used in the context of hardness of approximation by Wrochna and Živný [40] (see also [17]). It was shown in [21, 36] that the chromatic number of any graph is exponential in the chromatic number of its line digraph. This result was iteratively applied by the authors of [40] to improve the NP-hardness of the chromatic number from the k vs. 2^(k^(1/3)) gap of [24] to their k vs. (k choose ⌊k/2⌋) gap. The main technical contribution of the present work lies in analyzing the orthogonality dimension of line digraphs and the minrank parameter of their complement. We actually show that on line digraphs, these graph parameters are quadratically related to the chromatic number (see Theorems 3.5, 3.7, and 3.13). This allows us to derive our hardness results from the hardness of the chromatic number given in [40], where the obtained gaps are only quadratically weaker. We further discuss some limitations of our approach, involving an analogue of Sperner's theorem for subspaces due to Kalai [25].
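The line digraph construction used above is simple to state: the vertices of the line digraph of G are the arcs of G, and there is an arc between two of them exactly when they are consecutive in G. A minimal Python sketch of this standard construction (ours, for illustration only) is:

```python
def line_digraph(arcs):
    """Line digraph of a digraph given by its arc list: each arc of G becomes
    a vertex, and there is an arc from (u, v) to (x, w) exactly when v == x,
    i.e., when the two arcs of G are consecutive."""
    verts = list(arcs)
    new_arcs = [((u, v), (x, w))
                for (u, v) in verts for (x, w) in verts if v == x]
    return verts, new_arcs

# The directed triangle 0 -> 1 -> 2 -> 0 has three arcs, each followed by
# exactly one other arc, so its line digraph is again a directed triangle.
tri = [(0, 1), (1, 2), (2, 0)]
verts, arcs = line_digraph(tri)
assert len(verts) == 3 and len(arcs) == 3
```

Iterating this map, as done in [40], keeps the instance polynomial in size while the chromatic number of the underlying graph drops roughly logarithmically at each step.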
We finally show that our approach might be useful for proving hardness results for the general (non-linear) index coding problem over a constant-size alphabet, for which no NP-hardness result is currently known. It was shown by Langberg and Sprintson [30] that for an instance of the index coding problem represented by a graph G, the length of an optimal solution is at most χ(G) and at least Ω(log log χ(G)). It thus follows that an NP-hardness result for the chromatic number with a double-exponential gap would imply an NP-hardness result for the general index coding problem. However, no such NP-hardness result is currently known for the chromatic number without relying on further complexity assumptions. To tackle this issue, we study the index coding problem on instances that are complements of line digraphs (see Theorem 3.17). As a consequence of our results, we obtain that the NP-hardness of the general index coding problem can be derived from an NP-hardness result for the chromatic number with only a single-exponential gap, not that far from the best known gap given in [40]. For a precise statement, see Theorem 4.7.
1.2 Related Work

We gather here several related results from the literature.

• A result of Zuckerman [41] asserts that for any ε > 0, it is NP-hard to approximate the chromatic number of a graph on n vertices to within a factor of n^(1−ε). It would be interesting to figure out whether such a hardness result holds for the orthogonality dimension and for the minrank parameter. The present paper, however, focuses on the hardness of gap problems with constant thresholds, independent of the number of vertices.

• As mentioned earlier, Peeters [35] proved that for every field F, it is NP-hard to decide if the minrank (or the orthogonality dimension) of a given graph is at most 3. We note that for finite fields, this can also be derived from a result of Hell and Nešetřil [23].

• For the chromatic number of hypergraphs, the gaps for which NP-hardness is known to hold are much stronger than for graphs. For example, it was shown in [5] that for some δ > 0, it is NP-hard to decide if a given 4-uniform hypergraph G on n vertices satisfies χ(G) ≤ 2 or χ(G) ≥ log^δ n. An analogous result for the orthogonality dimension of hypergraphs over R was proved in [22].

• On the algorithmic side, a long line of work has explored the number of colors that an efficient algorithm needs for properly coloring a given k-colorable graph, where k ≥ 3 is a fixed constant. For example, there exists a polynomial-time algorithm that on a given 3-colorable graph with n vertices uses O(n^0.19996) colors [27]. Algorithms of this nature exist for the graph parameters studied in this work as well. Indeed, there exists a polynomial-time algorithm that given a graph G on n vertices with ξ_R(G) ≤ 3 finds a proper coloring of G with O(n^0.2413) colors [22]. Further, there exists a polynomial-time algorithm that given a graph G on n vertices with minrk_{F_2}(G) ≤ 3 finds a proper coloring of G with O(n^0.2574) colors [7]. Note that the colorings obtained by these two algorithms provide, respectively, orthogonal and bi-orthogonal representations for the input graph G (see Claim 2.6).
1.3 Outline

The rest of the paper is organized as follows. In Section 2, we collect several definitions and results that will be used throughout this paper. In Section 3, we study the underlying graphs of line digraphs and their behavior with respect to the orthogonality dimension, the minrank parameter, and the index coding problem. We also discuss there some limitations of our approach, given in Sections 3.1.2 and 3.2.1. Finally, in Section 4, we prove our hardness results and complete the proofs of Theorems 1.1 and 1.2.

2 Preliminaries

Throughout the paper, undirected graphs are referred to as graphs, and directed graphs are referred to as digraphs. All the considered graphs and digraphs are simple, and all the logarithms are in base 2 unless otherwise specified. For an integer n, we use the notation [n] = {1, 2, . . . , n}.
2.1 Orthogonality Dimension and Minrank

The orthogonality dimension of a graph is defined as follows (see, e.g., [33, Chapter 11]).

Definition 2.1 (Orthogonality Dimension). A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector u_v ∈ F^k with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V, such that ⟨u_v, u_v′⟩ = 0 whenever v and v′ are adjacent vertices in G. Here, for two vectors x, y ∈ F^k, we let ⟨x, y⟩ = ∑_{i=1}^k x_i y_i denote the standard inner product of x and y over F. The orthogonality dimension of a graph G over a field F, denoted by ξ_F(G), is the smallest integer k for which there exists a k-dimensional orthogonal representation of G over F.

Remark 2.2. We note that orthogonal representations are sometimes defined in the literature such that the vectors associated with non-adjacent vertices are required to be orthogonal, that is, as orthogonal representations of the complement graph. While we find it more convenient to use the other definition in this paper, one can view the notation ξ_F(G) as standing for ξ_F(Ḡ), i.e., the orthogonality dimension of the complement graph. The same holds for the notion of orthogonal bi-representations, given in Definition 2.4.
The minrank parameter, introduced in [19], is defined as follows.

Definition 2.3 (Minrank). Let G = (V, E) be a digraph on the vertex set V = [n], and let F be a field. We say that a matrix M ∈ F^(n×n) represents G if M_{i,i} ≠ 0 for every i ∈ V, and M_{i,j} = 0 for every two distinct vertices i, j ∈ V such that (i, j) ∉ E. The minrank of G over F is defined as

minrk_F(G) = min{ rank_F(M) | M represents G over F }.

The definition is naturally extended to graphs by replacing every edge with two oppositely directed edges.
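For tiny graphs, Definition 2.3 can be checked by exhaustive search. The sketch below (ours, not from the paper) computes the minrank of a graph over F_2, where the only nonzero diagonal value is 1 and the only free entries are those lying on edges; it is exponential in the number of edges and meant purely as an illustration.

```python
from itertools import product

def rank_f2(M):
    """Rank of a 0/1 matrix over the two-element field F_2 (Gaussian elimination)."""
    M = [row[:] for row in M]
    rank, n = 0, len(M)
    for col in range(n):
        pivot = next((r for r in range(rank, n) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(n):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def minrank_f2(n, edges):
    """Brute-force minrank over F_2 per Definition 2.3: minimize the rank over
    all matrices with nonzero diagonal whose off-diagonal support lies on edges."""
    arcs = sorted(set(edges) | {(j, i) for (i, j) in edges})  # undirected graph
    best = n
    for bits in product([0, 1], repeat=len(arcs)):
        M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), b in zip(arcs, bits):
            M[i][j] = b
        best = min(best, rank_f2(M))
    return best

# The complete graph K_3 is represented by the all-ones matrix, of rank 1,
# while the empty graph on 3 vertices forces a full-rank diagonal matrix.
assert minrank_f2(3, [(0, 1), (1, 2), (0, 2)]) == 1
assert minrank_f2(3, []) == 3
```

The two assertions reflect the general facts that the minrank of a complete graph is 1 and the minrank of an edgeless graph on n vertices is n.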
We next describe an alternative definition due to Peeters [35] for the minrank of graphs. This requires the following extension of orthogonal representations, called orthogonal bi-representations.

Definition 2.4. A k-dimensional orthogonal bi-representation of a graph G = (V, E) over a field F is an assignment of a pair of vectors (u_v, w_v) ∈ F^k × F^k with ⟨u_v, w_v⟩ ≠ 0 to each vertex v ∈ V, such that ⟨u_v, w_v′⟩ = ⟨u_v′, w_v⟩ = 0 whenever v and v′ are adjacent vertices in G.

The following proposition follows directly from Definitions 2.3 and 2.4, combined with the fact that for every matrix M ∈ F^(n×n), rank_F(M) is the smallest integer k for which M can be written as M = M_1^T · M_2 for two matrices M_1, M_2 ∈ F^(k×n).

Proposition 2.5 ([35]). For every field F and for every graph G, minrk_F(G) is the smallest integer k for which there exists a k-dimensional orthogonal bi-representation of G over F.
230 |
+
The following claim summarizes some known relations between the studied graph parameters. We provide a quick proof for completeness.

Claim 2.6. For every field F and for every graph G, it holds that

minrk_F(G) ≤ ξ_F(G) ≤ χ(G).

In addition, if F is finite, then

minrk_F(G) ≥ log_{|F|} χ(G).
Proof: The inequality minrk_F(G) ≤ ξ_F(G) follows by combining Proposition 2.5 with the fact that a k-dimensional orthogonal representation of G over F induces a k-dimensional orthogonal bi-representation of G over F with two identical vectors for every vertex.
For the inequality ξ_F(G) ≤ χ(G), observe that any proper coloring of G with k colors induces a k-dimensional orthogonal representation of G over any field F, by assigning the ith vector of the standard basis of F^k to each vertex colored by the ith color.
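To make the coloring-to-representation step concrete, here is a small sanity check (an illustration, not part of the paper): a proper 3-coloring of the 5-cycle yields a 3-dimensional orthogonal representation over the reals via the assignment described above. The graph, coloring, and helper names are ours.

```python
# Illustrative check of the coloring-to-representation construction on C5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}  # a proper 3-coloring of the 5-cycle

def standard_basis(i, k):
    return [1 if j == i else 0 for j in range(k)]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

k = 3
rep = {v: standard_basis(coloring[v], k) for v in coloring}

# Every vector is non-self-orthogonal, and adjacent vertices receive
# orthogonal vectors, as required by Definition 2.1.
assert all(inner(rep[v], rep[v]) != 0 for v in rep)
assert all(inner(rep[x], rep[y]) == 0 for x, y in edges)
print("3-dimensional orthogonal representation of C5 verified")
```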
Next, assuming that F is finite, we show that minrk_F(G) ≥ log_{|F|} χ(G). To this end, denote k = minrk_F(G), and apply Proposition 2.5 to obtain that there exists a k-dimensional orthogonal bi-representation of G over F that assigns a pair (u_v, w_v) ∈ F^k × F^k to each vertex v of G. For every two adjacent vertices v and v′ in G, the vectors u_v and u_{v′} are distinct, because ⟨u_v, w_{v′}⟩ = 0 whereas ⟨u_{v′}, w_{v′}⟩ ≠ 0. This implies that G admits a proper coloring with at most |F|^k colors, completing the proof.
We finally recall that a homomorphism from a graph G_1 = (V_1, E_1) to a graph G_2 = (V_2, E_2) is a function g : V_1 → V_2 such that for every two vertices x, y ∈ V_1 with {x, y} ∈ E_1, it holds that {g(x), g(y)} ∈ E_2. Observe that if there exists a homomorphism from G_1 to G_2 then we have χ(G_1) ≤ χ(G_2), and for every field F, ξ_F(G_1) ≤ ξ_F(G_2) and minrk_F(G_1) ≤ minrk_F(G_2).
2.2 Index Coding

The index coding problem, introduced in [3], is concerned with economical strategies for broadcasting information to n receivers in a way that enables each of them to retrieve its own message, a symbol from some given alphabet Σ. For this purpose, each receiver is allowed to use some prior side information that consists of a subset of the messages required by the other receivers. The side information map is naturally represented by a digraph on [n], which includes an edge (i, j) if the ith receiver knows the message required by the jth receiver. The objective is to minimize the length of the transmitted information. For simplicity, we consider here the case of symmetric side information maps, represented by graphs rather than by digraphs. The formal definition follows.
Definition 2.7 (Index Coding). Let G be a graph on the vertex set [n], and let Σ be an alphabet. An index code for G over Σ of length k is an encoding function E : Σ^n → Σ^k such that for every i ∈ [n], there exists a decoding function g_i : Σ^{k+|N_G(i)|} → Σ such that for every x ∈ Σ^n, it holds that g_i(E(x), x|_{N_G(i)}) = x_i. Here, N_G(i) stands for the set of vertices in G adjacent to the vertex i, and x|_{N_G(i)} stands for the restriction of x to the indices of N_G(i). If Σ is a field F and the encoding function E is linear over F, then we say that the index code is linear over F.
Bar-Yossef et al. [3] showed that the minrank parameter characterizes the length of optimal solutions to the index coding problem in the linear setting.

Proposition 2.8 ([3]). For every field F and for every graph G, the minimal length of a linear index code for G over F is minrk_F(G).
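As a concrete illustration (our example, not from the text): for the complete graph K_n, each receiver knows all the other messages, and broadcasting the single bit ∑_i x_i over F_2 is a linear index code of length 1. This matches the fact that the all-ones matrix represents K_n and has rank 1, so minrk_{F_2}(K_n) = 1. A minimal sketch, with helper names of our own:

```python
from itertools import product

n = 4  # complete graph K4: receiver i knows every x_j with j != i

def encode(x):           # broadcast one bit: the XOR of all messages
    return sum(x) % 2

def decode(i, y, side):  # receiver i XORs out its n-1 known messages
    return (y - sum(side)) % 2

# Exhaustively verify correctness over all message vectors in {0,1}^4.
for x in product([0, 1], repeat=n):
    y = encode(x)
    for i in range(n):
        side = [x[j] for j in range(n) if j != i]
        assert decode(i, y, side) == x[i]
print("length-1 linear index code for K4 over F_2 verified")
```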
3 Line Digraphs

In 1960, Harary and Norman [20] introduced the concept of line digraphs, defined as follows.

Definition 3.1 (Line Digraph). For a digraph G = (V, E), the line digraph of G, denoted by δG, is the digraph on the vertex set E that includes a directed edge from a vertex (x, y) to a vertex (z, w) whenever y = z.
Definition 3.1 is naturally extended to graphs G by replacing every edge of G with two oppositely directed edges. Note that in this case, the number of vertices in δG is twice the number of edges in G. We will frequently consider the underlying graph of the digraph δG, i.e., the graph obtained from δG by ignoring the directions of the edges.
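A minimal sketch of Definition 3.1 for graphs (helper names are ours): build δG for the 4-cycle and check that its number of vertices is twice the number of edges of G, as noted above.

```python
def line_digraph(graph_edges):
    """Line digraph of a graph: each edge becomes two oppositely directed
    edges, and there is an arc (x, y) -> (z, w) whenever y == z."""
    verts = [(x, y) for x, y in graph_edges] + [(y, x) for x, y in graph_edges]
    arcs = [(e, f) for e in verts for f in verts if e[1] == f[0]]
    return verts, arcs

# The 4-cycle as an undirected graph.
G_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
verts, arcs = line_digraph(G_edges)
assert len(verts) == 2 * len(G_edges)  # |V(delta G)| = 2|E(G)|
print(len(verts), "vertices in delta G,", len(arcs), "arcs")
```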
The following result of Poljak and Rödl [36], which strengthens a previous result of Harner and Entringer [21], shows that the chromatic number of a graph G precisely determines the chromatic number of the underlying graph of δG. The statement of the result uses the function b : N → N defined by b(n) = (n choose ⌊n/2⌋).

Theorem 3.2 ([21, 36]). Let G be a graph, and let H be the underlying graph of the digraph δG. Then,

χ(H) = min{n | χ(G) ≤ b(n)}.
Using the fact that b(n) ∼ 2^n/√(πn/2), Theorem 3.2 implies that the chromatic number of G is exponential in the chromatic number of H. Our goal in this section is to relate the chromatic number of G to other graph parameters of H, namely, the orthogonality dimension, the minrank of the complement, and the optimal length of an index code for the complement.
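The asymptotics of b can be checked numerically (a quick illustration, not from the paper): the ratio b(n)·√(πn/2)/2^n approaches 1 as n grows.

```python
from math import comb, pi, sqrt

def b(n):
    # b(n) = (n choose floor(n/2)), the central binomial coefficient
    return comb(n, n // 2)

# b(n) ~ 2^n / sqrt(pi*n/2): the ratio tends to 1 as n tends to infinity.
for n in [10, 100, 1000]:
    approx = 2**n / sqrt(pi * n / 2)
    print(n, b(n) / approx)
```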
3.1 Orthogonality Dimension

For a field F, an integer n, and a subspace U of F^n, we denote by U^⊥ the subspace of F^n that consists of the vectors that are orthogonal to U over F, i.e.,

U^⊥ = {w ∈ F^n | ⟨w, u⟩ = 0 for every u ∈ U}.
Consider the following family of graphs.

Definition 3.3. For a field F and an integer n, let S_1(F, n) denote the graph whose vertices are all the subspaces of F^n, where two distinct subspaces U_1 and U_2 are adjacent if there exists a vector w ∈ F^n with ⟨w, w⟩ ≠ 0 that satisfies w ∈ U_1 ∩ U_2^⊥ and, in addition, there exists a vector w′ ∈ F^n with ⟨w′, w′⟩ ≠ 0 that satisfies w′ ∈ U_2 ∩ U_1^⊥.
In words, two subspaces of F^n are adjacent in the graph S_1(F, n) if each of them includes a non-self-orthogonal vector that is orthogonal to the entire other subspace. Note that for an infinite field F and for n ≥ 2, the vertex set of S_1(F, n) is infinite.
We argue that the chromatic number of a graph G can be used to estimate the orthogonality dimension of the underlying graph H of its line digraph δG. First, recall that by Theorem 3.2, the chromatic number of H is logarithmic in χ(G). This implies, using Claim 2.6, that the orthogonality dimension of H over any field is at most logarithmic in χ(G). For a lower bound on the orthogonality dimension of H, we need the following lemma, which involves the chromatic numbers of the graphs S_1(F, n).

Lemma 3.4. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = ξ_F(H). Then, χ(G) ≤ χ(S_1(F, n)).
Proof: Put G = (V_G, E_G) and H = (V_H, E_H). The assumption n = ξ_F(H) implies that there exists an n-dimensional orthogonal representation of H over F, that is, an assignment of a vector u_v ∈ F^n with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V_H, such that ⟨u_v, u_{v′}⟩ = 0 whenever v and v′ are adjacent in H. Recall that the vertices of H, just as the vertices of δG, are the ordered pairs (x, y) of adjacent vertices x, y in G.

For every vertex y ∈ V_G, let U_y denote the subspace spanned by the vectors of the given orthogonal representation that are associated with the vertices of H whose tail is y, namely,

U_y = span({u_v | v = (x, y) for some x ∈ V_G}).

Note that U_y is a subspace of F^n, and thus a vertex of S_1(F, n).

Consider the function that maps every vertex y ∈ V_G of G to the vertex U_y of S_1(F, n). We claim that this function forms a homomorphism from G to S_1(F, n). To see this, let x, y ∈ V_G be adjacent vertices in G, and consider the vector w = u_{(x,y)} assigned by the given orthogonal representation to the vertex (x, y) of H. By the definition of an orthogonal representation, it holds that ⟨w, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that w ∈ U_y. Further, every vertex of H of the form (x′, x) for some x′ ∈ V_G is adjacent in H to (x, y), hence it holds that ⟨u_{(x′,x)}, w⟩ = 0. Since the subspace U_x is spanned by those vectors u_{(x′,x)}, we obtain that w is orthogonal to the entire subspace U_x. It thus follows that the vector w satisfies ⟨w, w⟩ ≠ 0 and w ∈ U_y ∩ U_x^⊥. By symmetry, there also exists a vector w′ ∈ F^n satisfying ⟨w′, w′⟩ ≠ 0 and w′ ∈ U_x ∩ U_y^⊥, hence the subspaces U_x and U_y are adjacent vertices in S_1(F, n). We conclude that the above function is a homomorphism from G to S_1(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S_1(F, n)), as required.
In order to derive useful bounds from Lemma 3.4, we need upper bounds on the chromatic numbers of the graphs S_1(F, n). Every vertex of S_1(F, n) is a subspace of F^n and thus can be represented by a basis that generates it. For a finite field F of size q, the number of possible bases does not exceed q^{n^2}, which obviously yields that χ(S_1(F, n)) ≤ q^{n^2}. While this simple bound suffices for proving our hardness results for the orthogonality dimension over finite fields, we note that the number of vertices in S_1(F, n) is in fact q^{(1+o(1))·n^2/4}, where the o(1) term tends to 0 when n tends to infinity.¹
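The subspace count behind this estimate (the product formula given in footnote 1) can be checked with exact arithmetic for small parameters. An illustrative sketch; the function name is ours:

```python
from fractions import Fraction

def num_subspaces(q, n, k):
    """Number of k-dimensional subspaces of F_q^n:
    the product over i < k of (q^n - q^i) / (q^k - q^i)."""
    prod = Fraction(1)
    for i in range(k):
        prod *= Fraction(q**n - q**i, q**k - q**i)
    return prod

q, n = 2, 6
total = 0
for k in range(n + 1):
    cnt = num_subspaces(q, n, k)
    assert cnt.denominator == 1  # the product is an integer
    # the product of the k factors lies in [q^((n-k-1)k), q^((n-k+1)k)]
    assert q**((n - k - 1) * k) <= cnt <= q**((n - k + 1) * k)
    total += int(cnt)
print("total number of subspaces of F_2^6:", total)
```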
We conclude this discussion with the following theorem.
Theorem 3.5. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that

ξ_F(H) ≥ √(log_q χ(G)).

Proof: Put n = ξ_F(H), and apply Lemma 3.4 to obtain that χ(G) ≤ χ(S_1(F, n)) ≤ q^{n^2}. By rearranging, the proof is completed.
3.1.1 The Chromatic Number of S_1(R, n)

For the real field R and for n ≥ 2, the vertex set of the graph S_1(R, n) is infinite, and yet, its chromatic number is finite. To see this, let us first observe a simple upper bound of 2^{3^n}. To each vertex of S_1(R, n), i.e., a subspace U of R^n, assign the subset of {0, ±1}^n that consists of all the sign vectors of the vectors of U. This assignment forms a proper coloring of the graph, because for adjacent vertices U and V there exists a nonzero vector w ∈ U that is orthogonal to V, hence the sign vector of w belongs to the set of sign vectors of U but does not belong to that of V (because the inner product of two vectors with the same nonzero sign vector is positive). Since the number of subsets of {0, ±1}^n is 2^{3^n}, it follows that χ(S_1(R, n)) ≤ 2^{3^n}.
The above double-exponential bound is not sufficient for deriving NP-hardness of approximation results for the orthogonality dimension over R from the currently known NP-hardness results for the chromatic number. We therefore need the following lemma, which provides an exponentially better bound that is suitable for our purposes. For a vector w ∈ R^n, we use here the notation ∥w∥ = √⟨w, w⟩ for the Euclidean norm of w.
Lemma 3.6. For every integer n, it holds that χ(S_1(R, n)) ≤ (2n + 1)^{n^2}.
Proof: We define a coloring of the vertices of the graph S_1(R, n) as follows. For every vertex of S_1(R, n), i.e., a subspace U of R^n, let (u_1, . . . , u_k) be an arbitrary orthonormal basis of U, where k ≤ n, and assign to U the color c(U) = (u′_1, . . . , u′_k), where u′_i is a vector obtained from u_i by rounding each of its values to a closest integer multiple of 1/n. Note that for every i ∈ [k], the vectors u_i and u′_i differ in every coordinate by no more than 1/(2n) in absolute value.

¹To see this, observe that the number of k-dimensional subspaces of F^n is precisely ∏_{i=0}^{k−1} (q^n − q^i)/(q^k − q^i), and that every term in this product lies in [q^{n−k−1}, q^{n−k+1}]. Hence, the total number of subspaces of F^n is at least ∑_{k=0}^{n} q^{(n−k−1)k} and at most ∑_{k=0}^{n} q^{(n−k+1)k}. It follows that the number of subspaces of F^n is q^{(1+o(1))·n^2/4}.
We claim that c is a proper coloring of S_1(R, n). To see this, let U and V be adjacent vertices in the graph. If dim(U) ≠ dim(V), then it clearly holds that c(U) ≠ c(V). So suppose that the dimensions of U and V are equal, and put k = dim(U) = dim(V). Denote the orthonormal bases associated with U and V by (u_1, . . . , u_k) and (v_1, . . . , v_k) respectively, and let c(U) = (u′_1, . . . , u′_k) and c(V) = (v′_1, . . . , v′_k) be their colors. Our goal is to show that c(U) ≠ c(V).
Assume for the sake of contradiction that c(U) = c(V), that is, u′_i = v′_i for every i ∈ [k]. This implies that for every i ∈ [k], the vectors u_i and v_i differ in each coordinate by no more than 1/n in absolute value, hence

∥u_i − v_i∥ ≤ √(n · 1/n^2) = 1/√n.    (1)

Since U and V are adjacent in the graph S_1(R, n), by scaling, there exists a unit vector u ∈ U ∩ V^⊥. Write u = ∑_{i∈[k]} α_i · u_i for coefficients α_1, . . . , α_k ∈ R. Since the given basis of U is orthonormal, it follows that ∑_{i∈[k]} α_i^2 = ∥u∥^2 = 1. Now, consider the vector v = ∑_{i∈[k]} α_i · v_i, and observe that v is a unit vector that belongs to the subspace V. Observe further that

∥u − v∥ = ∥∑_{i∈[k]} α_i · (u_i − v_i)∥ ≤ ∑_{i∈[k]} |α_i| · ∥u_i − v_i∥ ≤ (∑_{i∈[k]} α_i^2)^{1/2} · (∑_{i∈[k]} ∥u_i − v_i∥^2)^{1/2} ≤ 1,    (2)

where the first inequality follows from the triangle inequality, the second from the Cauchy-Schwarz inequality, and the third from (1) using k ≤ n. However, u and v are orthogonal unit vectors, and as such, the distance between them satisfies ∥u − v∥ = √2. This yields a contradiction to (2), hence c(U) ≠ c(V).
To complete the proof, we observe that the number of colors used by the proper coloring c does not exceed (2n + 1)^{n^2}. Indeed, every color can be represented by an n × n matrix whose values are of the form a/n for integers −n ≤ a ≤ n (where the matrix associated with a subspace of dimension k consists of the k rounded column vectors concatenated with n − k columns of zeros). Since the number of those matrices is bounded by (2n + 1)^{n^2}, we are done.
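Two ingredients of the above argument can be sanity-checked numerically (an illustration with helper names of our own): rounding to the grid of integer multiples of 1/n moves each coordinate by at most 1/(2n), and orthogonal unit vectors lie at distance √2.

```python
import math
import random

n = 5
random.seed(0)

def round_to_grid(vec, n):
    # round each value to a closest integer multiple of 1/n
    return [round(t * n) / n for t in vec]

# Rounding moves every coordinate by at most 1/(2n) in absolute value.
v = [random.uniform(-1, 1) for _ in range(n)]
assert all(abs(a - b) <= 1 / (2 * n) + 1e-12
           for a, b in zip(v, round_to_grid(v, n)))

# Orthogonal unit vectors are at Euclidean distance sqrt(2).
e1 = [1, 0, 0, 0, 0]
e2 = [0, 1, 0, 0, 0]
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))
assert abs(dist - math.sqrt(2)) < 1e-12
print("rounding and distance checks pass")
```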
We derive the following theorem.
Theorem 3.7. There exists a constant c > 0 such that for every graph G with χ(G) ≥ 3, the underlying graph H of the digraph δG satisfies

ξ_R(H) ≥ c · √(log χ(G) / log log χ(G)).

Proof: Put n = ξ_R(H), and combine Lemma 3.4 with Lemma 3.6 to obtain that

χ(G) ≤ χ(S_1(R, n)) ≤ (2n + 1)^{n^2},

which yields the desired bound.
3.1.2 The Clique Number of S_1(F, n)

We next consider the clique numbers of the graphs S_1(F, n), whose estimation is motivated by the following lemma. Here, the clique number of a graph G is denoted by ω(G).

Lemma 3.8. Let F be a field, let G be a graph, and let H be the underlying graph of the digraph δG. If χ(G) ≤ ω(S_1(F, n)), then ξ_F(H) ≤ n.
Proof: Put m = ω(S_1(F, n)), and let U_1, . . . , U_m be m subspaces of F^n that form a clique in S_1(F, n). Put G = (V, E), suppose that χ(G) ≤ m, and let c : V → [m] be a proper coloring of G. Notice that for every two adjacent vertices x, y in G, the subspaces U_{c(x)} and U_{c(y)} are adjacent vertices in S_1(F, n).

We define an n-dimensional orthogonal representation of H over F as follows. Recall that every vertex of H is a pair (x, y) of adjacent vertices x, y in G. Assign to every such vertex (x, y) some non-self-orthogonal vector u_{(x,y)} that lies in U_{c(y)} ∩ U_{c(x)}^⊥. The existence of such a vector follows from the adjacency of the vertices U_{c(x)} and U_{c(y)} in S_1(F, n). We claim that this assignment is an orthogonal representation of H. Indeed, for adjacent vertices (x, y) and (y, z) in H, the vector u_{(x,y)} belongs to U_{c(y)}, whereas the vector u_{(y,z)} is orthogonal to U_{c(y)}, hence they satisfy ⟨u_{(x,y)}, u_{(y,z)}⟩ = 0. Since this orthogonal representation lies in F^n, we establish that ξ_F(H) ≤ n.
For a graph G and for the underlying graph H of its line digraph δG, Theorem 3.2 implies that if χ(G) ≤ (n choose ⌊n/2⌋) then χ(H) ≤ n, and thus, by Claim 2.6, ξ_F(H) ≤ n for every field F. This raises the question of whether Lemma 3.8 can be used to obtain a better upper bound on ξ_F(H) as a function of χ(G). For certain cases, the following result answers this question negatively. Namely, it shows that the clique number of the graphs S_1(F, n) is precisely (n choose ⌊n/2⌋) whenever the vector space F^n has no nonzero self-orthogonal vectors (as in the case of F = R). It thus follows that Lemma 3.8 cannot yield a better relation between the quantities ξ_R(H) and χ(G) than the one stemming from Theorem 3.2.
Proposition 3.9. For a field F and an integer n such that F^n has no nonzero self-orthogonal vectors, it holds that

ω(S_1(F, n)) = (n choose ⌊n/2⌋).

The proof of Proposition 3.9 relies on the following result of Kalai [25] (see also [31]).
Theorem 3.10 ([25]). For a field F and an integer n, let (U_1, W_1), . . . , (U_m, W_m) be m pairs of subspaces of F^n such that

1. U_i ∩ W_i = {0} for every i ∈ [m], and
2. U_i ∩ W_j ≠ {0} for every i ≠ j ∈ [m].

Then, m ≤ (n choose ⌊n/2⌋).
Proof of Proposition 3.9: We first show that there exists a clique in S_1(F, n) of size (n choose ⌊n/2⌋). For every set A ⊆ [n] of size |A| = ⌊n/2⌋, let U_A denote the subspace of F^n spanned by the vectors e_i with i ∈ A, where e_i stands for the vector of F^n with 1 in the ith entry and 0 everywhere else. It clearly holds that for every two distinct such sets A_1, A_2, there exists some i ∈ A_1 \ A_2, and that the vector e_i satisfies ⟨e_i, e_i⟩ = 1 and e_i ∈ U_{A_1} ∩ U_{A_2}^⊥. It thus follows that the (n choose ⌊n/2⌋) subspaces U_A with |A| = ⌊n/2⌋ form a clique in the graph S_1(F, n), as required.
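The clique construction can be verified directly for small n (an illustrative sketch over the reals; helper names are ours): for every pair of distinct ⌊n/2⌋-subsets A_1, A_2 of [n], some standard basis vector e_i with i ∈ A_1 \ A_2 witnesses the adjacency of U_{A_1} and U_{A_2}.

```python
from itertools import combinations

n = 6
half = n // 2

def e(i):
    return [1 if j == i else 0 for j in range(n)]

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

subsets = list(combinations(range(n), half))
for A1, A2 in combinations(subsets, 2):
    i = next(iter(set(A1) - set(A2)))  # some i in A1 \ A2
    w = e(i)
    assert inner(w, w) == 1                       # non-self-orthogonal
    assert all(inner(w, e(j)) == 0 for j in A2)   # orthogonal to U_{A2}
print("clique of size", len(subsets), "verified in S_1(R, 6)")
```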
We next show that the size of every clique in S_1(F, n) does not exceed (n choose ⌊n/2⌋). To see this, let U_1, . . . , U_m be subspaces of F^n that form a clique in S_1(F, n). Consider the pairs (U_i, U_i^⊥) for i ∈ [m], and observe that they satisfy the conditions of Theorem 3.10. Indeed, for every i ∈ [m] it holds that U_i ∩ U_i^⊥ = {0}, because F^n has no nonzero self-orthogonal vectors. Further, since the given collection of subspaces is a clique in S_1(F, n), for every i ≠ j ∈ [m], there exists a vector w ∈ F^n with ⟨w, w⟩ ≠ 0 such that w ∈ U_i ∩ U_j^⊥, hence U_i ∩ U_j^⊥ ≠ {0}. It thus follows from Theorem 3.10 that m ≤ (n choose ⌊n/2⌋), as required.
3.2 Minrank

As in the previous section, we start with a definition of a family of graphs.
Definition 3.11. For a field F and an integer n, let S_2(F, n) denote the graph whose vertices are all the pairs of subspaces of F^n, where two distinct pairs (U_1, W_1) and (U_2, W_2) are adjacent if there exist two vectors u, w ∈ F^n with ⟨u, w⟩ ≠ 0 such that u ∈ U_1 ∩ W_2^⊥ and w ∈ W_1 ∩ U_2^⊥ and, in addition, there exist two vectors u′, w′ ∈ F^n with ⟨u′, w′⟩ ≠ 0 such that u′ ∈ U_2 ∩ W_1^⊥ and w′ ∈ W_2 ∩ U_1^⊥.
We next argue that the chromatic number of a graph G can be used to estimate the minrank of the complement of the underlying graph of its line digraph δG. This is established using the following lemma, which involves the chromatic numbers of the graphs S_2(F, n). Its proof resembles that of Lemma 3.4.

Lemma 3.12. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = minrk_F(H). Then, χ(G) ≤ χ(S_2(F, n)).
Proof: Put G = (V_G, E_G) and H = (V_H, E_H). The assumption n = minrk_F(H) implies, by Proposition 2.5, that there exists an n-dimensional orthogonal bi-representation of H over F, that is, an assignment of a pair of vectors (u_v, w_v) ∈ F^n × F^n with ⟨u_v, w_v⟩ ≠ 0 to each vertex v ∈ V_H, such that ⟨u_v, w_{v′}⟩ = ⟨u_{v′}, w_v⟩ = 0 whenever v and v′ are adjacent in H.

For every vertex y ∈ V_G, let U_y denote the subspace spanned by the vectors u_v of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely,

U_y = span({u_v | v = (x, y) for some x ∈ V_G}).

Similarly, let W_y denote the subspace spanned by the vectors w_v of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely,

W_y = span({w_v | v = (x, y) for some x ∈ V_G}).

Note that U_y and W_y are subspaces of F^n, hence the pair (U_y, W_y) is a vertex of S_2(F, n).

Consider the function that maps every vertex y ∈ V_G of G to the vertex (U_y, W_y) of S_2(F, n). We claim that this function forms a homomorphism from G to S_2(F, n). To see this, let x, y ∈ V_G be adjacent vertices in G, and consider the vectors u = u_{(x,y)} and w = w_{(x,y)} assigned by the given orthogonal bi-representation to the vertex (x, y) of H. By the definition of an orthogonal bi-representation, it holds that ⟨u, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that u ∈ U_y and w ∈ W_y. Further, every vertex of H of the form (x′, x) for some x′ ∈ V_G is adjacent in H to (x, y), hence it satisfies ⟨u_{(x′,x)}, w⟩ = ⟨u, w_{(x′,x)}⟩ = 0. Since the subspaces U_x and W_x are spanned, respectively, by those vectors u_{(x′,x)} and w_{(x′,x)}, we obtain that u is orthogonal to the subspace W_x and that w is orthogonal to the subspace U_x. It thus follows that the vectors u and w satisfy ⟨u, w⟩ ≠ 0, u ∈ U_y ∩ W_x^⊥, and w ∈ W_y ∩ U_x^⊥. By symmetry, there also exist vectors u′, w′ ∈ F^n satisfying ⟨u′, w′⟩ ≠ 0, u′ ∈ U_x ∩ W_y^⊥, and w′ ∈ W_x ∩ U_y^⊥, hence the pairs (U_x, W_x) and (U_y, W_y) are adjacent vertices in S_2(F, n). We conclude that the above function is a homomorphism from G to S_2(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S_2(F, n)), as required.
We derive the following theorem.
Theorem 3.13. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that

minrk_F(H) ≥ √((1/2) · log_q χ(G)).

Proof: Put n = minrk_F(H), and apply Lemma 3.12 to obtain that

χ(G) ≤ χ(S_2(F, n)) ≤ q^{2n^2},

where the second inequality holds because the number of vertices in S_2(F, n) does not exceed q^{2n^2}. By rearranging, the proof is completed.
3.2.1 The Chromatic Number of S_2(R, n)

We next consider the problem of determining the chromatic numbers of the graphs S_2(R, n). The following theorem shows that these graphs cannot be properly colored using a finite number of colors, in contrast to the graphs S_1(R, n) addressed in Lemma 3.6.

Theorem 3.14. For every integer n ≥ 3, it holds that χ(S_2(R, n)) = ∞.
Before proving Theorem 3.14, let us describe a significant difference between the behavior of ξ_R(G) and of minrk_R(G) with respect to the chromatic number χ(G). It is not difficult to see that the chromatic number of a graph G is bounded from above by some function of ξ_R(G). Indeed, given a k-dimensional orthogonal representation of a graph G over R, one can assign to each vertex the sign vector in {0, ±1}^k of its vector, obtaining a proper coloring of G with at most 3^k colors. This implies that every graph G satisfies χ(G) ≤ 3^{ξ_R(G)} (see also [33, Chapter 11]). On the other hand, the chromatic number of a graph G cannot be bounded from above by any function of minrk_R(G), as proved below.

Theorem 3.15. For every integer m, there exists a graph G such that minrk_R(G) ≤ 3 and yet χ(G) ≥ m.
Proof: For an integer n > 6, consider the 'double shift graph' G_n defined as follows. Its vertices are all the 3-subsets of [n], where two sets {x_1, x_2, x_3} and {y_1, y_2, y_3} with x_1 < x_2 < x_3 and y_1 < y_2 < y_3 are adjacent in G_n if either (x_2, x_3) = (y_1, y_2) or (x_1, x_2) = (y_2, y_3). It was shown in [13] that the graph G_n satisfies χ(G_n) = (1 + o(1)) · log log n (see also [14]), whereas its local chromatic number, a concept introduced by Erdős et al. [12], is known to be 3. By an argument of Shanmugam, Dimakis, and Langberg [37, Theorem 1], this implies that minrk_R(G_n) ≤ 3 (see also [2, Proposition 6.5]). This completes the proof.
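The double shift graph is easy to construct for small n (an illustrative sketch; the adjacency test transcribes the definition above, and all names are ours):

```python
from itertools import combinations

def double_shift_graph(n):
    # Vertices: 3-subsets of [n], stored as increasing triples.
    verts = list(combinations(range(1, n + 1), 3))
    def adj(a, b):
        # (x2, x3) = (y1, y2) or (x1, x2) = (y2, y3)
        return (a[1], a[2]) == (b[0], b[1]) or (a[0], a[1]) == (b[1], b[2])
    edges = [(a, b) for a, b in combinations(verts, 2) if adj(a, b)]
    return verts, edges

verts, edges = double_shift_graph(8)
assert len(verts) == 56                  # (8 choose 3) vertices
assert ((1, 2, 3), (2, 3, 4)) in edges   # (x2, x3) = (y1, y2)
assert ((1, 2, 3), (3, 4, 5)) not in edges
print(len(verts), "vertices,", len(edges), "edges")
```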
We are ready to derive Theorem 3.14.
Proof of Theorem 3.14: It clearly suffices to prove the assertion of the theorem for n = 3. Let F denote the subgraph of S_2(R, 3) induced by the pairs (U, W) of subspaces of R^3 satisfying dim(U) = dim(W) = 1. By Proposition 2.5, for every graph G with minrk_R(G) ≤ 3, there exists a homomorphism from G to F, and thus χ(G) ≤ χ(F). By Theorem 3.15, the chromatic number of a graph G with minrk_R(G) ≤ 3 can be arbitrarily large, hence χ(F) = ∞. Since F is a subgraph of S_2(R, 3), this yields that χ(S_2(R, 3)) = ∞, as required.
3.3 Index Coding

In this section, we study the optimal length of (not necessarily linear) index codes for the complement of underlying graphs of line digraphs. Recall Definition 2.7.

We start by presenting an argument of Langberg and Sprintson [30, Theorem 4(a)] that relates the chromatic number of a graph to the length of an index code for its complement. In fact, we slightly modify their argument to obtain the improved bound stated below (with 2^{|Σ|^k} rather than |Σ|^{|Σ|^k} in the statement of the result).
Proposition 3.16. Let Σ be an alphabet of size at least 2, and let G be a graph. If there exists an index code for Ḡ over Σ of length k, then χ(G) ≤ 2^{|Σ|^k}.
Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (V, E) and n = |V|. Suppose that there exists an index code for Ḡ over Σ of length k, and let E : Σ^n → Σ^k and g_i : Σ^{k+|N_Ḡ(i)|} → Σ for i ∈ V denote the corresponding encoding and decoding functions.

For every vertex i ∈ V, we define a function h_i : Σ^k → {0, 1} that determines, for a given encoded message y ∈ Σ^k, whether g_i returns 0 on y when all the symbols of the side information of the ith receiver are zeros. Formally speaking, for every y ∈ Σ^k, we define h_i(y) = 0 if g_i(y, 0, . . . , 0) = 0, and h_i(y) = 1 otherwise.

We claim that the assignment of the function h_i to each vertex i ∈ V forms a proper coloring of G. To see this, let i and j be adjacent vertices in G. Let x ∈ Σ^n denote the vector with 1 in the ith entry and 0 everywhere else, and put y = E(x). By the correctness of the decoding functions, it follows that g_i(y, x|_{N_Ḡ(i)}) = x_i = 1 whereas g_j(y, x|_{N_Ḡ(j)}) = x_j = 0. Since i and j are adjacent in G, they are not adjacent in Ḡ, hence all the symbols in the side information x|_{N_Ḡ(i)} of i and in the side information x|_{N_Ḡ(j)} of j are zeros. This implies that g_i(y, 0, . . . , 0) = 1 and g_j(y, 0, . . . , 0) = 0, and therefore h_i(y) = 1 and h_j(y) = 0, which yields that h_i ≠ h_j, as required. Finally, observe that the number of distinct functions h_i : Σ^k → {0, 1} for i ∈ V does not exceed 2^{|Σ|^k}, implying that χ(G) ≤ 2^{|Σ|^k}.
We proceed by proving an analogue of Proposition 3.16 for line digraphs.
Theorem 3.17. Let Σ be an alphabet of size at least 2, let G be a graph, and let H be the underlying graph of the digraph δG. If there exists an index code for H̄ over Σ of length k, then χ(G) ≤ 2^{|Σ|^k}.
Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (V_G, E_G), H = (V_H, E_H), and n = |V_H|. Recall that the vertices of H are the ordered pairs of adjacent vertices in G, hence n = 2 · |E_G|. Suppose that there exists an index code for H̄ over Σ of length k, and let E : Σ^n → Σ^k and g_(u,v) : Σ^{k+|N_H̄(u,v)|} → Σ for (u, v) ∈ V_H denote the corresponding encoding and decoding functions.

For every vertex v ∈ V_G, we define a function h_v : Σ^k → {0, 1} that determines for a given encoded message y ∈ Σ^k whether every function g_(u,v) associated with a vertex (u, v) ∈ V_H returns 0 on y when all the symbols in the side information of the receiver of the vertex (u, v) are zeros. Formally speaking, for every y ∈ Σ^k, we define h_v(y) = 0 if for every u ∈ V_G with (u, v) ∈ V_H, it holds that g_(u,v)(y, 0, . . . , 0) = 0, and h_v(y) = 1 otherwise.

We claim that the assignment of the function h_v to each vertex v ∈ V_G forms a proper coloring of G. To see this, let v_1 and v_2 be adjacent vertices in G, and notice that (v_1, v_2) is a vertex of H. Let x ∈ Σ^n denote the vector with 1 in the entry of (v_1, v_2) and 0 everywhere else, and put y = E(x).

We first claim that h_{v_1}(y) = 0. To see this, consider any vertex (u, v_1) ∈ V_H, and notice that (u, v_1) and (v_1, v_2) are adjacent in H and are thus not adjacent in H̄. By the correctness of the decoding function g_(u,v_1), it follows that g_(u,v_1)(y, x|_{N_H̄(u,v_1)}) = x_(u,v_1) = 0. Since (u, v_1) and (v_1, v_2) are not adjacent in H̄, all the symbols in the side information x|_{N_H̄(u,v_1)} of the vertex (u, v_1) are zeros. We thus obtain that for every vertex u ∈ V_G with (u, v_1) ∈ V_H, it holds that g_(u,v_1)(y, 0, . . . , 0) = 0. By the definition of h_{v_1}, it follows that h_{v_1}(y) = 0, as required.

We next claim that h_{v_2}(y) = 1. To see this, observe that by the correctness of the decoding function g_(v_1,v_2), it follows that g_(v_1,v_2)(y, x|_{N_H̄(v_1,v_2)}) = x_(v_1,v_2) = 1. It further holds that all the symbols in the side information x|_{N_H̄(v_1,v_2)} of the vertex (v_1, v_2) are zeros. By the definition of h_{v_2}, it follows that h_{v_2}(y) = 1, as required.

We obtain that every two adjacent vertices v_1 and v_2 in G satisfy h_{v_1} ≠ h_{v_2}. Since the number of functions h_v : Σ^k → {0, 1} for v ∈ V_G does not exceed 2^{|Σ|^k}, it follows that χ(G) ≤ 2^{|Σ|^k}, and we are done.
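The proper coloring extracted in this proof can be sketched programmatically. This is a toy illustration, not part of the paper: `decoders` is a hypothetical map sending each vertex (u, v) of H to its decoding function, here already restricted to all-zero side information, and the color of a vertex v of G is the truth table of h_v over all y ∈ Σ^k.

```python
from itertools import product

def coloring_from_decoders(G_vertices, H_vertices, decoders, Sigma, k):
    """Assign to every vertex v of G the truth table of h_v.

    h_v(y) = 0 iff every decoder g_(u, v) with (u, v) in V_H outputs 0
    on (y, 0, ..., 0).  The proof shows that adjacent vertices of G get
    different tables, so the tables form a proper coloring of G with
    at most 2^(|Sigma|^k) colors.
    """
    colors = {}
    for v in G_vertices:
        table = []
        for y in product(Sigma, repeat=k):
            outputs = [decoders[(u, w)](y)
                       for (u, w) in H_vertices if w == v]
            table.append(0 if all(o == 0 for o in outputs) else 1)
        colors[v] = tuple(table)
    return colors
```

The returned tables are used only for comparison between adjacent vertices, exactly as in the proof.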
4 Hardness Results
In this section, we prove our hardness results for the orthogonality dimension and for minrank. We also suggest a potential avenue for proving hardness results for the general index coding problem over a constant-size alphabet.

The starting point of our hardness proofs is the following theorem of Wrochna and Živný [40]. Recall that the function b : N → N is defined by b(n) = (n choose ⌊n/2⌋).
Theorem 4.1 ([40]). For every integer k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ b(k).
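For intuition on the gap in Theorem 4.1, the following minimal sketch (not from the paper) computes b and illustrates its Θ(2^n/√n) growth, which is used repeatedly below:

```python
from math import comb, sqrt

def b(n: int) -> int:
    # b(n) = (n choose floor(n/2)), the central binomial coefficient
    return comb(n, n // 2)

# b(n) = Theta(2^n / sqrt(n)): the normalized ratio below settles
# near sqrt(2/pi) ~ 0.798 as n grows
for n in (4, 16, 64, 256):
    print(n, b(n), b(n) * sqrt(n) / 2 ** n)
```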
Our hardness results for the orthogonality dimension and the minrank parameter over finite fields are given by the following theorem, which confirms Theorem 1.2.
Theorem 4.2. There exists a function f : N → N satisfying f(k) = (1 − o(1)) · √b(k) such that for every finite field F and for every sufficiently large integer k, the following holds.

1. It is NP-hard to decide whether a given graph G satisfies ξ_F(G) ≤ k or ξ_F(G) ≥ (1/√log |F|) · f(k).

2. It is NP-hard to decide whether a given graph G satisfies minrk_F(G) ≤ k or minrk_F(G) ≥ (1/√(2 · log |F|)) · f(k).
Proof: Fix a finite field F of size q. We start by proving the first item of the theorem. For an integer k ≥ 4, consider the problem of deciding whether a given graph G satisfies

χ(G) ≤ b(k) or χ(G) ≥ b(b(k)),
whose NP-hardness follows from Theorem 4.1. To obtain our hardness result on the orthogonality dimension over F, we reduce from this problem. Consider the reduction that, given an input graph G, produces and outputs the underlying graph H of the digraph δG. This reduction can clearly be implemented in polynomial time (in fact, in logarithmic space).
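For concreteness, the reduction can be sketched in a few lines. This is an illustrative implementation under the assumption that δG is the standard line digraph of (the symmetric orientation of) G in the sense of Harary and Norman [20], with an arc from (u, v) to (v, w); only this assumption, not the paper's exact definition, is encoded below.

```python
def reduction(edges):
    """Given the edge set of G, return the vertex set and edge set of
    the underlying graph H of the line digraph delta(G).

    Vertices of H are the ordered pairs of adjacent vertices of G
    (so |V_H| = 2 |E_G|); an undirected edge joins two pairs whenever
    an arc runs between them in delta(G) in either direction.
    """
    verts = set()
    for u, v in edges:
        verts.add((u, v))
        verts.add((v, u))
    und_edges = set()
    for (a, b) in verts:
        for (c, d) in verts:
            if b == c and (a, b) != (c, d):  # arc (a, b) -> (c, d)
                und_edges.add(frozenset([(a, b), (c, d)]))
    return verts, und_edges
```

The double loop over the 2|E_G| pairs makes the polynomial (indeed near-quadratic) running time of the reduction explicit.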
To prove the correctness of the reduction, we analyze the orthogonality dimension of H over F. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that

ξ_F(H) ≤ χ(H) ≤ k.
If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.5, it follows that

ξ_F(H) ≥ √(log_q χ(G)) ≥ √(log_q b(b(k))) = ((1 − o(1))/√log q) · √b(k),

where the o(1) term tends to 0 when k tends to infinity. Note that we have used here the fact that b(n) = Θ(2^n/√n). By letting k be any sufficiently large integer, the proof of the first item of the theorem is completed.
The proof of the second item of the theorem is similar. To avoid repetitions, we briefly mention the needed changes in the proof. First, to obtain a hardness result for the minrank parameter, the reduction has to output the complement H̄ of the graph H rather than H itself. Second, in the analysis of the NO instances, one has to apply Theorem 3.13 instead of Theorem 3.5 to obtain that

minrk_F(H̄) ≥ √((1/2) · log_q χ(G)) ≥ √((1/2) · log_q b(b(k))) = ((1 − o(1))/√(2 · log q)) · √b(k).

This completes the proof of the theorem.
As an immediate corollary of Theorem 4.2, we obtain the following.

Corollary 4.3. For every finite field F, the following holds.

1. It is NP-hard to approximate ξ_F(G) for a given graph G to within any constant factor.

2. It is NP-hard to approximate minrk_F(G) for a given graph G to within any constant factor.
We next prove a hardness result for the orthogonality dimension over the reals, confirming Theorem 1.1.

Theorem 4.4. There exists a function f : N → N satisfying f(k) = Θ(√(b(k)/k)) such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies

ξ_R(G) ≤ k or ξ_R(G) ≥ f(k).
Proof: As in the proof of Theorem 4.2, for an integer k ≥ 4, we reduce from the problem of deciding whether a given graph G satisfies

χ(G) ≤ b(k) or χ(G) ≥ b(b(k)),

whose NP-hardness follows from Theorem 4.1. Consider the polynomial-time reduction that, given an input graph G, produces and outputs the underlying graph H of the digraph δG.
To prove the correctness of the reduction, we analyze the orthogonality dimension of H over R. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that

ξ_R(H) ≤ χ(H) ≤ k.

If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.7 combined with the fact that b(n) = Θ(2^n/√n), it follows that

ξ_R(H) ≥ c · √(log b(b(k)) / log log b(b(k))) = Θ(√(b(k)/k)),

where c is an absolute positive constant. This completes the proof of the theorem.
As an immediate corollary of Theorem 4.4, we obtain the following.

Corollary 4.5. It is NP-hard to approximate ξ_R(G) for a given graph G to within any constant factor.

We end this section with a statement that might be useful for proving NP-hardness results for the general index coding problem. Consider the following definition.
Definition 4.6. For an alphabet Σ and for two integers k_1 < k_2, let Index-Coding_Σ(k_1, k_2) denote the problem of deciding whether the minimal length of an index code for a given graph G over Σ is at most k_1 or at least k_2.

We prove the following result.
Theorem 4.7. Let Σ be an alphabet of size at least 2, and let k_1, k_2 be two integers. Then, there exists a polynomial-time reduction from the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k_1) or χ(G) ≥ k_2 to Index-Coding_Σ(k_1, log_|Σ| log k_2).
Proof: Consider the polynomial-time reduction that given an input graph G produces the underlying graph H of the digraph δG and outputs its complement H̄. For correctness, suppose first that G is a YES instance, that is, χ(G) ≤ b(k_1). Then, by combining Claim 2.6 with Theorem 3.2, it follows that minrk_{F_2}(H̄) ≤ χ(H) ≤ k_1. By Proposition 3.16, it further follows that there exists a linear index code for H̄ over F_2 of length k_1. In particular, using |Σ| ≥ 2, there exists an index code for H̄ over the alphabet Σ of length k_1. Suppose next that G is a NO instance, that is, χ(G) ≥ k_2. By Theorem 3.17, it follows that the length of any index code for H̄ over Σ is at least log_|Σ| log k_2, so we are done.
Theorem 4.7 implies that in order to prove the NP-hardness of the general index coding problem over some finite alphabet Σ of size at least 2, it suffices to prove for some integer k that it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) > 2^{|Σ|^k}.
Acknowledgements

We thank the anonymous reviewers for their helpful comments.
References

[1] N. Alon. The Shannon capacity of a union. Combinatorica, 18(3):301–310, 1998.
[2] I. Attias and I. Haviv. Local orthogonality dimension. arXiv, abs/2110.00718, 2021.
[3] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol. Index coding with side information. IEEE Trans. Inform. Theory, 57(3):1479–1494, 2011. Preliminary version in FOCS'06.
[4] L. Barto, J. Bulín, A. A. Krokhin, and J. Opršal. Algebraic approach to promise constraint satisfaction. J. ACM, 68(4):28:1–28:66, 2021. Preliminary version in STOC'19.
[5] A. Bhangale. NP-hardness of coloring 2-colorable hypergraph with poly-logarithmically many colors. In Proc. of the 45th International Colloquium on Automata, Languages, and Programming (ICALP'18), pages 15:1–15:11, 2018.
[6] J. Brakensiek and V. Guruswami. New hardness results for graph and hypergraph colorings. In Proc. of the 31st Conference on Computational Complexity (CCC'16), pages 14:1–14:27, 2016.
[7] E. Chlamtáč and I. Haviv. Linear index coding via semidefinite programming. Combinatorics, Probability & Computing, 23(2):223–247, 2014. Preliminary version in SODA'12.
[8] S. H. Dau, V. Skachek, and Y. M. Chee. Optimal index codes with near-extreme rates. IEEE Trans. Inform. Theory, 60(3):1515–1527, 2014. Preliminary version in ISIT'12.
[9] R. de Wolf. Quantum Computing and Communication Complexity. PhD thesis, Universiteit van Amsterdam, 2001.
[10] I. Dinur, E. Mossel, and O. Regev. Conditional hardness for approximate coloring. SIAM J. Comput., 39(3):843–873, 2009. Preliminary version in STOC'06.
[11] I. Dinur and I. Shinkar. On the conditional hardness of coloring a 4-colorable graph with super-constant number of colors. In Proc. of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'10), pages 138–151, 2010.
[12] P. Erdős, Z. Füredi, A. Hajnal, P. Komjáth, V. Rödl, and Á. Seress. Coloring graphs with locally few colors. Discrete Mathematics, 59(1–2):21–34, 1986.
[13] P. Erdős and A. Hajnal. On chromatic number of infinite graphs. In Theory of Graphs, Proc. Colloq., Tihany, pages 83–98. Academic Press, 1966.
[14] Z. Füredi, A. Hajnal, V. Rödl, and W. T. Trotter. Interval orders and shift graphs. In Sets, Graphs and Numbers, volume 60 of Colloq. Math. Soc. János Bolyai, pages 297–313. 1991.
[15] M. R. Garey and D. S. Johnson. The complexity of near-optimal graph coloring. J. ACM, 23(1):43–49, 1976.
[16] A. Golovnev and I. Haviv. The (generalized) orthogonality dimension of (generalized) Kneser graphs: Bounds and applications. Theory of Computing, 18(22):1–22, 2022. Preliminary version in CCC'21.
[17] V. Guruswami and S. Sandeep. d-To-1 hardness of coloring 3-colorable graphs with O(1) colors. In Proc. of the 47th International Colloquium on Automata, Languages, and Programming (ICALP'20), pages 62:1–62:12, 2020.
[18] W. H. Haemers. On some problems of Lovász concerning the Shannon capacity of a graph. IEEE Trans. Inform. Theory, 25(2):231–232, 1979.
[19] W. H. Haemers. An upper bound for the Shannon capacity of a graph. In L. Lovász and V. T. Sós, editors, Algebraic Methods in Graph Theory, volume 25/I of Colloquia Mathematica Societatis János Bolyai, pages 267–272. Bolyai Society and North-Holland, 1981.
[20] F. Harary and R. Z. Norman. Some properties of line digraphs. Rend. Circ. Mat. Palermo, 9(2):161–168, 1960.
[21] C. C. Harner and R. C. Entringer. On the arc-chromatic number of a digraph. J. Comb. Theory, Ser. B, 13(3):219–225, 1972.
[22] I. Haviv. Approximating the orthogonality dimension of graphs and hypergraphs. In Proc. of the 44th International Symposium on Mathematical Foundations of Computer Science (MFCS'19), pages 39:1–39:15, 2019.
[23] P. Hell and J. Nešetřil. On the complexity of H-coloring. J. Comb. Theory, Ser. B, 48(1):92–110, 1990.
[24] S. Huang. Improved hardness of approximating chromatic number. In Proc. of the 16th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'13), pages 233–243, 2013.
[25] G. Kalai. Analogues for Sperner and Erdős-Ko-Rado theorems for subspaces of linear spaces. In P. L. Hammer, editor, Combinatorics 79, volume 9 of Annals of Discrete Math., page 135. Elsevier, 1980.
[26] R. M. Karp. Reducibility among combinatorial problems. In Proc. of a Symposium on the Complexity of Computer Computations, pages 85–103, 1972.
[27] K. Kawarabayashi and M. Thorup. Coloring 3-colorable graphs with less than n^{1/5} colors. J. ACM, 64(1):4:1–4:23, 2017. Preliminary versions in FOCS'12 and STACS'14.
[28] S. Khanna, N. Linial, and S. Safra. On the hardness of approximating the chromatic number. Combinatorica, 20(3):393–415, 2000. Preliminary version in ISTCS'93.
[29] S. Khot. Improved inapproximability results for max clique, chromatic number and approximate graph coloring. In Proc. of the 42nd Symposium on Foundations of Computer Science (FOCS'01), pages 600–609, 2001.
[30] M. Langberg and A. Sprintson. On the hardness of approximating the network coding capacity. IEEE Trans. Inform. Theory, 57(2):1008–1014, 2011. Preliminary version in ISIT'08.
[31] L. Lovász. Flats in matroids and geometric graphs. In Combinatorial surveys: Proc. of the 6th British Comb. Conf., Royal Holloway Coll., pages 45–86. Academic Press, 1977.
[32] L. Lovász. On the Shannon capacity of a graph. IEEE Trans. Inform. Theory, 25(1):1–7, 1979.
[33] L. Lovász. Graphs and Geometry, volume 65. Colloquium Publications, 2019.
[34] L. Lovász, M. Saks, and A. Schrijver. Orthogonal representations and connectivity of graphs. Linear Algebra Appl., 114–115:439–454, 1989. Special Issue Dedicated to Alan J. Hoffman.
[35] R. Peeters. Orthogonal representations over finite fields and the chromatic number of graphs. Combinatorica, 16(3):417–431, 1996.
[36] S. Poljak and V. Rödl. On the arc-chromatic number of a digraph. J. Comb. Theory, Ser. B, 31(2):190–198, 1981.
[37] K. Shanmugam, A. G. Dimakis, and M. Langberg. Local graph coloring and index coding. In Proc. of the IEEE International Symposium on Information Theory (ISIT'13), pages 1152–1156, 2013.
[38] C. E. Shannon. The zero error capacity of a noisy channel. Institute of Radio Engineers, Trans. Inform. Theory, IT-2:8–19, 1956.
[39] S. Stahl. n-tuple colorings and associated graphs. J. Comb. Theory, Ser. B, 20(2):185–203, 1976.
[40] M. Wrochna and S. Živný. Improved hardness for H-colourings of G-colourable graphs. In Proc. of the 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'20), pages 1426–1435, 2020.
[41] D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. Theory of Computing, 3(1):103–128, 2007. Preliminary version in STOC'06.
arXiv:2301.12133v1 [gr-qc] 28 Jan 2023

The first variation of the matter energy-momentum tensor with respect to the metric, and its implications on modified gravity theories

Zahra Haghani,1, ∗ Tiberiu Harko,2, 3, 4, † and Shahab Shahidi1, ‡

1 School of Physics, Damghan University, Damghan, 41167-36716, Iran
2 Department of Physics, Babes-Bolyai University, 1 Kogalniceanu Street, 400084 Cluj-Napoca, Romania
3 Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering (IFIN-HH), Bucharest, 077125 Romania
4 Astronomical Observatory, 19 Ciresilor Street, 400487 Cluj-Napoca, Romania

(Dated: January 31, 2023)
The first order variation of the matter energy-momentum tensor Tµν with respect to the metric tensor gαβ plays an important role in modified gravity theories with geometry-matter coupling, and in particular in the f(R, T) modified gravity theory. We obtain the expression of the variation δTµν/δgαβ for baryonic matter described by an equation of state given in a parametric form, with the basic thermodynamic variables represented by the particle number density and by the specific entropy, respectively. The first variation of the matter energy-momentum tensor turns out to be independent of the matter Lagrangian, and can be expressed in terms of the pressure, the energy-momentum tensor itself, and the matter fluid four-velocity. We apply the obtained results to the case of the f(R, T) gravity theory, where R is the Ricci scalar, and T is the trace of the matter energy-momentum tensor, which thus becomes a unique theory, also independent of the choice of the matter Lagrangian. A simple cosmological model, in which the Hilbert-Einstein Lagrangian is generalized through the addition of a term proportional to T^n, is considered in detail, and it is shown that it gives a very good description of the observational values of the Hubble parameter up to a redshift of z ≈ 2.5.

PACS numbers: 04.50.+h, 04.20.Cv, 95.35.+d
I. INTRODUCTION
There are at least three theoretical perspectives [1] that could be used to explain the large amount of recent observations, which strongly suggest a faster and faster expanding Universe [2, 3], with a composition in which ordinary matter represents only 5% of its composition, the rest being represented by the dark energy and the dark matter [3, 4]. The first point of view is represented by the dark constituents theory, which adds two more components to the total energy-momentum tensor of the Universe, representing dark matter and dark energy, respectively. Therefore the cosmological dynamics is described by the field equation G_µν = κ²T^bar_µν + κ²T^DM_µν(φ, ψµ, ...) + κ²T^DE_µν(φ, ψµ, ...), where T^bar_µν, T^DM_µν(φ, ψµ, ...), and T^DE_µν(φ, ψµ, ...) represent the energy-momentum tensors of baryonic matter, dark matter, and dark energy, respectively, with φ and ψµ representing scalar or vector fields. A well studied dark constituent model is represented by the quintessence (scalar field) description of dark energy [5, 6].
In the dark geometry approach, an exclusively geometric attitude on the gravitational phenomena is adopted, by explaining the cosmological dynamics through the modification of the geometry underlying the Einstein field equations. Hence, the extended Einstein equations become in this approach G_µν = κ²T^bar_µν + κ²T^(geom)_µν(g_µν, R, □R, ...), where T^bar_µν is the energy-momentum tensor of ordinary matter, and T^(geom)_µν(g_µν, R, □R, ...) is a purely geometric term, obtained from the metric, torsion τ, nonmetricity Q, extensions of Riemann geometry etc., and which can effectively mimic dark energy, dark matter, or both. Some typical examples of dark geometric theories are the f(R) [7], f(Q) [8], hybrid metric-Palatini gravity [9] theories, or gravitational theories based on the Weyl-Cartan-Weitzenböck [10], or Weyl [11, 12], and Finsler geometries [13, 14].
The third avenue for the understanding of the gravitational and cosmological phenomena is represented by the dark coupling approach, in which the standard Einstein gravitational equations are generalized to take the mathematical form G_µν = κ²T_µν + κ²T^(coup)_µν(R, Lm, T, □R, □T, ...), where the effective energy-momentum tensor T^(coup)_µν(g_µν, R, Lm, T, □R, □T, ...) of the theory is built up by considering the maximal extension of the Hilbert-Einstein Lagrangian, by abandoning its additive structure in matter and geometry. In the dark coupling approach, matter is represented either by the trace T of the matter energy-momentum tensor, by the matter Lagrangian Lm, or by some scalar made of T_µν such as T_µν T^µν.

The dark coupling approach is also a theoretical answer to the problem of the maximal extension of the additive Hilbert-Einstein Lagrangian, which automatically implies a non-additive structure of the action in the geometric and matter variables. In a general form the requirement of the maximal extension of the gravitational action can be implemented by assuming that the Lagrangian of the gravitational field is an arbitrary function of the curvature scalar R, and of the matter Lagrangian Lm. One of the interesting features of the dark coupling models is that they imply the presence of a nonminimal geometry-matter coupling. Dark couplings are not restricted to Riemannian geometry, but they can be considered in the framework of the extensions of Riemann geometry. Typical examples of dark coupling theories are the f(R, Lm) [15, 16], f(R, T) [17], f(R, T, RµνT^µν) [18], f(τ, T) [19], f(Q, T) [20], or the f(R, T, Q, Tm) [21] theories. Other gravitational theories implying geometry-matter coupling have been considered in [22–27].
One of the interesting consequences of the dark coupling theories is the reconsideration of the role of the ordinary (baryonic) matter in the cosmological dynamics. Through its coupling to gravity, matter becomes a key element in the explanation of cosmic dynamics, and recovers its central role in gravity, which is minimized or even neglected in the dark constituents and dark geometric type theories. An important implication of the geometry-matter coupling is that the matter energy-momentum tensor is generally not conserved, and thus an extra-force is generated, acting on massive particles moving in a gravitational field, with the particles following non-geodesic paths [16, 17]. The possibility of the existence of such couplings between matter and geometry has opened interesting and novel pathways for the study of gravitational phenomena [28].
However, the dependence of the gravitational action in the dark coupling theories on Lm gives a new relevance to the old problem of the degeneracy of the matter Lagrangian. Two physically inequivalent expressions of the matter Lagrangian, Lm = −ρ and Lm = P, lead to the same energy-momentum tensor for matter. This result has important implications for dark coupling gravity models. For example, in the framework of the f(R, Lm) theory, it was shown in [29] that adopting for the Lagrangian density the expression Lm = p, where p is the pressure, in the case of dust the extra force vanishes. However, for the form Lm = ρ of the matter Lagrangian, the extra-force does not vanish [30]. In [31] it was shown, by using the variational formulation for the derivation of the equations of motion, that both the matter Lagrangian and the energy-momentum tensor are uniquely and completely determined by the form of the geometry-matter coupling. Therefore, the extra-force never vanishes as a consequence of the thermodynamic properties of the system. In [32] it was shown that if the particle number is conserved, the Lagrangian of a barotropic perfect fluid with P = P(ρ) is Lm = −ρ[c² + ∫ P(ρ)/ρ² dρ], where ρ is the rest mass density. This result can be used successfully in the study of the modified theories of gravity. The result is based on the assumption that the Lagrangian does not depend on the derivatives of the metric, and that the particle number of the fluid is a conserved quantity, ∇µ(ρu^µ) = 0. The matter Lagrangian also plays an important role in the f(R, T) theory of gravity [17].
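As a quick sanity check of this formula (a sketch under the stated assumptions, with the integration constant fixed so that Lm → −ρc² in the pressureless limit, and ρ_0 a hypothetical reference density):

```latex
P = 0 \;\Rightarrow\; L_m = -\rho c^2 ,
\qquad
P(\rho) = w \rho c^2 \;\Rightarrow\;
L_m = -\rho c^2 \left( 1 + w \ln \frac{\rho}{\rho_0} \right) .
```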
In theories with geometry-matter coupling another important quantity, the variation of the energy-momentum tensor with respect to the metric, does appear, and plays an important role. The corresponding second order tensor is denoted as Tµν, and it is introduced via the definition [17]

Tµν ≡ g^ρσ δT_ρσ/δg^µν.
If the matter Lagrangian does not depend on the derivatives of the metric, one can obtain for Tµν a mathematical expression that also contains the second variation of the matter Lagrangian with respect to the metric, δ²Lm/δg^µν δg^αβ. The Lagrangian of the electromagnetic field is quadratic in the components of the metric tensor, and hence its second variation gives a non-zero contribution to Tµν. However, the case of ordinary baryonic matter is more complicated. At first sight, by taking into account the explicit forms of the matter Lagrangians, Lm = −ρ or Lm = p, no explicit dependence on the metric does appear, as opposed, for example, to the case of the electromagnetic field. This would suggest that the second variation of the matter Lagrangian always identically vanishes, no matter what its functional form is. This conclusion may indeed be valid for some special forms of the equation of state, but it is not correct if one adopts a general thermodynamic description of the baryonic fluids.

It is the goal of the present Letter to investigate the problem of the second variation of the perfect fluid matter Lagrangian with respect to the metric tensor components, and to analyze its impact on modified gravity theories. As a first step in our analysis, we obtain, from general thermodynamic considerations, the expressions of the variations with respect to the metric of the baryonic matter energy density and pressure. Once these expressions are known, a straightforward calculation, involving the computation of the second variation of the energy density and pressure, gives the first variation of the matter energy-momentum tensor with respect to the metric, which also allows one to obtain the tensor Tµν. The basic result of our investigation is that the tensor Tµν is independent of the choice of the matter Lagrangian. The effect of the second order correction is estimated in a cosmological background. As a specific example we will concentrate on the f(R, T) gravity theory, in which the tensor Tµν plays an important role.

The present Letter is organized as follows. The general thermodynamic formalism used for the calculation of the
|
265 |
+
second variation of the matter Lagrangian is discussed
|
266 |
+
in Section II. The general expression for the second vari-
|
267 |
+
ation of the matter Lagrangian, and of the variation of
|
268 |
+
the energy-momentum tensor is presented in Section III.
|
269 |
+
|
270 |
+
3
|
271 |
+
Some cosmological applications of the obtained results
|
272 |
+
are presented in Section III A. We then briefly review the
|
273 |
+
basics of the f(R, T ) gravity theory in Section IV and
|
274 |
+
outline its cosmological implications for a simple choice
|
275 |
+
f(R, T ) = α|T |n. Finally, we discuss and conclude our
|
276 |
+
results in Section V.
|
277 |
+
II. THERMODYNAMICS AND GEOMETRY

In order to obtain the second variation of the baryonic matter Lagrangian, it is necessary to review the derivation of its first variation using thermodynamic considerations. The first law of thermodynamics is given by

dU = T dS − P dV + µ dN,  (1)

where U is the total energy, µ is the chemical potential, related to the change in the number of particles in the system, N is the particle number, and V is the volume enclosing the fluid. An important thermodynamic relation is the Gibbs-Duhem equation,

U = T S − P V + µ N,  (2)

which follows from the extensivity of the energy, U(λX) = λU(X), where λ is a constant, and from Euler's theorem on homogeneous functions.

Let us define the particle number density n = N/V and the entropy per particle s = S/N. The first law of thermodynamics (1) and the Gibbs-Duhem relation (2) can then be written as [33, 34]

dρ = T n ds + µ′ dn,  (3)

ρ = µ′ n − P,  (4)

where µ′ = µ + T s, and we have defined the energy density as ρ = U/V. Also, by taking the differential of the Gibbs-Duhem relation (2) we obtain

dU = T dS + S dT − P dV − V dP + N dµ + µ dN,

and using the first law of thermodynamics (1), one finds

dP = n s dT + n dµ = n dµ′ − n T ds,  (5)

implying that ρ = ρ(s, n) and P = P(µ′, s).

Now, we define the particle number flux

Jµ = √−g n uµ,  (6)

and the Taub current [34]

Vµ = µ′ uµ,  (7)

where uµ is the fluid 4-velocity, and n, the particle number density, can be obtained according to the relation

n = √(gµν Jµ Jν / g).  (8)

With the above definitions, one obtains

J ≡ √(−Jµ Jµ) = √−g n,  Jµ = J uµ,  (9)

V ≡ √(−Vµ V µ) = µ′,  V µ = V uµ.  (10)
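The algebraic content of Eqs. (6), (8) and (9) can be verified numerically. The following is a minimal sketch (Python with NumPy, not part of the original derivation); the scale factor, density and FLRW metric below are arbitrary illustrative test values.

```python
import numpy as np

# Numerical sanity check of Eqs. (6), (8) and (9) in a flat FLRW metric
# g = diag(-1, a^2, a^2, a^2) with a comoving fluid; the values of a and
# n_true below are arbitrary test numbers, not physical inputs.
a, n_true = 1.7, 0.35
g = np.diag([-1.0, a**2, a**2, a**2])     # metric g_{mu nu}
u = np.array([1.0, 0.0, 0.0, 0.0])        # comoving 4-velocity u^mu
det_g = np.linalg.det(g)                  # g = det g_{mu nu} = -a^6
J = np.sqrt(-det_g) * n_true * u          # particle number flux, Eq. (6)
n_rec = np.sqrt(np.einsum('mn,m,n->', g, J, J) / det_g)   # Eq. (8)
J_norm = np.sqrt(-np.dot(g @ J, J))       # J of Eq. (9); equals sqrt(-g) n
```

The recovered density n_rec reproduces the input n_true, and J_norm equals √−g n, as Eq. (9) requires.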
In the context of general relativity, it is well known that there are two equivalent baryonic matter Lagrangians,

Lm = −ρ,  Lm = P.  (11)

It should be noted that, from the definition of the energy-momentum tensor as

Tµν = −(2/√−g) δ(√−g Lm)/δgµν,  (12)

both Lagrangians in Eq. (11) give the same result,

Tµν = (ρ + P) uµ uν + P gµν.  (13)

As a next step in our study, we introduce the basic assumptions that the variations of the entropy per particle s and of the ordinary matter number flux vector density Jµ = n uµ √−g satisfy the two independent constraints [35]

δs = 0,  (14)

and

δJµ = 0,  (15)

respectively. Hence, in the following we impose the restriction that the entropy and particle production rates remain unchanged during the dynamical evolution. Therefore, the entropy and particle number currents satisfy the conservation equations δ(Jµ∂µs) = 0 and ∇µ(nuµ) = 0, respectively. The first of these relations is obtained by taking the divergence of Eq. (14), contracting the obtained expression with Jµ, and using Eq. (15).

By taking the variation of the particle number density n, with the use of the assumptions previously introduced, we find [35]

δn = (n/2)(−g) uµ uν [δgµν/g − (gµν/g²) δg] = (n/2)(uµuν + gµν) δgµν.  (16)

In order to obtain the variation of the energy-momentum tensor, we need to find the variations of the energy density and pressure with respect to the metric, namely δρ/δgµν and δP/δgµν, respectively. In the case of isentropic processes, we have

δρ = [(ρ + P)/n] δn,  (17)

δP = n δµ′.  (18)

Let the equation of state for matter be given as ρ = ρ(n, s). Then, since δs = 0, from the thermodynamic relation (∂ρ/∂n)s = w = (ρ + P)/n, we obtain δρ = w δn. The variation of n is given by Eq. (16), while the variation of µ′ can be obtained from Eq. (10) as

δµ′ = δV = −(VµVν/2V) δgµν = −(1/2) µ′ uµ uν δgµν.  (19)

These relations give the thermodynamic variations of the energy density and pressure with respect to the metric as

δρ/δgµν = (1/2)(ρ + P)(gµν + uµuν),  (20)

δP/δgµν = −(1/2)(ρ + P) uµ uν.  (21)

Eqs. (20) and (21) can also be obtained in a direct way by starting from the definition of the matter energy-momentum tensor, as given by Eq. (12). If the matter Lagrangian does not depend on the derivatives of the metric tensor, from Eq. (12) we obtain

Tµν = Lm gµν − 2 δLm/δgµν,  (22)

giving

δLm/δgµν = (1/2) Lm gµν − (1/2) Tµν.  (23)

If we now take Lm = −ρ, from the above equation we find

δ(−ρ)/δgµν = −(1/2) ρ gµν − (1/2) Tµν = −(1/2)(ρ + P)(gµν + uµuν),  (24)

where we have used the expression (13) for the energy-momentum tensor. For Lm = P, we obtain

δP/δgµν = (1/2) P gµν − (1/2) Tµν = −(1/2)(ρ + P) uµ uν.  (25)

Hence, we have recovered the expressions of the variations of the energy density and pressure with respect to the metric, previously obtained from first principle thermodynamic considerations.
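The algebraic steps from Eq. (23) to Eqs. (24) and (25) can be spot-checked numerically. The sketch below (Python with NumPy, illustrative only) uses arbitrary test values for ρ, P and an FLRW-type metric.

```python
import numpy as np

# Spot check of Eqs. (23)-(25): with T_{mu nu} given by Eq. (13), the choice
# Lm = -rho in Eq. (23) must reduce to -(1/2)(rho+P)(g_{mu nu} + u_mu u_nu),
# and Lm = P to -(1/2)(rho+P) u_mu u_nu. All numbers are arbitrary test values.
rho, P, a = 0.9, 0.25, 1.3
g = np.diag([-1.0, a**2, a**2, a**2])
u = g @ np.array([1.0, 0.0, 0.0, 0.0])          # covariant u_mu
T = (rho + P)*np.outer(u, u) + P*g              # Eq. (13)
var_rho = -0.5*rho*g - 0.5*T                    # Eq. (23) with Lm = -rho
var_P = 0.5*P*g - 0.5*T                         # Eq. (23) with Lm = P
```

Both variations agree, component by component, with the closed forms of Eqs. (24) and (25).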
III. THE FIRST VARIATION OF THE MATTER ENERGY-MOMENTUM TENSOR

Now we have all the necessary tools for computing the second variation of the energy density and of the pressure of a perfect fluid. Taking into account that

δgµν = −gµα gνβ δgαβ,  (26)

and

δuµ/δgαβ = uν δgµν/δgαβ,  (27)

respectively, one immediately obtains

δ²P/δgαβδgµν ≡ (δ/δgαβ)(δP/δgµν) = (1/4)(ρ + P)[gµβuαuν + gµαuβuν + gνβuαuµ + gναuβuµ − (1/2)gαβuµuν − (1/2)gµνuαuβ],  (28)

and

δ²(−ρ)/δgαβδgµν = δ²P/δgαβδgµν − (1/4)(ρ + P)(gαβgµν − gµαgνβ − gµβgνα),  (29)

respectively. Here, since the energy density and pressure are scalars, we expect the second variation to be symmetric under the interchange (αβ) ⇄ (µν). Hence, we have implemented this symmetry in the above expressions.
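The stated symmetry of Eq. (28) under (αβ) ⇄ (µν), as well as under α ⇄ β, can be confirmed numerically; a minimal sketch (Python with NumPy, arbitrary test data, not part of the derivation) follows.

```python
import numpy as np

# Check that the second variation in Eq. (28) is symmetric under the
# interchange (alpha beta) <-> (mu nu), and under alpha <-> beta, using an
# arbitrary FLRW-type metric and a comoving fluid as test data.
rho, P, a = 0.6, 0.1, 2.0
g = np.diag([-1.0, a**2, a**2, a**2])
u = g @ np.array([1.0, 0.0, 0.0, 0.0])          # covariant u_mu

# Indices ordered as [alpha, beta, mu, nu]; each einsum builds one term of Eq. (28).
d2P = 0.25*(rho + P)*(
      np.einsum('mb,a,n->abmn', g, u, u)        # g_{mu beta} u_alpha u_nu
    + np.einsum('ma,b,n->abmn', g, u, u)        # g_{mu alpha} u_beta u_nu
    + np.einsum('nb,a,m->abmn', g, u, u)        # g_{nu beta} u_alpha u_mu
    + np.einsum('na,b,m->abmn', g, u, u)        # g_{nu alpha} u_beta u_mu
    - 0.5*np.einsum('ab,m,n->abmn', g, u, u)    # -(1/2) g_{alpha beta} u_mu u_nu
    - 0.5*np.einsum('mn,a,b->abmn', g, u, u))   # -(1/2) g_{mu nu} u_alpha u_beta
```

Transposing the index pairs leaves the array unchanged, as required for the second variation of a scalar.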
After a little algebra one can obtain from the definition (12), by assuming that the matter Lagrangian does not depend on the derivatives of the metric tensor, the variation of the energy-momentum tensor as

δTµν/δgαβ = (1/2)Lm(gαβgµν − gµαgνβ − gµβgνα) − (1/2)Tαβgµν − 2 δ²Lm/δgαβδgµν.  (30)

Therefore, after substituting the expressions of the second variations of the matter Lagrangians, we find the important result that for both baryonic matter Lagrangians in Eq. (11) we obtain

δTµν/δgαβ = (1/2)P(gαβgµν − gµαgνβ − gµβgνα) − (1/2)Tαβgµν − 2 δ²P/δgαβδgµν,  (31)

implying that the expression of δTµν/δgαβ is independent of the choice of the matter Lagrangian. This is not the case for the approximate result obtained by neglecting the second variation of the matter Lagrangian with respect to the metric,

δTµν/δgαβ ≈ (1/2)Lm(gαβgµν − gµαgνβ − gµβgνα) − (1/2)Tαβgµν,  (32)

which obviously depends on the choice of the Lagrangian density.

It should be noted at this point that the energy-momentum tensor, and its variation, should be independent of the choice of the baryonic matter Lagrangian, as we have argued in the previous Section on thermodynamic grounds.

Eq. (31) can also be written in the form

δTµν/δgαβ = (1/2)P(gνβgαµ + gναgβµ) − (1/2)[Tανgµβ + Tβνgµα + Tαµgνβ + Tβµgνα − (1/2)Tµνgαβ + (1/2)Tαβgµν].  (33)
Also, by defining a modified energy-momentum tensor

T̄µν = (ρ + P)uµuν + (1/2)Pgµν,  (34)

one can write the first variation of the energy-momentum tensor as

δTµν/δgαβ = −(1/2)[T̄βνgµα + T̄ανgµβ + T̄αµgνβ + T̄βµgνα − (1/2)T̄µνgαβ + (1/2)T̄αβgµν].  (35)

In the well-known f(R, T) gravity theories [17], one encounters the expression gµν δTµν/δgαβ, which enters the modified field equations. With the result given by Eq. (33), we define

𝒯αβ ≡ gµν δTµν/δgαβ = −(1/4)(12 T̄αβ − T̄ gαβ),  (36)

where T̄ = −ρ + P. Alternatively, we also have

δT/δgαβ = Tαβ + 𝒯αβ.  (37)

In the comoving frame one then obtains

𝒯^µ_ν = (1/4) diag(11ρ + 7P, −ρ − 5P, −ρ − 5P, −ρ − 5P).  (38)

Taking the trace of the above expression, one finds

𝒯 ≡ gµν𝒯µν = 2(ρ − P).  (39)

The approximate results, obtained by neglecting the second variation of the matter Lagrangian, are

𝒯^µ_ν ≈ −(1/2)(ρ + 3P) δ^µ_ν,  (40)

for Lm = −ρ, and

𝒯^µ_ν ≈ (1/2)(ρ − P) δ^µ_ν,  (41)

for Lm = P. For the approximate result with Lm = −ρ we obtain 𝒯 ≈ −2(ρ + 3P), while for Lm = P we obtain 𝒯 ≈ 2(ρ − P). We thus arrive at the interesting conclusion that the approximate result with Lm = P still gives the correct answer for the trace of the tensor 𝒯.
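The chain from Eq. (34) through Eqs. (38) and (39) is easy to verify numerically in the comoving frame. The sketch below (Python with NumPy, arbitrary values of ρ and P, not part of the original text) builds T̄µν and applies Eq. (36).

```python
import numpy as np

# Consistency check of Eqs. (34), (36), (38) and (39): building Tbar_{mu nu}
# in the comoving frame (Minkowski metric, u_mu = (-1,0,0,0)) and applying
# Eq. (36) should reproduce the diagonal of Eq. (38) and the trace 2(rho - P).
rho, P = 1.4, 0.3
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)
u = g @ np.array([1.0, 0.0, 0.0, 0.0])            # u_mu
Tbar = (rho + P)*np.outer(u, u) + 0.5*P*g          # Eq. (34)
Tbar_tr = np.einsum('mn,mn->', g_inv, Tbar)        # Tbar trace = -rho + P
curly = -0.25*(12*Tbar - Tbar_tr*g)                # Eq. (36)
curly_mixed = g_inv @ curly                        # mixed components, Eq. (38)
trace = np.trace(curly_mixed)                      # 2(rho - P), Eq. (39)
```

The diagonal of curly_mixed reproduces (1/4)(11ρ + 7P, −ρ − 5P, −ρ − 5P, −ρ − 5P), and the trace equals 2(ρ − P).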
A. Cosmological implications

In order to determine the effect of the new term in the variation of the energy-momentum tensor, let us find its behavior for a conserved matter source in a flat FLRW Universe, with the line element

ds² = −dt² + a²(t)(dx² + dy² + dz²),  (42)

where a is the scale factor. In this case, one has for the baryonic matter density ρm, assumed to be in the form of dust, the expression

ρm = Ωm0/a³,  (43)

where Ωm0 is the present time density abundance. For the radiation energy density we have

ρr = Ωr0/a⁴.  (44)

Assume that the Universe is filled with dust and radiation, with

ρ = ρm + ρr = Ωm0/a³ + Ωr0/a⁴,  P = (1/3)ρr.  (45)

In this case, one obtains

𝒯 = 2Ωm0(1 + z)³ + (4/3)Ωr0(1 + z)⁴,  (46)

where we have introduced the redshift z, defined as

1 + z = 1/a,  (47)

and Ωm0 and Ωr0 are the current values of the dust and radiation abundances, Ωm0 = 0.305 and Ωr0 = 5.3 × 10⁻⁵, respectively [36].
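As a quick numerical illustration of Eq. (46), and of the approximate traces quoted after Eq. (41), the three expressions can be evaluated directly; a minimal Python sketch (the only inputs are the abundances quoted above) follows.

```python
# Evaluate the exact trace of Eq. (46) for the dust + radiation mixture,
# together with the two approximate traces quoted after Eq. (41); abundances
# are those given in the text.
Om0, Or0 = 0.305, 5.3e-5

def trace_exact(z):
    # Eq. (46): 2 Om0 (1+z)^3 + (4/3) Or0 (1+z)^4
    return 2*Om0*(1 + z)**3 + (4.0/3.0)*Or0*(1 + z)**4

def trace_approx_Lm_rho(z):
    # approximate trace -2(rho + 3P) for Lm = -rho, with P = rho_r/3,
    # so that rho + 3P = rho_m + 2 rho_r
    rm, rr = Om0*(1 + z)**3, Or0*(1 + z)**4
    return -2*(rm + 2*rr)

def trace_approx_Lm_P(z):
    # approximate trace 2(rho - P) for Lm = P; coincides with Eq. (46)
    rm, rr = Om0*(1 + z)**3, Or0*(1 + z)**4
    return 2*(rm + (2.0/3.0)*rr)
```

The Lm = P approximation reproduces the exact result at every redshift, while the Lm = −ρ approximation even has the opposite sign.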
In Fig. 1 we have depicted the evolution of the new term 𝒯 as a function of the redshift. As a result, we expect that the new term changes the behavior of the cosmological models in theories in which the first order variation of the energy-momentum tensor with respect to the metric is present in the gravitational field equations. There are major differences as compared with the approximate relation for Lm = −ρ, but the two relations coincide for Lm = P.
IV. f(R, T) GRAVITY

Now let us consider a typical gravitational theory in which the above results can have an important influence. Consider the action [17]

S = ∫ d⁴x √−g [κ²R + f(R, T) + Lm],  (48)

where f(R, T) is an arbitrary function of the Ricci scalar R and of the trace of the energy-momentum tensor T. We suppose that the Universe is filled with a perfect fluid with the matter energy-momentum tensor having the form (13). The field equations can be obtained as

κ²Gµν − (1/2)f gµν + fR Rµν + (gµν□ − ∇µ∇ν)fR = (1/2)Tµν − fT Tµν − fT 𝒯µν,  (49)

where the last term is computed as in Eq. (36). It should be noted that, using the correct result Eq. (36), the choice of the matter Lagrangian is irrelevant, both cases Lm = −ρ and Lm = P giving the same field equations. With the use of the mathematical identity

(□∇ν − ∇ν□) fR = Rµν ∇µfR,

after taking the divergence of Eq. (49) we obtain the conservation equation in the f(R, T) gravity theory in the form

(1/2 − fT) ∇µTµν = (Tµν + 𝒯µν) ∇µfT + fT [∇µ𝒯µν + (1/2)∇νT].  (50)

FIG. 1. The behavior of the extra term 𝒯 as a function of the redshift z for the new correct expression (solid curve), and for the previously considered approximate relation for Lm = −ρ (dashed curve). The approximate relation with Lm = P for 𝒯 exactly coincides with the correct result.

As one can see from the field equations (49), the dynamical behavior in f(R, T) gravity essentially depends on the tensor 𝒯µν. In this Letter we will consider a simple case that indicates the importance of the new term. Let us assume that f(R, T) = α|T|^n, and P = 0. In this case, the field equations reduce to

κ²Gµν = (1/2)Tµν + (1/2)α|T|^n gµν − nαε|T|^{n−1}(Tµν + 𝒯µν),  (51)

where ε = sign(T). Here we have T = −ρ, and then ε = −1. The Friedmann and Raychaudhuri equations are then

h² = ρ̄m − (1/2)β(7n + 2)ρ̄m^n,  (52)

h′ = −(3/2)(ρ̄m − 4βn ρ̄m^n),  (53)

where we have used the following set of dimensionless variables,

τ = H0 t,  H = H0 h,  ρ̄ = ρ/(6κ²H0²),  β = (6κ²H0²)^{n−1} α,  (54)

and we have denoted by H0 the current value of the Hubble parameter, and by a prime the derivative with respect to τ.
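The system (52)-(53) can be evaluated numerically by inverting the constraint (52) for ρ̄m at a given h; a minimal sketch follows (Python with SciPy). The parameter values anticipate the best-fit results of Eqs. (56) and (59) below; they are inputs here, not outputs of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal numerical sketch of the system (52)-(53): for a given h, the
# constraint (52) is inverted for rho_m by root-finding, after which
# Eq. (53) gives h'. The parameter values anticipate the best-fit results
# of Eqs. (56) and (59).
n, Om0 = 0.020, 0.224
beta = -2*(1 - Om0)/((2 + 7*n)*Om0**n)           # normalization, Eq. (56)

def rho_m(h):
    """Solve h^2 = rho - (beta/2)(7n+2) rho^n for rho (Eq. (52))."""
    return brentq(lambda r: r - 0.5*beta*(7*n + 2)*r**n - h**2, 1e-10, 1e10)

def h_prime(h):
    """h' from the Raychaudhuri equation (53)."""
    r = rho_m(h)
    return -1.5*(r - 4*beta*n*r**n)
```

At the present epoch (h = 1) the inversion returns ρ̄m ≈ Ωm0, consistent with the normalization used below, and h′ < 0, as expected for an expanding Universe.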
FIG. 2. The behavior of the Hubble parameter H and of the deceleration parameter q as a function of the redshift for the best fit values of the parameters as given by Eq. (59). The dashed line represents the ΛCDM model.

FIG. 3. The behavior of the matter density parameter Ωm as a function of redshift for the best fit values of the parameters as given by Eq. (59). The dashed line represents the ΛCDM model.

As an indicator of the decelerating/accelerating evolution we introduce the deceleration parameter, defined as

q = (d/dτ)(1/h) − 1.  (55)
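A quick evaluation of the present-day deceleration parameter follows directly from Eqs. (52), (53) and (55). In the sketch below (Python, illustrative only) the parameter values are the best-fit results of Eqs. (56) and (59), and ρ̄m(today) = Ωm0 because h = 1 at the present time.

```python
# Present-day deceleration parameter from Eq. (55), q = -h'/h^2 - 1, with
# h' taken from Eq. (53) at h = 1, where rho_m = Om0. The numbers are the
# best-fit values of Eqs. (56) and (59); a negative q0 signals acceleration.
n, Om0 = 0.020, 0.224
beta = -2*(1 - Om0)/((2 + 7*n)*Om0**n)   # Eq. (56)
hp0 = -1.5*(Om0 - 4*beta*n*Om0**n)       # h'(today), Eq. (53)
q0 = -hp0 - 1.0                          # Eq. (55) at h = 1
```

The resulting q0 is negative, consistent with the presently accelerating expansion shown in Fig. 2.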
Note that from the normalized Friedmann equation (52), and by taking into account that at the present time we have h = 1, we can obtain the coupling β as

β = −2(1 − Ωm0)/[(2 + 7n)Ωm0^n].  (56)

In order to find the best fit values of the parameters n, H0 and Ωm0, we perform a likelihood analysis using the observational data on the Hubble parameter in the redshift range z ∈ (0.07, 2.36) [36].

FIG. 4. The corner plot for the values of the parameters H0, Ωm0 and n with their 1σ and 2σ confidence levels.

In the case of independent data points, the likelihood function can be defined as

L = L0 e^{−χ²/2},  (57)

where L0 is the normalization constant and the quantity χ² is defined as

χ² = Σi [(Oi − Ti)/σi]².  (58)

Here i counts the data points, Oi are the observational values, Ti are the theoretical values, and σi are the errors associated with the ith data point obtained from observations.
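The χ² machinery of Eqs. (57)-(58) can be sketched as follows (Python with SciPy). The H(z) "data" below are synthetic points drawn from a hypothetical fiducial curve, not the observational compilation of [36], and the stand-in expansion history is purely illustrative; only the fitting logic is shown.

```python
import numpy as np
from scipy.optimize import minimize

# Illustration of the chi^2 minimization of Eqs. (57)-(58) on synthetic data.
rng = np.random.default_rng(1)

def H_model(z, H0, Om):
    # hypothetical stand-in expansion history used only to exercise the fit
    return H0*np.sqrt(Om*(1 + z)**3 + 1 - Om)

z_data = np.linspace(0.07, 2.36, 25)
sigma = np.full_like(z_data, 3.0)
H_data = H_model(z_data, 68.0, 0.30) + rng.normal(0.0, sigma)

def chi2(params):
    # Eq. (58): sum over ((O_i - T_i)/sigma_i)^2
    H0, Om = params
    return np.sum(((H_data - H_model(z_data, H0, Om))/sigma)**2)

best = minimize(chi2, x0=[70.0, 0.25], method='Nelder-Mead')
H0_fit, Om_fit = best.x
```

Minimizing χ² is equivalent to maximizing the likelihood (57); the recovered parameters scatter around the fiducial input values within the errors.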
By maximizing the likelihood function, the best fit values of the parameters n, Ωm0 and H0 at 1σ confidence level can be obtained as

Ωm0 = 0.224^{+0.024}_{−0.023},  H0 = 68.352^{+1.391}_{−1.418},  n = 0.020^{+0.002}_{−0.002}.  (59)

Also, with the use of equation (56) we obtain

β = −0.747^{+0.027}_{−0.026}.  (60)
The redshift evolution of the Hubble function, of the deceleration parameter q, and of the matter density parameter Ωm = ρ̄m/h² are represented, for this model, in Figs. 2 and 3, respectively. Also, the corner plot for the values of the parameters H0, Ωm0 and n with their 1σ and 2σ confidence levels is shown in Fig. 4.
V. DISCUSSIONS AND FINAL REMARKS

In the present Letter we have obtained the complete expression of the first variation of the matter energy-momentum tensor with respect to the metric gµν, and of its associated tensor 𝒯µν. The full evaluation of this term requires the calculation of the second variation of the matter Lagrangian with respect to the metric, a term which was generally ignored in previous investigations of this problem. The expression of δ²Lm/δgµνδgαβ can be calculated straightforwardly from the first variation δLm/δgµν, which can be obtained for the two possible choices of the matter Lagrangian either from thermodynamic considerations, or in a direct way by using the definition of the energy-momentum tensor. The main result of this Letter is that the first variation of the matter energy-momentum tensor, given by Eq. (31), is independent of the choice of the matter Lagrangian; both possible choices lead to the same expression (31), depending only on the thermodynamic pressure and its second variation. The variation of the energy-momentum tensor can also be expressed in terms of the pressure and the energy-momentum tensor itself, or in a compact form in terms of a generalized energy-momentum tensor, formally defined in Eq. (34).

The new form of the variation of the matter energy-momentum tensor may have some important implications for modified gravity theories with geometry-matter coupling. As an important example we have considered the particular case of the f(R, T) gravity theory. We have investigated the cosmological implications of a particular representation of f(R, T) gravity, with the action given by Eq. (48), in which the standard Hilbert-Einstein Lagrangian is corrected by a general term f(R, T). As a simple case we have taken f(R, T) = α|T|^n. The generalized Friedmann equations take a simple form, and they allow a complete analysis of the cosmological features of this simple model, and a full fitting of the observational cosmological data, which permits the determination of the optimal values of the free parameters. The model gives an excellent description of the observational data for the Hubble function, up to a redshift of z ≈ 4. In this redshift range the model basically coincides with the ΛCDM model. The transition from acceleration to deceleration takes place at a redshift that again coincides with the ΛCDM value. Moreover, the deceleration parameter q basically coincides with the ΛCDM prediction. However, significant differences in the behavior of the matter density do appear at higher redshifts.

The search for the "true" physical quantities from which the matter energy-momentum tensor can be obtained (−ρ or P) in a variational formulation is still going on. Interestingly enough, the two possible matter Lagrangians are not equivalent in any physical or mathematical sense, but their functional variations coincide, leading to the same energy-momentum tensor. Moreover, as shown in the present Letter, the first variation of the matter energy-momentum tensor is also independent of the adopted form of the matter Lagrangian, making the modified gravity theories containing this term unique and well defined. Hence, the study of the various orders of the variations of the matter Lagrangians and of the energy-momentum tensor turns out to be an important field of research, which could lead to a new understanding of the mathematical formalism, and of the astrophysical and cosmological implications, of the modified gravitational theories, and in particular of f(R, T) gravity.

ACKNOWLEDGMENTS

We would like to thank Dr. Nihan Katirci for useful discussions and suggestions. The work of TH is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2020-2255 (PNCDI III).
[1] T. Harko and F. S. N. Lobo, Int. J. Mod. Phys. D 29, 2030008 (2020).
[2] D. H. Weinberg, M. J. Mortonson, D. J. Eisenstein, C. Hirata, A. G. Riess, and E. Rozo, Physics Reports 530, 87 (2013).
[3] D. Brout et al., Astrophys. J. 938, 110 (2022).
[4] N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020).
[5] S. Tsujikawa, Class. Quant. Grav. 30, 214003 (2013).
[6] J. de Haro and L. A. Saló, Galaxies 9, 73 (2021).
[7] S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, Phys. Rept. 692, 1 (2017).
[8] J. B. Jimenez, L. Heisenberg, and T. Koivisto, Phys. Rev. D 98, 044048 (2018).
[9] T. Harko, T. S. Koivisto, F. S. N. Lobo, and G. J. Olmo, Phys. Rev. D 85, 084016 (2012).
[10] Z. Haghani, T. Harko, H. R. Sepangi, and S. Shahidi, JCAP 10, 061 (2012).
[11] D. M. Ghilencea, Eur. Phys. J. C 80, 1147 (2020).
[12] D. M. Ghilencea, Eur. Phys. J. C 81, 510 (2021).
[13] R. Hama, T. Harko, S. V. Sabau, and S. Shahidi, Eur. Phys. J. C 81, 742 (2021).
[14] R. Hama, T. Harko, and S. V. Sabau, Eur. Phys. J. C 82, 385 (2022).
[15] O. Bertolami, C. G. Boehmer, T. Harko, and F. S. N. Lobo, Phys. Rev. D 75, 104016 (2007).
[16] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373 (2010).
[17] T. Harko, F. S. N. Lobo, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 84, 024020 (2011).
[18] Z. Haghani, T. Harko, F. S. N. Lobo, H. R. Sepangi, and S. Shahidi, Phys. Rev. D 88, 044023 (2013).
[19] T. Harko, F. S. N. Lobo, G. Otalora, and E. N. Saridakis, Phys. Rev. D 89, 124036 (2014).
[20] Y. Xu, G. Li, T. Harko, and S.-D. Liang, Eur. Phys. J. C 79, 708 (2019).
[21] T. Harko, N. Myrzakulov, R. Myrzakulov, and S. Shahidi, Phys. Dark Univ. 34, 100886 (2021).
[22] Ö. Akarsu, N. Katırcı, S. Kumar, R. C. Nunes, and M. Sami, Phys. Rev. D 98, 063522 (2018).
[23] Ö. Akarsu, J. D. Barrow, and N. M. Uzun, Phys. Rev. D 102, 124059 (2020).
[24] Ö. Akarsu, N. Katırcı, and S. Kumar, Phys. Rev. D 97, 024011 (2018).
[25] G. Acquaviva and N. Katırcı, Physics of the Dark Universe 38, 101128 (2022).
[26] H. Ludwig, O. Minazzoli, and S. Capozziello, Phys. Lett. B 751, 576 (2015).
[27] O. Minazzoli, Phys. Rev. D 98, 124020 (2018).
[28] T. Harko and F. S. N. Lobo, Extensions of f(R) Gravity: Curvature-Matter Couplings and Hybrid Metric-Palatini Theory, Cambridge University Press, Cambridge, 2018.
[29] T. P. Sotiriou and V. Faraoni, Class. Quant. Grav. 25, 205002 (2008).
[30] O. Bertolami, F. S. N. Lobo, and J. Paramos, Phys. Rev. D 78, 064036 (2008).
[31] T. Harko, Phys. Rev. D 81, 044021 (2010).
[32] O. Minazzoli and T. Harko, Phys. Rev. D 86, 087502 (2012).
[33] B. F. Schutz, Phys. Rev. D 2, 2762 (1970).
[34] J. D. Brown, Class. Quant. Grav. 10, 1579 (1993).
[35] F. de Felice and C. J. S. Clarke, Relativity on Curved Manifolds, Cambridge University Press, Cambridge, 1990.
[36] O. Farooq et al., Astrophys. J. 835, 26 (2017).
0dFLT4oBgHgl3EQfoy-j/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
0tFKT4oBgHgl3EQfOC19/content/tmp_files/2301.11757v1.pdf.txt
ADDED
@@ -0,0 +1,1391 @@
1 |
+
Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion
|
2 |
+
Flavio Schneider 1 Zhijing Jin 1 2 Bernhard Schölkopf 2
|
3 |
+
Abstract
|
4 |
+
The recent surge in popularity of diffusion mod-
|
5 |
+
els for image generation has brought new atten-
|
6 |
+
tion to the potential of these models in other ar-
|
7 |
+
eas of media synthesis. One area that has yet to
|
8 |
+
be fully explored is the application of diffusion
|
9 |
+
models to music generation. Music generation
|
10 |
+
requires handling multiple aspects, including the
|
11 |
+
temporal dimension, long-term structure, multi-
|
12 |
+
ple layers of overlapping sounds, and nuances that
|
13 |
+
only trained listeners can detect. In our work, we
|
14 |
+
investigate the potential of diffusion models for
|
15 |
+
text-conditional music generation. We develop a
|
16 |
+
cascading latent diffusion approach that can gen-
|
17 |
+
erate multiple minutes of high-quality stereo mu-
|
18 |
+
sic at 48kHz from textual descriptions. For each
|
19 |
+
model, we make an effort to maintain reasonable
|
20 |
+
inference speed, targeting real-time on a single
|
21 |
+
consumer GPU. In addition to trained models, we
|
22 |
+
provide a collection of open-source libraries with
|
23 |
+
the hope of facilitating future work in the field.1
|
24 |
+
1. Introduction
|
25 |
+
Music generation, or more generally audio generation, has
|
26 |
+
multiple aspects at different levels of abstraction that make it
|
27 |
+
a challenging problem (van den Oord et al., 2016; Dieleman
|
28 |
+
et al., 2018). Regardless of its challenging nature, automated
|
29 |
+
or model-assisted music generation has been an active area
|
30 |
+
of research (Doornbusch, 2010; Salas et al., 2011; Giraudo,
|
31 |
+
2021).
|
32 |
+
Recently, with the rise of deep learning models and their suc-
|
33 |
+
cess in computer vision (Deng et al., 2009; Rombach et al.,
|
34 |
+
2022; Chang et al., 2023) and natural language process-
|
35 |
+
ing (Pennington et al., 2014; Radford et al., 2018; Devlin
|
36 |
+
et al., 2019; Ouyang et al., 2022), it is also promising to
|
37 |
+
see how much benefit deep learning models can bring to
|
38 |
+
1ETH Zürich, Switzerland 2Max Planck Institute for Intelli-
|
39 |
+
gent Systems, Tübingen, Germany. Correspondence to: Flavio
|
40 |
+
Schneider <fl[email protected]>.
|
41 |
+
1We open-source the following:
|
42 |
+
– Music samples for this paper: bit.ly/anonymous-mousai
|
43 |
+
– All music samples for all models: bit.ly/audio-diffusion
|
44 |
+
– Codes: github.com/archinetai/audio-diffusion-pytorch
|
[Figure 1 diagram: a text description (e.g., "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4") is tokenized and passed through the TextEncoder transformer to produce an embedding; the DiffusionGenerator (UNet2) maps noise to a latent conditioned on the embedding, and the DiffusionDecoder (UNet1) maps noise to audio conditioned on the latent.]
67 |
+
Figure 1. Two-stage generation architecture in the inference mode
|
68 |
+
of our model. Specifically, we first encode text with a pretrained
|
69 |
+
and frozen language model into a text embedding. Then, condition-
|
70 |
+
ing on the text, we generate a compressed latent with the diffusion
|
71 |
+
generator, and finally, the compressed latent in turn is used to
|
72 |
+
condition the diffusion decoder to generate the final waveform.
|
73 |
+
audio generation. Existing audio generation models explore
|
74 |
+
the use of recursive neural networks (Mehri et al., 2017),
|
75 |
+
adversarial generative networks (Kumar et al., 2019; Kim
|
76 |
+
et al., 2021; Engel et al., 2019; Morrison et al., 2022), au-
|
77 |
+
toencoders (Deng et al., 2021), and transformers (Yu et al.,
|
78 |
+
2022a). As the more recent advancement in generative mod-
|
79 |
+
els, diffusion models have been used in speech synthesis
|
80 |
+
(Kong et al., 2021; Lam et al., 2022; Leng et al., 2022), but
|
81 |
+
are still under-explored for music generation.
|
82 |
+
Moreover, there are several long-standing challenges in the
|
83 |
+
area of music generation: (1) modeling the long-term struc-
|
84 |
+
ture, (2) improving the sound quality, (3) increasing the
|
85 |
+
diversity of the generated music, and (4) enabling easier
|
86 |
+
control of the generation, such as text prompts. A single
|
87 |
+
model mastering all the proposed aspects would be a great
|
88 |
+
addition to the music industry. It can enable the broader
|
89 |
+
public to be part of the creative process by allowing them to
|
90 |
+
compose music using an accessible text-based interface, as-
|
91 |
+
arXiv:2301.11757v1 [cs.CL] 27 Jan 2023
|
92 |
+
|
94 |
+
Table 1. Comparison of our Moûsai model with previous music generation models. We compare along: (1) the audio sample rate @ number of channels (Sample Rate↑, higher is better); (2) the context length of the generated music (Ctx. Len.↑, where longer contexts allow more structured music; ⋆ indicates variable length, and we assume autoregressive methods are variable by default but upper-bounded by attention); (3) the input type (Input, where we feature using Text as the condition for generation); (4) the type of generated music (Music, where more Diverse↑ genres are better); (5) an example of the generated music type (Example); (6) the inference time (Infer. Time↓, shorter is better; since the music length is seconds or minutes, an inference time equal to the audio length is the shortest, and ⋆ marks models that can run inference fast on CPU); and (7) the total length of music in the training data, in hours (Data).

Model             Sample Rate↑  Ctx. Len.↑  Input (Text)                Music (Diverse↑)      Example         Infer. Time↓   Data
WaveNet (2016)    16kHz@1       Secs        None                        Piano or speech       Piano           = Audio len.⋆  260
Jukebox (2020)    44.1kHz@1     Mins⋆       Lyrics, author, etc.        Song with the lyrics  Song            Hours          70K
RAVE (2021)       48kHz@2       Secs⋆       Latent                      Single-genre music    Strings         = Audio len.⋆  100
AudioLM (2022)    16kHz@1       Secs⋆       Beginning of the music      Piano or speech       Piano           Mins           40K
Musika (2022)     22.5kHz@2     Secs        Context vector              Single-genre music    Piano           = Audio len.⋆  1K
Riffusion (2022)  44.1kHz@1     5s          Text (genre, author, etc.)  Music of any genre    Jazzy clarinet  Mins           –
AudioGen (2022)   16kHz@1       Secs⋆       Text (a phrase/sentence)    Daily sounds          Dog barks       Hours          4K
Moûsai (Ours)     48kHz@2       Mins⋆       Text (genre, author, etc.)  Music of any genre    African drums   = Audio len.   2.5K
160 |
+
sist creators in finding inspiration, and provide an unlimited
|
161 |
+
supply of novel audio samples.
|
162 |
+
From the landscape of existing music generation models
|
163 |
+
in Table 1, we can see that the aforementioned challenges
|
164 |
+
widely exist throughout the literature. For example, most
|
165 |
+
text-to-audio systems (Forsgren & Martiros, 2022; Kreuk
|
166 |
+
et al., 2022) can only generate a few seconds of audio, and
|
167 |
+
many tend to require long inference time up to many GPU
|
168 |
+
hours to generate one minute of audio (Dhariwal et al., 2020;
|
169 |
+
Kreuk et al., 2022). Apart from the text-to-music generation
|
170 |
+
models, if we look at the unconditional music generation,
|
171 |
+
some can generate high-quality samples and run in real time
|
172 |
+
on CPU (Caillon & Esling, 2021; Pasini & Schlüter, 2022),
|
173 |
+
but they are usually trained on a single modality (resulting in
|
174 |
+
the ability to handle only single-genre music, but not diverse
|
175 |
+
ones), and none can handle long-term structure (van den
|
176 |
+
Oord et al., 2016; Caillon & Esling, 2021; Pasini & Schlüter,
|
177 |
+
2022).
|
178 |
+
To this end, we propose Moûsai,2 a text-conditional cascad-
|
179 |
+
ing diffusion model (Figure 1) that tries to address all the
|
180 |
+
mentioned challenges at the same time. Specifically, our
|
181 |
+
Moûsai model uses a custom two-stage cascading diffusion
|
182 |
+
method shown in Figure 1. In the first stage, it compresses
|
183 |
+
the audio waveform using a novel diffusion autoencoder,
|
184 |
+
and in the second stage, it learns to generate the reduced
|
185 |
+
2Moûsai is romanized ancient Greek for Muses, the sources of
|
186 |
+
artistic inspiration (https://en.wikipedia.org/wiki/
|
187 |
+
Muses). Given that inspiration is exactly what the system may be
|
188 |
+
lacking, this name may not be apposite, but the reminiscence to
|
189 |
+
both music and AI was simply too compelling.
|
190 |
+
latent representations conditioned on the text embedding
|
191 |
+
generated by a pretrained language model. Both stages use
|
192 |
+
an efficient U-Net optimized by us, enabling fast inference
|
193 |
+
speed which makes it realistic for usage in future applica-
|
194 |
+
tions.
|
195 |
+
In conclusion, the main contributions of our work are as
|
196 |
+
follows:
|
197 |
+
1. We make it possible to generate long-context 48kHz
|
198 |
+
stereo music exceeding the minute mark, based on context exceeding the minute mark, and to generate a variety of music.
|
201 |
+
2. We propose an efficient 1D U-Net architecture for both
|
202 |
+
stages of the cascade, making it possible to generate
|
203 |
+
audio in real-time on a single consumer GPU. Likewise,
|
204 |
+
each stage of our system can be trained on one A100
|
205 |
+
GPU in approximately 1 week, making it possible to
|
206 |
+
train and run the overall system using modest resources,
|
207 |
+
as available in most universities.
|
208 |
+
3. We present a new diffusion magnitude autoencoder
|
209 |
+
that can compress the audio signal 64x compared to
|
210 |
+
the original waveform with only moderate quality loss,
|
211 |
+
used by the generation stage of the architecture to apply
|
212 |
+
latent diffusion on.
|
213 |
+
2. Related Work
|
214 |
+
A common trend in the generative space has been to first
|
215 |
+
train a representation learning, compression, or upsampling
|
216 |
+
model on the input domain, and later learn a generative
|
217 |
+
|
219 |
+
model on top of the reduced representation while condition-
|
220 |
+
ing on the information of interest (Rombach et al., 2022;
|
221 |
+
Yang et al., 2022; Kreuk et al., 2022; Ho et al., 2022; Ville-
|
222 |
+
gas et al., 2022). This can be drastically more efficient than
|
223 |
+
directly learning on the raw input data, as the generative
|
224 |
+
model can work on a much lower dimensional representa-
|
225 |
+
tion and hence capture coarse structures.
|
226 |
+
Auto-encoding (Hinton & Salakhutdinov, 2006; Kingma &
|
227 |
+
Welling, 2014) or quantized auto-encoding (van den Oord
|
228 |
+
et al., 2017; Esser et al., 2021; Lee et al., 2022) are popu-
|
229 |
+
lar compression methods originally proposed for the image
|
230 |
+
domain, that have been similarly and successfully applied
|
231 |
+
as audio representations (Caillon & Esling, 2021; Pasini &
|
232 |
+
Schlüter, 2022; Baevski et al., 2020; Zeghidour et al., 2022;
|
233 |
+
Défossez et al., 2022). The two most popular directions in
|
234 |
+
the generative space suggest either to learn a quantized rep-
|
235 |
+
resentation followed by masked or autoregressive learning
|
236 |
+
on tokens (Villegas et al., 2022; Yu et al., 2022b; Chang
|
237 |
+
et al., 2023; Dhariwal et al., 2020; Borsos et al., 2022; Yang
|
238 |
+
et al., 2022; Kreuk et al., 2022), or to use a learned (continuous)
|
239 |
+
compressed or deterministic downsampled representation
|
240 |
+
and later apply diffusion models as generators to reconstruct
|
241 |
+
the noise-masked data in another stage (Ramesh et al., 2022;
|
242 |
+
Rombach et al., 2022; Saharia et al., 2022; Ho et al., 2022;
|
243 |
+
Forsgren & Martiros, 2022). Methods using the former to-
|
244 |
+
kenized representation have been successful but not up to
|
245 |
+
the same level of performance as the latter ("cascading")
|
246 |
+
diffusion methods.
|
247 |
+
In our work, we follow ideas from the cascading diffusion
|
248 |
+
approach, which, to the best of our knowledge, has never
|
249 |
+
been attempted for audio generation. We use a custom
|
250 |
+
two-stage cascading diffusion method, where the first stage
|
251 |
+
compresses audio using a novel diffusion autoencoder, and
|
252 |
+
the second stage learns to generate the reduced representa-
|
253 |
+
tion while conditioning on a textual description.
|
254 |
+
3. Preliminaries
|
255 |
+
In this section, we introduce several preliminaries that serve
|
256 |
+
as the basis for our model. Specifically, we give an overview
|
257 |
+
of the workings of diffusion, latent diffusion, and the U-Net.
|
258 |
+
3.1. Audio Generation
|
259 |
+
Audio generation has long been a challenging task. At the
|
260 |
+
lowest level, we have digital waveforms that control air
|
261 |
+
movement from speakers. Waveforms can be represented in
|
262 |
+
different resolutions, or sample rates. Higher sample rates
|
263 |
+
(e.g., 48kHz) allow for more temporal resolution and can
|
264 |
+
represent higher frequencies, but at the same time it is com-
|
265 |
+
putationally more demanding to generate. At higher levels
|
266 |
+
of abstraction, we find qualitative properties such as texture
|
267 |
+
(timbre) or pitch. Zooming out, we observe structure such
|
268 |
+
as rhythm and melody that can span multiple seconds, or
|
269 |
+
even structurally be composed into choruses that form min-
|
270 |
+
utes of interconnected patterns. Audio can be represented
|
271 |
+
with a single waveform (mono), two waveforms (stereo),
|
272 |
+
or even more in the case of surround sound. Audio with
|
273 |
+
two or more channels can give a sense of movement and
|
274 |
+
spatialisation. From the modelling perspective, there are
|
275 |
+
unconditional models that generate novel samples from the
|
276 |
+
training distribution without any additional information, or
|
277 |
+
conditional models that use a form of guidance, such as text,
|
278 |
+
to control the generation. Models can be trained on a single
|
279 |
+
modality (e.g., drums or piano) or on multiple modalities,
|
280 |
+
which usually require more parameters for increased modelling capacity and come with a decrease in speed.
|
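The computational stakes of these choices can be made concrete with a quick back-of-the-envelope count of the raw samples a model must produce per minute of audio, using sample-rate/channel settings that appear in Table 1:

```python
# Raw waveform samples a generative model must produce for one minute
# of audio, for two of the sample-rate/channel settings from Table 1.
def samples_per_minute(sample_rate_hz, channels):
    return sample_rate_hz * channels * 60

print(samples_per_minute(16_000, 1))  # 960000  (16kHz mono, e.g. WaveNet)
print(samples_per_minute(48_000, 2))  # 5760000 (48kHz stereo, Moûsai)
```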
282 |
+
3.2. Diffusion
|
283 |
+
We employ v-objective diffusion as proposed by Salimans & Ho (2022). Given a sample x_0 from a distribution p(x_0), some noise schedule σ_t ∈ [0, 1], and some noisy data-point x_σt = α_σt x_0 + β_σt ε, v-objective diffusion tries to estimate a model v̂_σt = f_θ(x_σt, σ_t) minimizing the following objective:

    E_{t∼[0,1], σ_t, x_σt} [ ‖f_θ(x_σt, σ_t) − v_σt‖²₂ ],    (1)

where v_σt = ∂x_σt/∂φ_t = α_σt ε − β_σt x_0, with α_σt := cos(φ_t), β_σt := sin(φ_t), and φ_t := (π/2) σ_t.

By estimating this rate of change, ODE samplers can be used to turn noise into a new datapoint. In this work, we use the DDIM sampler (Song et al., 2021), which we find to work well and to have a reasonable tradeoff between the number of steps and audio quality. The DDIM sampler denoises the signal by repeated application of the following:

    v̂_σt = f_θ(x_σt, σ_t)    (2)
    x̂_0 = α_σt x_σt − β_σt v̂_σt    (3)
    ε̂_σt = β_σt x_σt + α_σt v̂_σt    (4)
    x̂_σ{t−1} = α_σ{t−1} x̂_0 + β_σ{t−1} ε̂_σt,    (5)

which estimates both the initial data-point and the noise at step σ_t, for some T-step noise schedule σ_T, . . . , σ_0 linearly spaced between 1 and 0.
|
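As a concrete sketch of Eqs. (2)-(5), the following NumPy code runs the DDIM update under the cosine schedule above. The `oracle` model, which cheats by knowing the true x_0 and ε, is only a stand-in for the trained network f_θ:

```python
import numpy as np

def alpha_beta(sigma):
    """Cosine schedule: alpha = cos(phi), beta = sin(phi), phi = (pi/2) sigma."""
    phi = 0.5 * np.pi * sigma
    return np.cos(phi), np.sin(phi)

def ddim_sample(f, x, T=50):
    """Denoise starting noise x with T DDIM steps, following Eqs. (2)-(5)."""
    sigmas = np.linspace(1.0, 0.0, T + 1)  # sigma_T, ..., sigma_0
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        a, b = alpha_beta(s)
        v_hat = f(x, s)                    # Eq. (2)
        x0_hat = a * x - b * v_hat         # Eq. (3)
        eps_hat = b * x + a * v_hat        # Eq. (4)
        a_n, b_n = alpha_beta(s_next)
        x = a_n * x0_hat + b_n * eps_hat   # Eq. (5)
    return x

# Toy check: with an oracle that knows the true x0 and eps, the sampler
# recovers x0 from the initial noise.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
eps = rng.standard_normal(8)

def oracle(x_sigma, sigma):
    # For x_sigma = a*x0 + b*eps, the true target is v = a*eps - b*x0.
    a, b = alpha_beta(sigma)
    return a * eps - b * x0

out = ddim_sample(oracle, eps)  # at sigma=1, x_sigma equals eps
print(np.allclose(out, x0))     # True
```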
320 |
+
3.3. Latent Diffusion
|
321 |
+
Following the work on image diffusion (Rombach et al.,
|
322 |
+
2022), we compress audio into a smaller representation and
|
323 |
+
apply the diffusion process on the reduced latent space. In
|
324 |
+
contrast to Rombach et al. (2022), we propose a diffusion
|
325 |
+
based autoencoder instead of a standard autoencoder, in-
|
326 |
+
creasing the representation power of the decoding process
|
327 |
+
and the amount of compressibility allowed.
|
328 |
+
|
[Figure 2 diagram: an hourglass of recursively nested UNetBlocks (×N) with Downsample/Upsample paths joined by Skip connections; each resolution repeats the items R, C, A, M, I.]
343 |
+
Figure 2. 1D U-Net architecture used both for the diffusion decoder and the latent diffusion generator. The inner dashed region indicates that the UNetBlock can be recursively nested. ResNet items (R) are used as convolutional blocks, modulation items (M) provide the diffusion noise level as a feature-vector conditioning, inject items (I) inject external channels as conditioning (used for diffusion decoding only), attention items (A) share information timewise, and cross-attention items (C) condition on external (text) embeddings.
|
354 |
+
3.4. U-Net
|
355 |
+
U-Nets were first proposed by Ronneberger et al. (2015) as
|
356 |
+
an hourglass, convolution-only 2D architecture with skip connections; originally used for medical image segmentation,
|
358 |
+
and since repurposed for multiple uses, such as image, au-
|
359 |
+
dio, and video generation. Our proposed U-Net has little
|
360 |
+
resemblance to the original work, and is infused with multi-
|
361 |
+
ple new components, such as more modern convolutional
|
362 |
+
blocks, a variety of attention blocks, conditioning blocks,
|
363 |
+
and improved skip connections, maintaining only a skeleton
|
364 |
+
of the hourglass architecture.
|
365 |
+
4. Text-to-Music Generation with Moûsai
|
366 |
+
Moûsai is composed of two independently trained models.
|
367 |
+
The first stage (DMAE) is responsible for compressing the
|
368 |
+
audio waveform 64x using a diffusion autoencoder. In the
|
369 |
+
second stage (latent text-to-audio diffusion), we generate a
|
370 |
+
novel latent space by the diffusion model while conditioning
|
371 |
+
on text embeddings obtained from a frozen transformer
|
372 |
+
language model. For both diffusion models, we use the same
|
373 |
+
efficient 1D U-Net architecture with varying configurations.
|
374 |
+
4.1. 1D U-Net
|
375 |
+
In this work, we use a 1D U-Net architecture employed in
|
376 |
+
different configurations for both the autoencoding and latent
|
377 |
+
diffusion stage (Figure 2). U-Nets with 1D convolutional
|
378 |
+
kernels are more efficient compared to 2D in terms of speed
|
379 |
+
and can be successfully used both on waveforms or on
|
[Figure 3 diagram: Audio → STFTMag → Encoder → Latent; the latent conditions a UNet trained to denoise noise-corrupted audio under an ‖·‖ objective.]
387 |
+
Figure 3. Diffusion Magnitude Autoencoder (DMAE) training
|
388 |
+
scheme. The diffusion autoencoder stage learns to compress au-
|
389 |
+
dio 64x (compared to the original waveform) into a smaller latent
|
390 |
+
space. To train this stage, the waveform is first converted to a
|
391 |
+
magnitude spectrogram, then auto-encoded into a latent. At the
|
392 |
+
same time, the original audio is corrupted with a random amount
|
393 |
+
of noise and the U-Net is trained to remove that noise. During the
|
394 |
+
noise removal process, the U-Net is conditioned on the noise level and on the compressed latent, which gives it access to a reduced version of the non-noisy audio.
|
398 |
+
spectrograms if each frequency is considered as a different
|
399 |
+
channel.
|
400 |
+
We use a variety of repeated items at each resolution of the
|
401 |
+
U-Net, namely: (R) a residual 1D convolutional unit, (M)
|
402 |
+
a modulation unit used to alter the channels given features
|
403 |
+
from the diffusion noise level, (I) an inject item that con-
|
404 |
+
catenates external channels to the ones at the current depth
|
405 |
+
(the lengths must match), (A) an attention item used to share
|
406 |
+
long-context structural information, and (C) a cross atten-
|
407 |
+
tion item used to condition on text embeddings. Inject items
|
408 |
+
are applied only at a specific depth in the first stage decoder
|
409 |
+
to condition on the latent. Attention and cross attention
|
410 |
+
items are instead used only in the inner blocks of the second
|
411 |
+
stage U-Net, to learn structure and condition on text.
|
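To make the item vocabulary concrete, a configuration in this spirit could be written as below; the channel widths, repetition counts, and item orders are illustrative assumptions, not the paper's actual settings:

```python
# Hypothetical U-Net stage configurations written with the paper's item
# letters: R=resnet, M=modulation, I=inject, A=attention, C=cross-attn.
decoder_stage = [          # first-stage decoder: I only at one depth
    {"channels": 128, "items": "RMRM"},
    {"channels": 256, "items": "RMRMI"},   # latent channels injected here
    {"channels": 512, "items": "RMRM"},
]
generator_stage = [        # second stage: A/C only in the inner blocks
    {"channels": 256, "items": "RMRM"},
    {"channels": 512, "items": "RMACRMAC"},
]

def count(stages, item):
    """Count occurrences of one item letter across all depths."""
    return sum(depth["items"].count(item) for depth in stages)

print(count(decoder_stage, "I"), count(generator_stage, "C"))  # 1 2
```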
412 |
+
4.2. Diffusion Magnitude-Autoencoding (DMAE)
|
413 |
+
Diffusion autoencoders were first introduced by Preechakul
|
414 |
+
et al. (2022), as a way to condition the diffusion process on
|
415 |
+
a compressed latent vector of the input itself. Diffusion can
|
416 |
+
act as a more powerful generative decoder, and hence the in-
|
417 |
+
put can be reduced to latents with higher compression ratios.
|
418 |
+
In this work, we propose a new diffusion autoencoder that
|
419 |
+
|
436 |
+
first encodes a magnitude spectrogram into a compressed
|
437 |
+
representation, and later injects the latent into intermediate
|
438 |
+
channels of the decoding 1D U-Net (Figure 3).
|
439 |
+
Let w be a waveform of shape [c, t] for c channels and t timesteps, and let (m_w, p_w) = stft(w; n = 1024, h = 256) be the magnitude and phase obtained from a short-time Fourier transform of the waveform with a window size of 1024 and a hop length of 256. The resulting spectrograms then have shape [c · n, t/h]. We discard the phase and encode the magnitude into a latent z = enc_θenc(m_w) using a 1D convolutional encoder. The original waveform is then reconstructed by decoding the latent using a diffusion model ŵ = dec_θdec(z, ε, s), where dec_θdec is the diffusion sampling process with starting noise ε and s is the number of decoding (sampling) steps. The decoder is trained with v-objective diffusion while conditioning on the latent, f_θdec(w_σt; σ_t, z), where f_θdec is the proposed 1D U-Net, called repeatedly during decoding.
|
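As a sanity check on these shapes, here is a minimal NumPy sketch of the magnitude STFT for a mono waveform (rectangular window; the actual front-end presumably uses a proper windowed STFT):

```python
import numpy as np

def stft_mag(w, n=1024, h=256):
    """Magnitude spectrogram of a mono waveform w: frame with window
    size n and hop h, FFT each frame, keep the magnitude, drop the phase."""
    frames = np.stack([w[i:i + n] for i in range(0, len(w) - n + 1, h)])
    return np.abs(np.fft.rfft(frames, axis=-1)).T  # [freq bins, ~t/h]

w = np.random.default_rng(0).standard_normal(256 * 100 + 1024)
m = stft_mag(w)
print(m.shape)  # (513, 101): n/2 + 1 frequency rows, roughly t/h time columns
```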
462 |
+
Since only the magnitude is used and phase is discarded,
|
463 |
+
this diffusion autoencoder is simultaneously a compressing
|
464 |
+
autoencoder and vocoder. By using the magnitude spec-
|
465 |
+
trograms, higher compression ratios can be obtained than
|
466 |
+
by directly autoencoding the waveform. We found that waveforms are less compressible and less efficient to work with. Similarly, discarding the phase is beneficial for obtaining higher compression ratios at the same level of quality. The diffusion model can easily learn to generate a waveform with realistic phase even when conditioned only on the encoded magnitude. Depending on the desired speed/quality tradeoff, more or less compression can be applied in this first stage. Following our single-GPU constraint, we find that a 64x compression factor is a good balance to make sure the second stage can work on a reduced representation.
|
477 |
+
The latent space produced is then used as a starting point
|
478 |
+
for the next diffusion stage. To make sure that the reduced
|
479 |
+
latent space can be used for latent diffusion, we apply a tanh
|
480 |
+
function on the bottleneck, keeping the values in the range
|
481 |
+
[−1, 1]. A more disentangled bottleneck, such as the one
|
482 |
+
used in VAEs (Kingma & Welling, 2014) can be used, but
|
483 |
+
the additional regularization reduces the amount of allowed
|
484 |
+
compressibility.
|
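The bottleneck constraint can be sketched in a couple of lines (the pre-activation values and latent shape here are made up):

```python
import numpy as np

# tanh squashes the encoder output so latent values lie in [-1, 1],
# the range the latent diffusion stage is trained on.
pre_activation = 5.0 * np.random.default_rng(0).standard_normal((32, 256))
z = np.tanh(pre_activation)
print(float(z.min()) >= -1.0, float(z.max()) <= 1.0)  # True True
```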
485 |
+
4.3. Latent Text-to-Audio Diffusion
|
486 |
+
The second stage applies latent diffusion on the previously
|
487 |
+
obtained compressed space (Figure 4). Similarly to the pre-
|
488 |
+
vious stage we use vvv-objective diffusion with the 1D U-Net
|
489 |
+
architecture and a different configuration, f_θgen(z_σt; σ_t, e), while conditioning on the text embedding e to generate the compressed latent z = enc_θenc(m_w). The generation function ẑ = gen_θgen(e, ε, s) again uses DDIM sampling and calls the U-Net s times to generate an approximate latent ẑ
|
[Figure 4 diagram: Text → Transformer → Embedding conditions a UNet trained to denoise a noise-corrupted Latent under an ‖·‖ objective.]
506 |
+
Figure 4. Text-conditional latent diffusion generator training
|
507 |
+
scheme. This stage is trained to generate novel latent spaces
|
508 |
+
that follow a similar distribution to the ones generated by the au-
|
509 |
+
toencoder. The audio source is first encoded into the latent using
|
510 |
+
the encoder, then the latent is corrupted with a random amount
|
511 |
+
of noise, and the U-Net is trained to remove the noise. While the
|
512 |
+
U-Net denoises the signal, the noise level is provided as a feature vector, and an encoded textual description of the original waveform is provided as an embedding produced by a frozen language model.
|
518 |
+
from the text embedding e and starting noise ε. The final generation stack during inference to obtain a waveform is

    ŵ = dec_θdec(gen_θgen(e, ε_gen, s_gen), ε_dec, s_dec).    (6)
|
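A quick size count shows why generating in this latent space is so much cheaper than generating the waveform directly (one minute of 48kHz stereo audio, with the 64x compression of the first stage):

```python
# Element counts for one minute of audio in the two-stage stack of Eq. (6).
sr, channels, seconds = 48_000, 2, 60
waveform_elems = sr * channels * seconds  # produced by the diffusion decoder
latent_elems = waveform_elems // 64       # handled by the latent generator
print(waveform_elems, latent_elems)       # 5760000 90000
```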
522 |
+
The 1D U-Net used in this stage includes cross attention
|
523 |
+
blocks to provide the conditioning text embedding and mul-
|
524 |
+
tiple attention blocks to make sure information can be shared
|
525 |
+
over the entire latent, crucial to learn long-range audio struc-
|
526 |
+
ture.
|
527 |
+
Given the compressed size of the latent space, the size of
|
528 |
+
this inner U-Net can be greatly increased compared to the
|
529 |
+
first stage, maintaining a reasonable training and inference
|
530 |
+
speed, even with large parameter counts.
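As a sketch, the two-stage inference stack of Eq. (6) amounts to two nested sampling calls. Here `gen_fn` and `dec_fn` are hypothetical stand-ins for the DDIM samplers of the latent generator and the diffusion decoder (assumed interfaces, not the paper's actual code):

```python
import torch

def generate_waveform(text_emb, gen_fn, dec_fn, latent_shape, wave_shape,
                      s_gen=100, s_dec=100):
    """Two-stage inference of Eq. (6): sample a latent from noise with the
    text-conditional generator, then diffusion-decode it to a waveform."""
    eps_gen = torch.randn(latent_shape)        # starting noise for the latent stage
    z_hat = gen_fn(text_emb, eps_gen, s_gen)   # s_gen DDIM U-Net calls
    eps_dec = torch.randn(wave_shape)          # starting noise for the decoder stage
    return dec_fn(z_hat, eps_dec, s_dec)       # s_dec DDIM U-Net calls
```

The total cost is therefore s_gen + s_dec U-Net evaluations per generated clip.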
4.4. Text Conditioning

To obtain the text embeddings, prior work on text conditioning suggests either learning a joint data-text representation (Li et al., 2022; Elizalde et al., 2022; Ramesh et al., 2022) or using embeddings from a pre-trained language model as direct conditioning (Saharia et al., 2022; Ho et al., 2022) of the latent model.

In our model, we follow the practice of Saharia et al. (2022) and use a pre-trained and frozen T5 language model (Raffel et al., 2020) to generate text embeddings from the given description. We use classifier-free guidance (CFG) (Ho & Salimans, 2022) with a learned mask applied to batch elements with a probability of 0.1 to improve the strength of the text embedding during inference.

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

Example Text Prompts in Our Dataset
Nr. 415 (Premium Edition), german hip hop, 2 of 7, 2012, XATAR, Konnekt
30 Años de Exitos, Mundanzas, 2 of 6, latin pop, Lupita D’Alessio, 2011
emo rap 2018 Runaway Lil Peep 4 of 5
Alone, Pt. II (Remixes) 2020 electro house Alone, Pt. II - Da Tweekaz Remix Alan Walker

Table 2. Example text prompts in our dataset.
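A minimal sketch of the CFG update at sampling time, assuming a denoiser that takes `(z, sigma, embedding)`; the function and argument names are illustrative, not the paper's API:

```python
import torch

def cfg_denoise(model, z, sigma, text_emb, null_emb, scale=3.0):
    # Classifier-free guidance: run the denoiser with and without the text
    # conditioning (null_emb plays the role of the learned mask embedding)
    # and extrapolate toward the conditional prediction.
    v_cond = model(z, sigma, text_emb)
    v_uncond = model(z, sigma, null_emb)
    return v_uncond + scale * (v_cond - v_uncond)
```

At training time the same null embedding is substituted for the text embedding on 10% of batch elements, which is what makes the unconditional branch available at inference.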
5. Experimental Setup

For the experimental setup, we first give a high-level overview of the dataset and the training setup in Section 5.1, then dive into the implementation details in Section 5.2 and the hardware requirements in Section 5.3.
5.1. Dataset and Training Setup

We train all the models on a (relatively modest) collection that we compiled, consisting of 2,500 hours of stereo music sampled at 48 kHz and spanning multiple genres, artists, instruments, and provenances, in order to maintain a highly diverse dataset. The autoencoder is trained on random crops of length 2^18 (∼5.5 s at 48 kHz), and the text-conditional diffusion generation model is trained on fixed crops of length 2^21 (∼44 s at 48 kHz) encoded in the 32-channel, 64x-compressed latent.
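The crop lengths translate to the quoted durations by simple arithmetic (matching the ∼5.5 s and ∼44 s figures above):

```python
SR = 48_000                  # sampling rate in Hz
crop_ae = 2 ** 18            # autoencoder crop: 262,144 samples
crop_gen = 2 ** 21           # generator crop: 2,097,152 samples

print(crop_ae / SR)          # ≈ 5.46 s
print(crop_gen / SR)         # ≈ 43.69 s

# With 64x temporal compression, the generator works on a latent of
latent_len = crop_gen // 64  # 32,768 time steps (per latent channel)
```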
For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song can span longer than 44 s, we append a string indicating which chunk is currently being trained on, together with the total number of chunks the song is made of (e.g., 1 of 4). This allows selecting the region of interest during inference. Hence, an example prompt is “Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4.” To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. Furthermore, 50% of the time we concatenate the list with spaces and the other 50% of the time we use commas, to make the interface more robust during inference. Example prompts from our dataset can be seen in Table 2.
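The prompt-construction procedure described above can be sketched as follows; `make_prompt` and its argument names are illustrative, not the paper's code:

```python
import random

def make_prompt(fields, chunk_idx, n_chunks, p_drop=0.1):
    # fields: metadata strings, e.g. [title, artist, album, genre, year]
    kept = [f for f in fields if random.random() > p_drop]  # drop each w.p. 0.1
    random.shuffle(kept)                                    # shuffle metadata
    kept.append(f"{chunk_idx} of {n_chunks}")               # chunk indicator
    sep = ", " if random.random() < 0.5 else " "            # commas or spaces
    return sep.join(kept)
```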
5.2. Implementation Details

We train a 185M-parameter diffusion autoencoder with 7 nested U-Net blocks of increasing channel count ([256, 512, 512, 512, 1024, 1024, 1024]), downsampling by 2 at each block except the first ([1, 2, 2, 2, 2, 2, 2]). The diffusion autoencoder only uses ResNet and modulation items, with the following repetitions: [1, 2, 2, 2, 2, 2, 2]; attention is not used, to allow decoding of variable-length and possibly very long latents. Channel injection only happens at depth 4, which matches the output of the magnitude encoder latent, post tanh application.

Furthermore, we train an 857M-parameter text-conditional generator (including the parameters of the frozen T5-base model) with 6 nested U-Net blocks of increasing channel counts ([128, 256, 512, 512, 1024, 1024]), again downsampling by 2 at each block except the first ([1, 2, 2, 2, 2, 2]). We use attention blocks at the following depths: [0, 0, 1, 1, 1, 1], skipping the first two blocks to allow for further downsampling before sharing information over the entire latent; instead, we use cross-attention blocks at all resolutions ([1, 1, 1, 1, 1, 1]). For both attention and cross attention, we use 64 head features and 12 heads per layer. We repeat items with an increasing count towards the inner low-resolution, large-context U-Net blocks ([2, 2, 2, 4, 8, 8]), which allows good structural learning over minutes of audio.

Both models are trained with the AdamW optimizer (Loshchilov & Hutter, 2019) using a learning rate of 10^-4, β1 = 0.95, β2 = 0.999, ϵ = 10^-6, and weight decay of 10^-3. Moreover, we use an exponential moving average (EMA) with β = 0.995 and a power of 0.7.
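The optimizer configuration above maps directly onto PyTorch; the EMA update below is a plain sketch with a fixed β that omits the power-of-0.7 warm-up schedule, and the `Linear` module is only a stand-in for the U-Net:

```python
import copy
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the U-Net

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4, betas=(0.95, 0.999), eps=1e-6, weight_decay=1e-3,
)

# Plain EMA of the weights with beta = 0.995, applied once per step.
ema = copy.deepcopy(model)
beta = 0.995
with torch.no_grad():
    for p_ema, p in zip(ema.parameters(), model.parameters()):
        p_ema.mul_(beta).add_(p, alpha=1 - beta)
```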
5.3. Hardware Requirements

We use limited computational resources, as available in a university lab. Both models can be trained on a single A100 GPU in one week using a batch size of 32; this is equivalent to around 1M steps for both the diffusion autoencoder and the latent generator. For inference, as an example, a novel audio source of ∼88 s can be synthesized in less than ∼88 s on a consumer GPU with a DDIM sampler and a high step count (100 generation steps and 100 decoding steps).
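The quoted step counts imply a simple per-call time budget for real-time synthesis (back-of-the-envelope arithmetic, not a measurement):

```python
audio_seconds = 88
steps = 100 + 100               # generation + decoding U-Net calls
budget = audio_seconds / steps  # max seconds per U-Net call for real time
print(budget)                   # 0.44
```

That is, each U-Net evaluation must complete in under roughly 0.44 s for the total synthesis time to stay below the clip duration.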
6. Results

As mentioned in Table 1, our model is the only model that generates long-context music from text descriptions. Most other models do not take text as input (van den Oord et al., 2016; Caillon & Esling, 2021; Borsos et al., 2022; Pasini & Schlüter, 2022), and some others use lyrics or descriptions of daily sounds (e.g., “a dog barking”) (Kreuk et al., 2022; Dhariwal et al., 2020). The only text-to-music model comparable with our work is the Riffusion model (Forsgren & Martiros, 2022).

We describe the merits of our model in both quantitative and qualitative ways from multiple perspectives: (1) genre diversity, (2) relevance of the music to the given text prompt, (3) sound quality, and (4) long-term structure in the generated music. Our analyses are reported in Sections 6.1 to 6.3. Note that there is no perfect evaluation metric in the existing literature (Kreuk et al., 2022; Borsos et al., 2022; Dhariwal et al., 2020), since music is a complex artifact with a range of properties (e.g., timbre, rhythm, and structure), not to mention the subjectivity of music perception. In the present work, we try our best to provide a diverse set of angles from which to evaluate the generated music. In addition, we suggest readers listen to the provided samples in order to gain a more holistic impression of our model compared to the Riffusion model (Forsgren & Martiros, 2022): bit.ly/anonymous-mousai.
6.1. Diversity & Text-to-Music Relevance

We design a listener test to illustrate the diversity and text relevance of Moûsai. Specifically, we compose a list of 40 text prompts spanning several common music genres: electronic, hip hop, metal, and pop (see Appendix A for the entire list of prompts, ten per category).

Using these prompts, we generate music with both Moûsai and the Riffusion model (Forsgren & Martiros, 2022), for a total of 80 pieces of music, two for each prompt. Qualitatively, we observe that our music samples exhibit good diversity and fit the text descriptions well.

To validate this quantitatively, we conducted a small-scale psychophysics evaluation, recruiting three perceivers (annotators) with diverse demographic backgrounds (both female and male, all with at least a Master’s degree of education). Each annotator listens to all 80 music samples we provide and is instructed to categorize each sample into exactly one of the four provided genres. This is a four-alternative forced-choice paradigm, i.e., a variant of the two-alternative forced-choice setting which is considered the gold standard in psychophysics.

We record how many times the perceiver correctly identifies the genre which the respective model was generating from. A large number (or score) means that the model often generated music that, according to the human perceiver, plausibly belonged to the correct category (when compared to the other three categories). To achieve a good score, the model needs to generate diverse and genre-specific music. We therefore take the score as a measure of how well a model performs text-conditional music generation.

In Figure 5, we display the confusion matrix of this genre identification test for both our model (left) and the Riffusion model (right). For our model, the annotators identify the right genres most of the time, whereas for the Riffusion model, the annotators often perceive the music as more generic, categorizing it as Pop.
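The tallying behind this evaluation can be sketched as follows; the helper names and label format are assumptions, not the authors' evaluation code:

```python
from collections import Counter

GENRES = ["electronic", "hip hop", "metal", "pop"]

def confusion_matrix(pairs):
    # pairs: (prompt_genre, perceived_genre) tuples from the forced-choice test;
    # row = genre the model was prompted with, column = annotator's answer.
    counts = Counter(pairs)
    return [[counts[(t, p)] for p in GENRES] for t in GENRES]

def accuracy(matrix):
    # Fraction of annotations on the diagonal (correctly identified genre).
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total
```

With 40 samples per model and three annotators, each matrix sums to 120, as noted in the Figure 5 caption.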
(a) Confusion matrix for the music pieces generated by Moûsai. (b) Confusion matrix for the music pieces generated by the Riffusion model.

Figure 5. Evaluation results of genre categorization for our model (left) and the Riffusion model (right). We show the confusion matrix across the four common music genres (electronic, hip hop, metal, and pop). Dark values on the diagonal mean that a model generates music the perceivers categorize into the correct genre. We can see that our model (left) has most of the mass on the diagonal, while the Riffusion model tends to generate generic samples that are very similar to Pop for all genres and are thus difficult to categorize correctly. Note that each matrix adds up to 120, corresponding to 40 samples per model annotated by three perceivers each.

6.2. Sound Quality

Apart from the diversity and relevance, we also evaluate the sound quality of the music we generate. From the mel
spectrograms we visualize in Figure 6, we can see that low-frequency sounds are handled rather well by our model. From the music samples we provide, it is apparent that our model performs well with drum-like sounds, as frequently found in electronic, house, dubstep, techno, EDM, and metal music. This is likely a consequence of the lower amount of information required to represent low-frequency sounds.
6.3. Structure

Another qualitative advantage of our model is its capability to handle long-term structure, as opposed to the Riffusion model’s context length of 5 seconds, as mentioned in Table 1. Our generated samples exhibit structure over longer periods of time, exceeding the minute mark: rhythm, loops, riffs, and occasionally even entire choruses are found in the generated music. We find that increasing the number of attention blocks (e.g., from a total of 4–8 to a total of 32+) in the latent diffusion model can improve the general structure of the songs, thanks to the long-context view. If the model is trained without attention blocks, the context provided by the U-Net is not large enough to learn any meaningful long-term structure.
6.4. Additional Properties

In addition to the main evaluation results, we also explore several properties of our model, namely the trade-off between speed and quality, the trade-off between compression ratio and quality, and the text-audio binding.

Figure 6. Mel spectrogram comparison between the true samples (top) and the auto-encoded samples (bottom); cf. text.

Trade-Off between Speed and Quality. We find that 10 sampling steps in both stages can be enough to generate reasonable audio. We can achieve improved quality and reduced noise for high-frequency sounds by trading off speed, i.e., increasing the number of sampling steps in the diffusion decoder (e.g., to 50–100 steps). Increasing the number of sampling steps in the latent diffusion model (again on the order of 50–100 steps) similarly improves quality, likely due to the more detailed generated latents, and at the same time results in overall better-structured music. To make sure the results are comparable when varying the number of sampling steps, we use the same starting noise in both stages. In both cases, this suggests that using more advanced samplers could help improve the speed-quality trade-off.
Trade-Off between Compression Ratio and Quality. We find that decreasing the compression ratio of the first stage (e.g., to 32x) can improve the quality of low-frequency sounds, but in turn slows down the model, as the second stage has to work on higher-dimensional data. As proposed later in Section 7, we hypothesize that using perceptually weighted loss functions instead of the L2 loss during diffusion could help this trade-off, giving a more balanced importance to high-frequency sounds even at high compression ratios.
Text-Audio Binding. We find that the text-audio binding works well with a CFG scale higher than 3.0. Since the model is trained with metadata such as title, album, artist, genre, year, and chunk, the best keywords to control the generation appear to be frequent descriptive names, such as the genre of the music, or descriptions commonly found in titles, such as “remix”, “(Deluxe Edition)”, and possibly many more. A similar behavior has been observed and exploited in text-to-image models to generate better-looking results. We find that the chunk-based text conditioning is coherent with the description: for example, a description of the form “1 of N” will tend to result in the starting portion of a song, a description of the form “N of N” will tend to result in the ending portion of a song, and anything in between will tend to result in a song playing over the entire generation period.
7. Future Work

Data and Scaling. Increasing the scale of both the data and the model can very likely provide drastic quality improvements. Following Dhariwal et al. (2020) and Borsos et al. (2022), we suggest training with 50k–100k hours instead of 2.5k. Using a larger pretrained language model to obtain text embeddings has been shown to be very important for quality in images (Saharia et al., 2022); we hypothesize that the same is true when applied to our second-stage model.

Diffusion. More sophisticated diffusion samplers can be used to get higher quality for the same number of sampling steps; similarly, more advanced distillation techniques could be used (Salimans & Ho, 2022).
Model. Some promising future modelling approaches that need more experimentation include: (1) training diffusion models using perceptual losses on the waveforms instead of L2, which might help decrease the initial size of the U-Net, as we would not have to process non-perceivable sounds; (2) improving the quality of the diffusion autoencoder by using mel spectrograms instead of magnitude spectrograms as input; (3) exploring other, non-text-based types of conditioning, e.g., DreamBooth-like models (Ruiz et al., 2022), to navigate an audio latent space that is often hard to describe in words.
8. Conclusion

In this work, we presented Moûsai, a waveform-based audio generation method built on two diffusion models. First, we trained a diffusion autoencoder to compress a magnitude-only spectrogram by 64x; using a custom 1D U-Net, the compressed latent is decoded back to a waveform by diffusion. In the second stage, we trained a diffusion model to generate a new latent from noise while conditioning on text embeddings extracted from a frozen T5 transformer model, using a 1D U-Net architecture similar to the one used in the first stage.

We showed that, in contrast to earlier approaches, our model can generate minutes of high-quality music in real time on a consumer GPU, with compelling text-audio binding. In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We expect that the present work will help pave the way towards higher-quality, longer-context text-to-music generation for future applications.
Author Contributions

Flavio Schneider came up with the idea and implemented all the elements of this paper, which is part of his Master’s thesis at ETH Zürich (Schneider, 2023).

Zhijing Jin co-supervised the Master’s thesis and the work, conducted weekly meetings, helped design the structure of the paper, and led the human evaluation experiments of this paper.

Bernhard Schölkopf supervised the work and provided precious suggestions during its progress, as well as extensive suggestions for the writing.

Flavio Schneider, Zhijing Jin, and Bernhard Schölkopf all contributed significantly to the writing and polishing of the paper.
Acknowledgment

We thank Stability AI for their generous support with computational resources. We are also grateful for the generous help of our annotators Andrew Lee, Aylin Gunal, Fernando Gonzalez, and Yiwen Ding. We thank Fernando Gonzalez and Zhiheng Lyu for helping to improve the format of the paper. We thank Nasim Rahaman for early-stage discussions to improve the model design and contributions.

This material is based in part upon works supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1, Project number 390727645. Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy, as well as travel support from ELISE (GA no 951847) for the ELLIS program.
References

Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html.

Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N. AudioLM: A language modeling approach to audio generation. CoRR, abs/2209.03143, 2022. doi: 10.48550/arXiv.2209.03143. URL https://doi.org/10.48550/arXiv.2209.03143.

Caillon, A. and Esling, P. RAVE: A variational autoencoder for fast and high-quality neural audio synthesis. CoRR, abs/2111.05011, 2021. URL https://arxiv.org/abs/2111.05011.

Chang, H., Zhang, H., Barber, J., Maschinot, A., Lezama, J., Jiang, L., Yang, M., Murphy, K., Freeman, W. T., Rubinstein, M., Li, Y., and Krishnan, D. Muse: Text-to-image generation via masked generative transformers. CoRR, abs/2301.00704, 2023. doi: 10.48550/arXiv.2301.00704. URL https://doi.org/10.48550/arXiv.2301.00704.

Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fidelity neural audio compression. CoRR, abs/2210.13438, 2022. doi: 10.48550/arXiv.2210.13438. URL https://doi.org/10.48550/arXiv.2210.13438.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009.

Deng, K., Bansal, A., and Ramanan, D. Unsupervised audiovisual synthesis via exemplar autoencoders. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=43VKWxg_Sqr.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., and Sutskever, I. Jukebox: A generative model for music. CoRR, abs/2005.00341, 2020. URL https://arxiv.org/abs/2005.00341.

Dieleman, S., van den Oord, A., and Simonyan, K. The challenge of realistic music generation: Modelling raw audio at scale. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 8000–8010, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/3e441eec3456b703a4fe741005f3981f-Abstract.html.

Doornbusch, P. Gerhard Nierhaus: Algorithmic composition: Paradigms of automated music generation. Comput. Music. J., 34(3):70–74, 2010. doi: 10.1162/COMJ_r_00008. URL https://doi.org/10.1162/COMJ_r_00008.

Elizalde, B., Deshmukh, S., Ismail, M. A., and Wang, H. CLAP: Learning audio concepts from natural language supervision. CoRR, abs/2206.04769, 2022. doi: 10.48550/arXiv.2206.04769. URL https://doi.org/10.48550/arXiv.2206.04769.

Engel, J. H., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. GANSynth: Adversarial neural audio synthesis. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=H1xQVn09FX.

Esser, P., Rombach, R., and Ommer, B. Taming transformers for high-resolution image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 12873–12883. Computer Vision Foundation / IEEE, 2021. doi: 10.1109/CVPR46437.2021.01268. URL https://openaccess.thecvf.com/content/CVPR2021/html/Esser_Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html.

Forsgren, S. and Martiros, H. Riffusion - Stable diffusion for real-time music generation. 2022. URL https://riffusion.com/about.

Giraudo, S. Generation of musical patterns through operads. CoRR, abs/2104.12432, 2021. URL https://arxiv.org/abs/2104.12432.

Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

Ho, J. and Salimans, T. Classifier-free diffusion guidance. CoRR, abs/2207.12598, 2022. doi: 10.48550/arXiv.2207.12598. URL https://doi.org/10.48550/arXiv.2207.12598.

Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko, A. A., Kingma, D. P., Poole, B., Norouzi, M., Fleet, D. J., and Salimans, T. Imagen Video: High definition video generation with diffusion models. CoRR, abs/2210.02303, 2022. doi: 10.48550/arXiv.2210.02303. URL https://doi.org/10.48550/arXiv.2210.02303.

Kim, M., Hong, J., and Ro, Y. M. Lip to speech synthesis with visual context attentional GAN. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 2758–2770, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/16437d40c29a1a7b1e78143c9c38f289-Abstract.html.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In Bengio, Y. and LeCun, Y. (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6114.

Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. DiffWave: A versatile diffusion model for audio synthesis. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=a-xFK8Ymz5J.

Kreuk, F., Synnaeve, G., Polyak, A., Singer, U., Défossez, A., Copet, J., Parikh, D., Taigman, Y., and Adi, Y. AudioGen: Textually guided audio generation. CoRR, abs/2209.15352, 2022. doi: 10.48550/arXiv.2209.15352. URL https://doi.org/10.48550/arXiv.2209.15352.

Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and Courville, A. C. MelGAN: Generative adversarial networks for conditional waveform synthesis. In Wallach, H. M., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E. B., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 14881–14892, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/6804c9bca0a615bdb9374d00a9fcba59-Abstract.html.

Lam, M. W. Y., Wang, J., Su, D., and Yu, D. BDDM: Bilateral denoising diffusion models for fast and high-quality speech synthesis. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=L7wzpQttNO.

Lee, D., Kim, C., Kim, S., Cho, M., and Han, W. Autoregressive image generation using residual quantization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 11513–11522. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01123. URL https://doi.org/10.1109/CVPR52688.2022.01123.

Leng, Y., Chen, Z., Guo, J., Liu, H., Chen, J., Tan, X., Mandic, D. P., He, L., Li, X., Qin, T., Zhao, S., and Liu, T. BinauralGrad: A two-stage conditional diffusion probabilistic model for binaural audio synthesis. CoRR, abs/2205.14807, 2022. doi: 10.48550/arXiv.2205.14807. URL https://doi.org/10.48550/arXiv.2205.14807.

Li, M., Xu, R., Wang, S., Zhou, L., Lin, X., Zhu, C., Zeng, M., Ji, H., and Chang, S. CLIP-Event: Connecting text and images with event structures. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 16399–16408. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01593. URL https://doi.org/10.1109/CVPR52688.2022.01593.

Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A. C., and Bengio, Y. SampleRNN: An unconditional end-to-end neural audio generation model. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SkxKPDv5xl.

Morrison, M., Kumar, R., Kumar, K., Seetharaman, P., Courville, A. C., and Bengio, Y. Chunked autoregressive GAN for conditional waveform synthesis. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=v3aeIsY_vVX.

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., and Lowe, R. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155, 2022.
|
1181 |
+
doi: 10.48550/arXiv.
|
1182 |
+
2203.02155. URL https://doi.org/10.48550/
|
1183 |
+
arXiv.2203.02155.
|
1184 |
+
Pasini, M. and Schlüter, J. Musika! fast infinite waveform
|
1185 |
+
music generation. CoRR, abs/2208.08706, 2022. doi: 10.
|
1186 |
+
48550/arXiv.2208.08706. URL https://doi.org/
|
1187 |
+
10.48550/arXiv.2208.08706.
|
1188 |
+
Pennington, J., Socher, R., and Manning, C.
|
1189 |
+
GloVe:
|
1190 |
+
Global vectors for word representation. In Proceedings
|
1191 |
+
of the 2014 Conference on Empirical Methods in Nat-
|
1192 |
+
ural Language Processing (EMNLP), pp. 1532–1543,
|
1193 |
+
Doha, Qatar, October 2014. Association for Computa-
|
1194 |
+
tional Linguistics. doi: 10.3115/v1/D14-1162. URL
|
1195 |
+
https://aclanthology.org/D14-1162.
|
1196 |
+
Preechakul, K., Chatthee, N., Wizadwongsa, S., and Suwa-
|
1197 |
+
janakorn, S. Diffusion autoencoders: Toward a meaning-
|
1198 |
+
ful and decodable representation. In IEEE/CVF Confer-
|
1199 |
+
ence on Computer Vision and Pattern Recognition, CVPR
|
1200 |
+
2022, New Orleans, LA, USA, June 18-24, 2022, pp.
|
1201 |
+
10609–10619. IEEE, 2022. doi: 10.1109/CVPR52688.
|
1202 |
+
2022.01036. URL https://doi.org/10.1109/
|
1203 |
+
CVPR52688.2022.01036.
|
1204 |
+
Radford, A., Narasimhan, K., Salimans, T., and Sutskever,
|
1205 |
+
I. Improving language understanding by generative pre-
|
1206 |
+
training. Technical report, OpenAI, 2018.
|
1207 |
+
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S.,
|
1208 |
+
Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the
|
1209 |
+
limits of transfer learning with a unified text-to-text trans-
|
1210 |
+
former. J. Mach. Learn. Res., 21:140:1–140:67, 2020.
|
1211 |
+
URL http://jmlr.org/papers/v21/20-074.
|
1212 |
+
html.
|
1213 |
+
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen,
|
1214 |
+
M. Hierarchical text-conditional image generation with
|
1215 |
+
CLIP latents. CoRR, abs/2204.06125, 2022. doi: 10.
|
1216 |
+
48550/arXiv.2204.06125. URL https://doi.org/
|
1217 |
+
10.48550/arXiv.2204.06125.
|
1218 |
+
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and
|
1219 |
+
Ommer, B. High-resolution image synthesis with latent
|
1220 |
+
diffusion models. In IEEE/CVF Conference on Computer
|
1221 |
+
Vision and Pattern Recognition, CVPR 2022, New Or-
|
1222 |
+
leans, LA, USA, June 18-24, 2022, pp. 10674–10685.
|
1223 |
+
IEEE, 2022.
|
1224 |
+
doi: 10.1109/CVPR52688.2022.01042.
|
1225 |
+
URL https://doi.org/10.1109/CVPR52688.
|
1226 |
+
2022.01042.
|
1227 |
+
Ronneberger, O., Fischer, P., and Brox, T. U-net: Con-
|
1228 |
+
volutional networks for biomedical image segmentation.
|
1229 |
+
In Navab, N., Hornegger, J., III, W. M. W., and Frangi,
|
1230 |
+
A. F. (eds.), Medical Image Computing and Computer-
|
1231 |
+
Assisted Intervention - MICCAI 2015 - 18th International
|
1232 |
+
Conference Munich, Germany, October 5 - 9, 2015, Pro-
|
1233 |
+
ceedings, Part III, volume 9351 of Lecture Notes in
|
1234 |
+
Computer Science, pp. 234–241. Springer, 2015. doi:
|
1235 |
+
10.1007/978-3-319-24574-4\_28. URL https://doi.
|
1236 |
+
org/10.1007/978-3-319-24574-4_28.
|
1237 |
+
|
1238 |
+
Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion
|
1239 |
+
Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M.,
|
1240 |
+
and Aberman, K.
|
1241 |
+
Dreambooth: Fine tuning text-to-
|
1242 |
+
image diffusion models for subject-driven generation.
|
1243 |
+
CoRR, abs/2208.12242, 2022.
|
1244 |
+
doi: 10.48550/arXiv.
|
1245 |
+
2208.12242. URL https://doi.org/10.48550/
|
1246 |
+
arXiv.2208.12242.
|
1247 |
+
Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J.,
|
1248 |
+
Denton, E., Ghasemipour, S. K. S., Ayan, B. K., Mah-
|
1249 |
+
davi, S. S., Lopes, R. G., Salimans, T., Ho, J., Fleet,
|
1250 |
+
D. J., and Norouzi, M.
|
1251 |
+
Photorealistic text-to-image
|
1252 |
+
diffusion models with deep language understanding.
|
1253 |
+
CoRR, abs/2205.11487, 2022.
|
1254 |
+
doi: 10.48550/arXiv.
|
1255 |
+
2205.11487. URL https://doi.org/10.48550/
|
1256 |
+
arXiv.2205.11487.
|
1257 |
+
Salas, H. A. G., Gelbukh, A. F., Calvo, H., and Soria,
|
1258 |
+
F. G. Automatic music composition with simple proba-
|
1259 |
+
bilistic generative grammars. Polibits, 44:59–65, 2011.
|
1260 |
+
doi: 10.17562/pb-44-9. URL https://doi.org/
|
1261 |
+
10.17562/pb-44-9.
|
1262 |
+
Salimans, T. and Ho, J. Progressive distillation for fast
|
1263 |
+
sampling of diffusion models.
|
1264 |
+
In The Tenth Interna-
|
1265 |
+
tional Conference on Learning Representations, ICLR
|
1266 |
+
2022, Virtual Event, April 25-29, 2022. OpenReview.net,
|
1267 |
+
2022. URL https://openreview.net/forum?
|
1268 |
+
id=TIdIXIpzhoI.
|
1269 |
+
Schneider, F. ArchiSound: Audio generation with diffu-
|
1270 |
+
sion. January 2023. URL https://github.com/
|
1271 |
+
flavioschneider/master-thesis/blob/
|
1272 |
+
main/audio_diffusion_thesis.pdf.
|
1273 |
+
Song, J., Meng, C., and Ermon, S. Denoising diffusion im-
|
1274 |
+
plicit models. In 9th International Conference on Learn-
|
1275 |
+
ing Representations, ICLR 2021, Virtual Event, Austria,
|
1276 |
+
May 3-7, 2021. OpenReview.net, 2021. URL https:
|
1277 |
+
//openreview.net/forum?id=St1giarCHLP.
|
1278 |
+
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K.,
|
1279 |
+
Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W.,
|
1280 |
+
and Kavukcuoglu, K. Wavenet: A generative model for
|
1281 |
+
raw audio. In The 9th ISCA Speech Synthesis Workshop,
|
1282 |
+
Sunnyvale, CA, USA, 13-15 September 2016, pp. 125.
|
1283 |
+
ISCA, 2016.
|
1284 |
+
URL http://www.isca-speech.
|
1285 |
+
org/archive/SSW_2016/abstracts/ssw9_
|
1286 |
+
DS-4_van_den_Oord.html.
|
1287 |
+
van den Oord, A., Vinyals, O., and Kavukcuoglu, K. Neural
|
1288 |
+
discrete representation learning.
|
1289 |
+
In Guyon, I., von
|
1290 |
+
Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R.,
|
1291 |
+
Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances
|
1292 |
+
in Neural Information Processing Systems 30: Annual
|
1293 |
+
Conference on Neural Information Processing Systems
|
1294 |
+
2017, December 4-9, 2017, Long Beach, CA, USA, pp.
|
1295 |
+
6306–6315, 2017.
|
1296 |
+
URL https://proceedings.
|
1297 |
+
neurips.cc/paper/2017/hash/
|
1298 |
+
7a98af17e63a0ac09ce2e96d03992fbc-Abstract.
|
1299 |
+
html.
|
1300 |
+
Villegas, R., Babaeizadeh, M., Kindermans, P., Moraldo,
|
1301 |
+
H., Zhang, H., Saffar, M. T., Castro, S., Kunze, J., and
|
1302 |
+
Erhan, D. Phenaki: Variable length video generation from
|
1303 |
+
open domain textual description. CoRR, abs/2210.02399,
|
1304 |
+
2022. doi: 10.48550/arXiv.2210.02399. URL https:
|
1305 |
+
//doi.org/10.48550/arXiv.2210.02399.
|
1306 |
+
Yang, D., Yu, J., Wang, H., Wang, W., Weng, C., Zou,
|
1307 |
+
Y., and Yu, D. Diffsound: Discrete diffusion model for
|
1308 |
+
text-to-sound generation. CoRR, abs/2207.09983, 2022.
|
1309 |
+
doi: 10.48550/arXiv.2207.09983. URL https://doi.
|
1310 |
+
org/10.48550/arXiv.2207.09983.
|
1311 |
+
Yu, B., Lu, P., Wang, R., Hu, W., Tan, X., Ye, W., Zhang,
|
1312 |
+
S., Qin, T., and Liu, T. Museformer: Transformer with
|
1313 |
+
fine- and coarse-grained attention for music generation.
|
1314 |
+
CoRR, abs/2210.10349, 2022a.
|
1315 |
+
doi: 10.48550/arXiv.
|
1316 |
+
2210.10349. URL https://doi.org/10.48550/
|
1317 |
+
arXiv.2210.10349.
|
1318 |
+
Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z.,
|
1319 |
+
Vasudevan, V., Ku, A., Yang, Y., Ayan, B. K., Hutchinson,
|
1320 |
+
B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, J.,
|
1321 |
+
and Wu, Y. Scaling autoregressive models for content-
|
1322 |
+
rich text-to-image generation. CoRR, abs/2206.10789,
|
1323 |
+
2022b. doi: 10.48550/arXiv.2206.10789. URL https:
|
1324 |
+
//doi.org/10.48550/arXiv.2206.10789.
|
1325 |
+
Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and
|
1326 |
+
Tagliasacchi, M.
|
1327 |
+
Soundstream: An end-to-end neu-
|
1328 |
+
ral audio codec.
|
1329 |
+
IEEE ACM Trans. Audio Speech
|
1330 |
+
Lang. Process., 30:495–507, 2022.
|
1331 |
+
doi:
|
1332 |
+
10.1109/
|
1333 |
+
TASLP.2021.3129994. URL https://doi.org/10.
|
1334 |
+
1109/TASLP.2021.3129994.
|
1335 |
+
A. Text Prompts

We list all the text prompts composed for the four common music genres in Table 3.

Genre = Electronic
– Drops, Kanine Remix, Darkzy, Drops Remixes, bass house, (Deluxe) (Remix) 3 of 4
– Electronic, Dance, EDM (Deluxe) (Remix) 3 of 4
– Electro House (Remix), 2023, 3 of 4
– Electro Swing Remix 2030 (Deluxe Edition) 3 of 4
– Future Bass, EDM (Remix) 3 of 4, Remix
– EDM (Deluxe) (Remix) 3 of 4
– EDM, Vocal, Relax, Remix, 2023, 8D Audio
– Hardstyle, Drop, 8D, Remix, High Quality, 2 of 4
– Dubstep Insane Drop Remix (Deluxe Edition), 2 of 4
– Drop, French 79, BPM Artist, Vol. 4, Electronica, 2016

Genre = Hip Hop
– Real Hip Hop, 2012, Lil B, Gods Father, escape room, 3 of 4
– C'est toujours pour ceux qui savent, French Hip Hop, 2018 (Deluxe), 3 of 4
– Dejando Claro, Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
– Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
– Alternative Hip Hop Oh-My, 2016, (Deluxe), 3 of 4
– Es Geht Mir Gut, German Hip Hop, 2016, (Deluxe), 3 of 4
– Italian Hip Hop 2022 (Deluxe Edition) 3 of 4
– RUN, Alternative Hip Hop, 2016, (Deluxe), 3 of 4
– Hip Hop, Rap Battle, 2018 (High Quality) (Deluxe Edition) 3 of 4
– Hip Hop Tech, Bandlez, Hot Pursuit, brostep, 3 of 4

Genre = Metal
– Death Metal, 2012, 3 of 4
– Heavy Death Metal (Deluxe Edition), 3 of 4
– Black Alternative Metal, The Pick of Death (Deluxe), 2006, 3 of 4
– Kill For Metal, Iron Fire, To The Grave, melodic metal, 3 of 4
– Melodic Metal, Iron Dust (Deluxe), 2006, 3 of 4
– Possessed Death Metal Stones (Deluxe), 2006, 3 of 4
– Black Metal Venom, 2006, 3 of 4
– The Heavy Death Metal War (Deluxe), 2006, 3 of 4
– Heavy metal (Deluxe Edition), 3 of 4
– Viking Heavy Death Metal (Deluxe), 2006, 3 of 4

Genre = Pop
– (Everything I Do), I Do It For You, Bryan Adams, The Best Of Me, canadian pop, 3 of 4
– Payphone, Maroon 5, Overexposed, Pop, 2021, 3 of 4
– 24K Magic, Bruno Mars, 24K Magic, dance pop, 3 of 4
– Who Is It, Michael Jackson, Dangerous, Pop (Deluxe), 3 of 4
– Forget Me, Lewis Capaldi, Forget Me, Pop Pop, 2022, 3 of 4
– Pop, Speak Now, Taylor Swift, 2014, (Deluxe), 3 of 4
– Pop Pop, Maroon 5, Overexposed, 2016, 3 of 4
– Pointless, Lewis Capaldi, Pointless, Pop, 2022, 3 of 4
– Saved, Khalid, American Teen, Pop, 2022, 3 of 4
– Deja vu, Fearless, Pop, 2020, (Deluxe), 3 of 4

Table 3. Text prompts composed for the four common music genres: electronic, hip hop, metal, and pop.
0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

0tFST4oBgHgl3EQfVziF/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c1df763e50deedcbaa0d71c29c258d1f20210413c23fb7a3fae62ed5203a2d4
+size 3801133

2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46373f1c355cc07ce4073650bc571a8b026c1c767106ce7914364e54ba9bf360
+size 9764909

2dFAT4oBgHgl3EQfDhxB/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1c7b20413ecf0ac352c50f07397bc87961fe81866fd9bfdb814d161a345a7dc
+size 101282

2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8b68b329821934aaa47752e5d4a747c90cb248da66efa0e9b1c02b918fa255c
+size 9633837

39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7bc6fe771b7bf37c8e7a2a6993eff74ec0d1c79e7731ce38b60d3512031c6507
+size 356770

39AyT4oBgHgl3EQfo_il/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbed80713a454b7b53e3a8e52645ae2d3208777132e8f643b15f0bae213d1c26
+size 137783
39E2T4oBgHgl3EQf6Aj8/content/tmp_files/2301.04197v1.pdf.txt ADDED
@@ -0,0 +1,856 @@
Disconnected and multiply connected spectra in the 2D attractive Hubbard model

Johan Carlström
Department of Physics, Stockholm University, 106 91 Stockholm, Sweden
(Dated: January 12, 2023)

Fermi gases and liquids display an excitation spectrum that is simply connected, ensuring closed Fermi surfaces. In strongly correlated systems like the cuprate superconductors, the existence of open sheets of Fermi surface known as Fermi arcs indicates a distinctly different topology of the spectrum, with no equivalent in Fermi liquid theory. Here, we demonstrate a generic mechanism by which correlation effects in fermionic systems can change the topology of the spectrum. Using diagrammatic Monte Carlo simulations, we demonstrate the existence of disconnected and multiply connected excitation spectra in the attractive Hubbard model in the BCS-BEC cross-over regime. These topologically nontrivial spectra are a prerequisite for Fermi arcs.
Landau's Fermi liquid theory [1] is the standard model through which we understand interacting electrons in normal metals. In this paradigm, electronic states evolve adiabatically with increasing interactions, so that there remains a direct correspondence between the states in a non-interacting Fermi gas and the quasi-particles of the interacting system. A key consequence of this relationship is that the excitation spectrum of the interacting system inherits the topology of the bands associated with the noninteracting state. In the absence of gap-closing points, the energy bands of Fermi gases are generally simply connected sets, and so, consequently, are the spectra of Fermi liquids. This, in turn, implies a Fermi surface that is closed (this point also holds with nodes in the spectrum). Strongly correlated systems often display phenomena that fall decidedly outside of the Fermi liquid regime. In the cuprates, superconductivity is nucleated from a pseudogap state with open sheets of Fermi surface, which persist over a wide range of doping levels [2]. The physical origin of these Fermi arcs remains highly contested.

It has been observed in the cuprates that superconducting fluctuations persist above the critical temperature [3–5], and it has been proposed that this fact may explain the origin of the pseudogap state [6]. This in turn raises key questions about the pairing regime, which also remains disputed: if the cuprates are BCS-like, then the fluctuating region should be understood in terms of a paired state without global phase coherence [7]. In the BEC limit, the electrons form bound pairs which give rise to a bosonic normal liquid at temperatures far above Tc [8]. The onset of superconductivity would then occur as these pairs condense at a much lower temperature. While these two scenarios are often both referred to by the term "preformed pairs", they are remarkably different. Between these two extrema lies an extensive BCS-BEC crossover regime [9].

A directly opposing point of view is that preformed pairs have no part in the emergence of Fermi arcs, and that the pseudogap and paired states are instead antagonistic to each other. ARPES imaging is claimed to show direct competition between superconductivity and a distinctly different order parameter that is associated with the pseudogap state [10, 11]. A candidate for this order parameter is provided by a breaking of translation symmetry [12], which is observed in STM imaging [13, 14].

Theoretically predicting the existence of Fermi arcs in model Hamiltonians is challenging due to a lack of reliable numerical techniques for strongly correlated fermions. Nonetheless, recent variational Monte Carlo calculations suggest that the pseudogap physics observed in the cuprates is at least qualitatively captured by the single-band Hubbard model. For Hubbard clusters up to 64 sites, Fermi arcs are observed at a carrier concentration of 6.25%, and remnants of these are present at 12.5% doping [15]. This may be compared to the cuprates, where pseudogap physics persists up to a carrier concentration of ∼20% [2, 16]. The existence of Fermi arcs in a simple model Hamiltonian like the Hubbard model is encouraging, since it may indicate that this is a generic phenomenon.

A second theoretical challenge is to qualitatively explain how Fermi liquid theory fails in strongly correlated systems, and to connect this insight with the emergence of Fermi arcs. Here, a key observation is that a simply connected excitation spectrum does not permit open sheets of Fermi surface. This relationship implies that the electronic state's adiabatic dependence on interaction strength must necessarily break down in such a way that the connectivity of the spectrum changes; see also Fig. 1.
In this work, we discuss how strong interactions can give rise to non-Fermi-liquid phases which are characterized by band fractionalization [17]. Using the attractive-interaction Hubbard model as an example, we demonstrate that the operators associated with these fractional bands exhibit vanishing phase spaces in parts of the Brillouin zone, which leads to disconnected or multiply connected excitation spectra. These topologically nontrivial spectra are a fundamental prerequisite for the existence of Fermi arcs.
Band fractionalization and spectral topology—To illustrate the breakdown of Fermi liquid theory, we consider the attractive Hubbard model (AHM), which is given by

H = \sum_{\langle ij \rangle \sigma} t\, c^\dagger_{i\sigma} c_{j\sigma} + \sum_i \left( U n_{i\downarrow} n_{i\uparrow} - \mu n_i \right), \quad U < 0.   (1)

arXiv:2301.04197v1 [cond-mat.str-el] 10 Jan 2023

Figure 1. Relationship between spectral topology and Fermi arcs. The multiply connected spectrum intersects the Fermi level on a set of open and disconnected lines which constitute Fermi arcs. By contrast, a simply connected spectrum must necessarily intersect the Fermi level on a set of closed lines. This implies that a topologically nontrivial spectrum is a prerequisite of Fermi arcs. [Panel labels: Spectrum; Fermi level; Fermi arc.]

Because of the interaction, the energy bands are generally split into two sub-bands [18], a phenomenon that is also referred to as band fractionalization [17]. For strong contact interaction, these sub-bands are generally singlon-like and doublon-like, respectively, prompting us to introduce the corresponding operators and associated spinors:

c^\dagger_{i\sigma} = s^\dagger_{i\sigma} + d^\dagger_{i\sigma}, \quad s^\dagger_{i\sigma} = c^\dagger_{i\sigma}(1 - n_{i\bar\sigma}), \quad d^\dagger_{i\sigma} = c^\dagger_{i\sigma} n_{i\bar\sigma},
\Psi^\dagger_{i\sigma} = \begin{pmatrix} s^\dagger_{i\sigma} & d^\dagger_{i\sigma} \end{pmatrix}, \quad \Psi_{i\sigma} = \begin{pmatrix} s_{i\sigma} \\ d_{i\sigma} \end{pmatrix}.   (2)

Here, s† and d† are the singlon and doublon creation operators, while σ̄ = −σ. We can then define a "quasi-particle" (QP) Green's function based on the outer product of the spinors:

\Gamma_\sigma(x_2 - x_1) = \langle T_\tau \Psi^\dagger_{i\sigma}(x_1) \otimes \Psi_{i\sigma}(x_2) \rangle,   (3)

from which the ordinary electronic Green's function is obtained by the summation

G_\sigma(x) = \sum_{\alpha\beta} \Gamma_{\alpha\beta\sigma}(x).   (4)

In the atomic limit, the QP Green's function is diagonal, with a frequency-space representation given by

\Gamma^A_\sigma(\omega) = \begin{pmatrix} \frac{1+e^{\mu}}{Z_A}\,\frac{1}{i\omega+\mu} & 0 \\ 0 & \frac{e^{\mu}+e^{2\mu-U}}{Z_A}\,\frac{1}{i\omega+\mu-U} \end{pmatrix}.   (5)

Here, the energy is for simplicity given in units of the temperature (corresponding to the case of unit temperature). The Green's function (5) resembles that of a two-component system, except that it is rescaled by two "quasiparticle weights". To pursue this analogy, we introduce the weight W according to

W = \begin{pmatrix} \frac{1+e^{\mu}}{Z_A} & 0 \\ 0 & \frac{e^{\mu}+e^{2\mu-U}}{Z_A} \end{pmatrix} = w_0 \sigma_0 + w_z \sigma_z,   (6)

where we note that (6) must satisfy

w_0 \geq |w_z|.   (7)

In the limit wz → w0, the system is effectively Gutzwiller projected, and doublons can be regarded as "forbidden". In this scenario, the doublon operators can be said to have a vanishing phase space, in the sense that they have a domain or codomain which does not overlap with the sub-space onto which we project. The same can be said about the singlon operator in the limit wz → −w0. In these cases, the doublon or singlon parts do not contribute to the Green's function, and thus not to the spectrum either.

We may then express the atomic Green's function (5) in terms of a reweighted two-component system according to

\Gamma^A_\sigma(\omega) = \frac{W}{i\omega - V}, \quad V = \left( \frac{U}{2} - \mu \right) \sigma_0 - \frac{U}{2} \sigma_z,   (8)

where V is the effective two-component Hamiltonian. Next, we note that the tunneling term may be written

t\, c^\dagger_{i\sigma} c_{j\sigma} = \Psi^\dagger_{i\sigma} K \Psi_{j\sigma}, \quad K = t(\sigma_0 + \sigma_x).   (9)

Thus, including the first correction of the strong-coupling expansion [19], we obtain a Green's function

\Gamma_\sigma(\omega) = \Gamma^A_\sigma(\omega) + \Gamma^A_\sigma(\omega) K(k) \Gamma^A_\sigma(\omega) + \ldots = \frac{1}{i\omega - V - W K(k)}\, W.   (10)

At this point, the effective two-component Hamiltonian He = V + WK(k) is no longer diagonal, and the dispersion thus mixes the singlon and doublon components. Additionally, He is non-Hermitian, and does not generally exhibit an orthonormal eigenbasis. However, due to a combination of PT symmetry and the condition (7), the eigenvalues remain real.
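To make the band mixing concrete, the following Python sketch (illustrative parameter values, not the paper's simulation code) constructs V, W, and K(k) for the square lattice and checks numerically that the eigenvalues of He = V + WK(k) are real even though He is non-Hermitian.

```python
import numpy as np

# Illustrative parameters, in units of the temperature (not the paper's values)
t, U, mu = 1.0, -5.0, -1.5

# Atomic partition function and the sub-band weights entering Eq. (6)
ZA = 1 + 2*np.exp(mu) + np.exp(2*mu - U)
w_s = (1 + np.exp(mu)) / ZA                  # singlon weight
w_d = (np.exp(mu) + np.exp(2*mu - U)) / ZA   # doublon weight
W = np.diag([w_s, w_d])

s0 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
V = (U/2 - mu)*s0 - (U/2)*sz                 # atomic Hamiltonian, Eq. (8)

def He(kx, ky):
    """Effective two-component Hamiltonian V + W K(k) on the square lattice."""
    eps = -2*t*(np.cos(kx) + np.cos(ky))     # dispersion generated by t
    K = eps*(s0 + sx)                        # tunneling matrix, Eq. (9)
    return V + W @ K

# Non-Hermitian, yet real eigenvalues: the product of off-diagonal entries is
# eps^2 * w_s * w_d >= 0, so the discriminant of the 2x2 problem is non-negative.
for kx, ky in [(0.0, 0.0), (1.1, 0.3), (np.pi, np.pi/2)]:
    ev = np.linalg.eigvals(He(kx, ky))
    assert np.allclose(ev.imag, 0.0, atol=1e-10)
```

The two eigenvalues give the singlon-like and doublon-like sub-bands; their spectral weights follow from the (generally non-orthogonal) left and right eigenvectors together with W.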
Due to the factor W, the spectral weight of the two sub-bands is generally not equal, and one of them may even vanish asymptotically. This point is central to the spectral topology: if we conduct a strong-coupling expansion to higher order, then we will find that the QP weight W becomes momentum dependent. If the phase space for a sub-band operator of the type (2) vanishes in part of the Brillouin zone, then so does the corresponding spectral weight, implying that the spectrum is no longer simply connected. Strong-coupling expansion by hand is, however, not feasible beyond first order, and to explore this concept we have to employ numerical techniques.
Numerical treatment—To test the preceding conjecture, we employ bold-line diagrammatic Monte Carlo simulations, specifically focusing on the attractive Hubbard model in the BCS-BEC cross-over regime. This method is based on stochastic sampling of Feynman-type graphs [20], and is unbiased in the sense that the only systematic source of error is truncation of the series. For a convergent series, asymptotically exact results are obtained directly in the macroscopic limit. To be able to address systems with strong interactions, we use a particular formulation known as strong-coupling diagrammatic Monte Carlo (SCDMC) [19, 21–24], where the diagrammatic elements are connected vertices of propagating electrons that are non-perturbative in U. The computational protocol employed here is outlined in detail in [19].
In SCDMC, the expansion parameter is the hopping integral
|
262 |
+
t. The principal observable that we compute is the polariza-
|
263 |
+
tion operator of the hopping integral, here denoted Πt(ω, k).
|
264 |
+
From the polarization operator, we obtain the dressed hopping
|
265 |
+
integral via the Bethe Salpiter equation:
|
266 |
+
˜t(ω, k) =
|
267 |
+
1
|
268 |
+
t−1(k) − Πt(ω, k).
|
269 |
+
(11)
|
270 |
+
We expand in the dressed hopping ˜t, while retaining only the
|
271 |
+
skeleton diagrams. By iterating until convergence, we obtain
|
272 |
+
a self-consistent solution for ˜t which implicitly takes into ac-
|
273 |
+
count certain classes of diagrams to infinite order.
|
274 |
+
The Greens function of the interacting system is closely re-
|
275 |
+
lated to the dressed hopping integral, and can be obtained from
|
276 |
+
the equation
|
277 |
+
G(ω, k) =
|
278 |
+
1
|
279 |
+
Π−1
|
280 |
+
t (ω, k) − tk
|
281 |
+
.
|
282 |
+
(12)
|
283 |
+
To the lowest order, the polarization operator is given by the
|
284 |
+
atomic-limit Greens function, meaning that eq. (10) is repro-
|
285 |
+
duced. We conduct a self-consistent summation of all dia-
|
286 |
+
grams to order 7 which permits us to asses convergence prop-
|
287 |
+
erties of the series–for a discussion, see Appendix I.
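The self-consistent dressing can be sketched as a fixed-point iteration of Eq. (11). The following is an illustrative sketch only, not the SCDMC implementation: the diagrammatically sampled polarization operator is replaced by a hypothetical callable `polarization`, and the linear mixing is an assumption added for stability.

```python
import numpy as np

def dress_hopping(t_k, polarization, n_iter=100, tol=1e-10, mixing=0.5):
    """Iterate Eq. (11), t~ = 1 / (t^{-1} - Pi[t~]), to self-consistency.

    t_k:          array of bare hopping amplitudes t(k) (assumed nonzero)
    polarization: callable mapping the dressed hopping to Pi(omega, k) on the
                  same grid; stands in for the sampled polarization operator
    """
    t_dressed = t_k.astype(complex).copy()
    for _ in range(n_iter):
        pi = polarization(t_dressed)
        t_new = 1.0 / (1.0 / t_k - pi)
        if np.max(np.abs(t_new - t_dressed)) < tol:
            return t_new
        # linear mixing of old and new solutions stabilizes the iteration
        t_dressed = mixing * t_new + (1.0 - mixing) * t_dressed
    return t_dressed

# toy polarization: a weak correction proportional to the dressed hopping
t_k = np.array([1.0, 0.8, 0.5])
sol = dress_hopping(t_k, lambda t: 0.1 * t)
# at the fixed point, sol satisfies sol = 1 / (1/t_k - 0.1*sol)
```

In a real simulation, each evaluation of `polarization` would itself be a Monte Carlo estimate over skeleton diagrams built from the current dressed hopping.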
We compute a discrete approximation for the spectrum using numerical analytic continuation [25]: First, we define a spectral reconstruction of the Green's function and a corresponding error metric according to

G_R(\tau, k) = \sum_{n=1}^{n_{\max}} A_n(k) \, \frac{e^{-\epsilon_n \tau}}{1 + e^{\beta \epsilon_n}}, \quad \tau < 0,  (13)

\Delta[k, \{A_n(k)\}] = \sqrt{\frac{1}{\beta} \int d\tau \, [G(\tau, k) - G_R(\tau, k)]^2}.  (14)

We use n_max = 121 as a compromise between accuracy and computational cost. To obtain the best estimate for the spectral function A(k), we minimize the error metric ∆ through a process of simulated annealing followed by a line-search technique: In the first stage, we use Monte Carlo to update {A_n(k)} with an acceptance ratio ∼ e^{−κ∆}, while successively increasing the inverse pseudo-temperature κ. In the second stage, we minimize ∆ using Newton-Raphson. This reduces the error only very slightly, but tends to result in a smoother spectrum.

From the spectrum, we obtain a (discretized) estimate for the density of states via the integral

\mathrm{dos}(\epsilon_n) = \int \frac{dk}{(2\pi)^D} \, A_n(k).  (15)

The normalization of Eq. (13) is such that the summations over A_n and dos(ϵ_n) are unity.
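The stochastic stage of this procedure can be illustrated with a minimal annealing loop. Everything below is a toy sketch under stated assumptions: the grid sizes, the weight-transfer move, the annealing schedule, and the synthetic two-peak spectrum are illustrative choices, not those of the actual simulations (which use n_max = 121 and Monte Carlo data for G).

```python
import numpy as np

rng = np.random.default_rng(0)

beta, n_tau, n_eps = 4.0, 64, 21          # illustrative sizes only
tau = np.linspace(-beta, 0.0, n_tau, endpoint=False)
eps = np.linspace(-6.0, 6.0, n_eps)       # energy grid epsilon_n

# kernel of Eq. (13): K[i, n] = exp(-eps_n * tau_i) / (1 + exp(beta * eps_n))
K = np.exp(-np.outer(tau, eps)) / (1.0 + np.exp(beta * eps))

def delta(A, G):
    """Error metric of Eq. (14), discretized over the tau grid."""
    return np.sqrt(np.mean((G - K @ A) ** 2))

# synthetic "data": G(tau) generated from a known two-peak spectrum
A_true = np.zeros(n_eps); A_true[5] = 0.5; A_true[15] = 0.5
G = K @ A_true

# simulated annealing: transfer weight between two bins, accept with
# probability ~ exp(-kappa * dDelta), raising kappa between stages
A = np.full(n_eps, 1.0 / n_eps)
for kappa in 10.0 ** np.arange(1, 7):
    for _ in range(4000):
        i, j = rng.integers(n_eps, size=2)
        dw = rng.uniform(0.0, A[i])
        B = A.copy(); B[i] -= dw; B[j] += dw   # preserves sum(A) = 1
        dD = delta(B, G) - delta(A, G)
        if dD < 0 or rng.random() < np.exp(-kappa * dD):
            A = B
final_err = delta(A, G)
```

The weight-transfer move keeps the spectrum normalized and non-negative by construction; a Newton-Raphson polish of the accepted {A_n}, as described above, would follow this stage.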
We consider the Hubbard model with an attractive contact interaction given by U = −5|t|, at temperatures t/T = 1 and t/T = 4. We examine the cases of half-filling and a particle density of ⟨n̂⟩ ≈ 1.88. The results of our simulations are summarized in Fig. 2.

At half-filling and the higher temperature of t/T = 1, we find that the density of states (a) has a minimum at the Fermi level, though the system remains gapless. The momentum-resolved particle density (b) attains minima and maxima at ∼ 0.4 and ∼ 1.6. The spectral density (c) exhibits two smeared sub-bands, with densities that are visibly momentum-dependent. Reducing the temperature, the density of states (d) vanishes at the Fermi level, indicating that the system is gapped against fermionic excitations. The particle-density extrema (e) are now close to 0 and 2.0, respectively. The spectral density (f) is sharply peaked, with a weight that is strongly dependent on momentum.

If we also increase the particle density, the upper sub-band is strongly suppressed (g). The system is now completely filled in a large fraction of the Brillouin zone (h), and the lower sub-band carries most of the spectral weight (i).
The momentum-dependent spectral weights can be understood from the fact that the two sub-bands originate in singlon-like and doublon-like degrees of freedom: For sufficiently strong attraction, the Hubbard model prefers to have occupation numbers of 0 or 2. Singly occupied sites are situated at high energy, implying that the upper sub-band is singlon-like. At small momenta, k ≈ (0, 0), the particle density is smaller, and the singlon operator has a substantial phase space allowing for a high spectral density. Near k = (π, π), the particle density approaches 2, meaning that the phase space for the singlon operator vanishes, along with the spectral weight of this sub-band. For the doublon-like component, the situation is the opposite, with a vanishing spectral density when the density is small.
To quantify the suppression of the spectral density, we define the spectral weight of a sub-band according to

\rho(k) = \sum_{n=n_{\min}}^{n_{\max}} A_n(k),  (16)

where the range of indices n should be taken to include the entire sub-band, but nothing else. At a temperature of t/T = 4 and half-filling, the system remains gapped, so that we can identify the upper sub-band with positive energies and the lower sub-band with negative energies. Doping the system, the two sub-bands are still well separated, with the density of states vanishing at ϵ ≈ 1.5t, suggesting we use this energy as the dividing point. At the higher temperature, the two sub-bands overlap. We can still calculate spectral weights based on ϵ = 0 as the dividing point, though this approximation may slightly underestimate the spectral weight at its minimum, while overestimating it at the maximum.
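In a discretized spectrum, Eq. (16) is simply a partial sum of A_n(k) over the energy bins belonging to one sub-band. A minimal sketch, with synthetic spectral data standing in for the simulated A_n(k); the dividing energy 1.5 mirrors the choice ϵ = 1.5t described above, in units where t = 1:

```python
import numpy as np

def subband_weight(A, eps, eps_lo, eps_hi):
    """Eq. (16): sum A_n(k) over bins whose energy lies in [eps_lo, eps_hi].

    A:   spectral weights, shape (n_k, n_eps), normalized to sum to 1 per k
    eps: energy grid, shape (n_eps,)
    """
    mask = (eps >= eps_lo) & (eps <= eps_hi)
    return A[:, mask].sum(axis=1)

# synthetic two-sub-band spectrum on a small k-grid (illustrative only)
eps = np.linspace(-8.0, 8.0, 41)
n_k = 5
A = np.zeros((n_k, eps.size))
for ik in range(n_k):
    w_singlon = ik / (n_k - 1)                            # grows with k index
    A[ik, np.argmin(np.abs(eps - 3.0))] = w_singlon       # upper, singlon-like
    A[ik, np.argmin(np.abs(eps + 3.0))] = 1.0 - w_singlon # lower, doublon-like

rho_upper = subband_weight(A, eps, 1.5, eps.max())  # weight above the divide
rho_lower = subband_weight(A, eps, eps.min(), 1.5)
# for each k, rho_upper + rho_lower recovers the full normalization of 1
```

The caveat in the text carries over directly: when the sub-bands overlap, the weight assigned to each side of the dividing energy is only approximate.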
The spectral weight of the singlon-like component is shown in Fig. 3. At a temperature of t/T = 1 and half-filling (a), the singlon-like component is suppressed to ≈ 16% at
Figure 2. Spectra and equation of state for the attractive Hubbard model with U = −5|t|, at temperatures of t/T = 1 (a-c) and t/T = 4 (d-i). Panels (a-f) correspond to half-filling, while (g-i) correspond to ⟨n̂⟩ ≈ 1.88. At high temperature, the spectrum (a) reveals a suppression of the density of states at the Fermi level. The particle density (b) exhibits a minimum at k = (0, 0) with ⟨n̂⟩ ≈ 0.4 and a maximum at k = (π, π) with ⟨n̂⟩ ≈ 1.6. The momentum-resolved spectral density (c), taken along the dashed line in (b), reveals two sub-bands. Decreasing the temperature, the density of states (d) vanishes at the Fermi level, implying that the system is gapped with respect to fermionic excitations. The particle density (e) now has minima and maxima close to 0 and 2.0, respectively. The spectral density (f) reveals sharp families of excitations with a spectral weight that is strongly dependent on momentum and almost vanishes in part of the Brillouin zone. Increasing the particle density to ⟨n̂⟩ ≈ 1.88, the density of states (g) reveals a large peak that is doublon-like, and a strongly suppressed peak corresponding to singlons. The peaks are well separated, and the density of states vanishes at ϵ ≈ 1.5t. The spectral density reveals a large doublon-like peak, while the singlon peak has a presence mainly near k = (0, 0). This data was obtained using an expansion order O = 6.
k ≈ (π, π). At a temperature of t/T = 4 (b), this minimum drops below 1%. The strong temperature dependence is consistent with the notion of a vanishing phase space for the singlon operator: At k = (π, π), the system has a preference for double occupation, and the singlon operator can only act in the presence of thermal fluctuations. As the temperature is reduced, these are exponentially suppressed together with the spectral weight. Asymptotically, this results in a multiply connected spectrum which lacks states in part of the Brillouin zone. Increasing the particle density (c), the spectral weight attains a maximum at k = (0, 0) while asymptotically vanishing between these. The result is a disconnected spectrum.

It should be noted that we do not reach the point where the spectrum completely vanishes, since we are limited to finite temperatures. Diagrammatic Monte Carlo generally requires that the series converges, and this is often not the case at sufficiently low temperatures. Real condensed-matter systems are also generally realized at finite temperature. However, thermal fluctuations are exponentially suppressed with the inverse temperature. If the relevant energy scale is large compared to the temperature, then we can for all practical purposes regard the system as being in the asymptotic limit where the spectral density vanishes in part of the Brillouin zone. Once the spectrum has a nontrivial connectivity, there are no topological obstacles to an intersection with the Fermi level that is an open line in 2D, as shown in Fig. 1, or an open surface in 3D.

Figure 3. Spectral weight of the singlon-like sub-band, obtained from equation (16). At half-filling and a temperature of t/T = 1 (a), the weight is suppressed near k = (π, π) and reaches a minimum of ≈ 16%. Reducing the temperature (b), this minimum falls below 1%. Increasing the particle density to ⟨n̂⟩ ≈ 1.88 (c), the spectrum retains a finite weight near k = (0, 0) but almost vanishes elsewhere. The strong suppression of the spectral weight at certain momenta can be understood from a vanishing phase space of singlon-like excitations.
Conclusions—In non-Fermi liquids, band fractionalization effectively splits the electron energy into a distribution of quasiparticle energies. The spectral weight of these sub-bands is determined by the phase space of the corresponding operators, implying that it is generally momentum-dependent. In strongly correlated systems, this phase space may vanish to exponential accuracy, creating voids in parts of the Brillouin zone which change the topology of the excitation spectrum. This effect is a prerequisite for Fermi arcs, and spectral topology should therefore be regarded as an essential property of strongly correlated phases.

This work was supported by the Swedish Research Council (VR) through grant 2018-03882. Computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Linköping, Sweden.
[1] L.D. Landau, E.M. Lifshitz, and L.P. Pitaevskii, Course of Theoretical Physics: Statistical Physics, Part 2, Vol. 9 (1980).
[2] Su-Di Chen, Makoto Hashimoto, Yu He, Dongjoon Song, Ke-Jun Xu, Jun-Feng He, Thomas P. Devereaux, Hiroshi Eisaki, Dong-Hui Lu, Jan Zaanen, and Zhi-Xun Shen, "Incoherent strange metal sharply bounded by a critical doping in Bi2212," Science 366, 1099–1102 (2019), https://www.science.org/doi/pdf/10.1126/science.aaw8850.
[3] Takeshi Kondo, W. Malaeb, Y. Ishida, T. Sasagawa, H. Sakamoto, Tsunehiro Takeuchi, T. Tohyama, and S. Shin, "Point nodes persisting far beyond Tc in Bi2212," Nature Communications 6, 7699 (2015).
[4] Yu He, Su-Di Chen, Zi-Xiang Li, Dan Zhao, Dongjoon Song, Yoshiyuki Yoshida, Hiroshi Eisaki, Tao Wu, Xian-Hui Chen, Dong-Hui Lu, Christoph Meingast, Thomas P. Devereaux, Robert J. Birgeneau, Makoto Hashimoto, Dung-Hai Lee, and Zhi-Xun Shen, "Superconducting fluctuations in overdoped Bi2Sr2CaCu2O8+δ," Phys. Rev. X 11, 031068 (2021).
[5] N. Bergeal, J. Lesueur, M. Aprili, G. Faini, J. P. Contour, and B. Leridon, "Pairing fluctuations in the pseudogap state of copper-oxide superconductors probed by the Josephson effect," Nature Physics 4, 608–611 (2008).
[6] Y. I. Seo, W. J. Choi, Shin-ichi Kimura, and Yong Seung Kwon, "Evidence for a preformed Cooper pair model in the pseudogap spectra of a Ca10(Pt4As8)(Fe2As2)5 single crystal with a nodal superconducting gap," Scientific Reports 9, 3987 (2019).
[7] John Sous, Yu He, and Steven A. Kivelson, "Absence of a BCS-BEC crossover in the cuprate superconductors," (2022).
[8] Shengtao Jiang, Long Zou, and Wei Ku, "Non-Fermi-liquid scattering against an emergent Bose liquid: Manifestations in the kink and other exotic quasiparticle behavior in the normal-state cuprate superconductors," Phys. Rev. B 99, 104507 (2019).
[9] N. Harrison and M. K. Chan, "Magic gap ratio for optimally robust fermionic condensation and its implications for high-Tc superconductivity," Phys. Rev. Lett. 129, 017001 (2022).
[10] Makoto Hashimoto, Rui-Hua He, Kiyohisa Tanaka, Jean-Pierre Testaud, Worawat Meevasana, Rob G. Moore, Donghui Lu, Hong Yao, Yoshiyuki Yoshida, Hiroshi Eisaki, Thomas P. Devereaux, Zahid Hussain, and Zhi-Xun Shen, "Particle–hole symmetry breaking in the pseudogap state of Bi2201," Nature Physics 6, 414–418 (2010).
[11] Makoto Hashimoto, Elizabeth A. Nowadnick, Rui-Hua He, Inna M. Vishik, Brian Moritz, Yu He, Kiyohisa Tanaka, Robert G. Moore, Donghui Lu, Yoshiyuki Yoshida, Motoyuki Ishikado, Takao Sasagawa, Kazuhiro Fujita, Shigeyuki Ishida, Shinichi Uchida, Hiroshi Eisaki, Zahid Hussain, Thomas P. Devereaux, and Zhi-Xun Shen, "Direct spectroscopic evidence for phase competition between the pseudogap and superconductivity in Bi2Sr2CaCu2O8+δ," Nature Materials 14, 37–42 (2015).
[12] J.-H. Ma, Z.-H. Pan, F. C. Niestemski, M. Neupane, Y.-M. Xu, P. Richard, K. Nakayama, T. Sato, T. Takahashi, H.-Q. Luo, L. Fang, H.-H. Wen, Ziqiang Wang, H. Ding, and V. Madhavan, "Coexistence of competing orders with two energy gaps in real and momentum space in the high-temperature superconductor Bi2Sr2−xLaxCuO6+δ," Phys. Rev. Lett. 101, 207002 (2008).
[13] W. D. Wise, M. C. Boyer, Kamalesh Chatterjee, Takeshi Kondo, T. Takeuchi, H. Ikuta, Yayu Wang, and E. W. Hudson, "Charge-density-wave origin of cuprate checkerboard visualized by scanning tunnelling microscopy," Nature Physics 4, 696–699 (2008).
[14] J. E. Hoffman, E. W. Hudson, K. M. Lang, V. Madhavan, H. Eisaki, S. Uchida, and J. C. Davis, "A four unit cell periodic pattern of quasi-particle states surrounding vortex cores in Bi2Sr2CaCu2O8+δ," Science 295, 466–469 (2002).
[15] P. Rosenberg, D. Sénéchal, A.-M. S. Tremblay, and M. Charlebois, "Fermi arcs from dynamical variational Monte Carlo," (2022).
[16] S. Badoux, W. Tabis, F. Laliberté, G. Grissonnanche, B. Vignolle, D. Vignolles, J. Béard, D. A. Bonn, W. N. Hardy, R. Liang, N. Doiron-Leyraud, Louis Taillefer, and Cyril Proust, "Change of carrier density at the pseudogap critical point of a cuprate superconductor," Nature 531, 210–214 (2016).
[17] Masatoshi Imada and Takafumi J. Suzuki, "Excitons and dark fermions as origins of Mott gap, pseudogap and superconductivity in cuprate superconductors - general concept and basic formalism based on gap physics," Journal of the Physical Society of Japan 88, 024701 (2019), https://doi.org/10.7566/JPSJ.88.024701.
[18] K. A. Chao, J. Spałek, and A. M. Oleś, "Canonical perturbation expansion of the Hubbard model," Phys. Rev. B 18, 3453–3464 (1978).
[19] Johan Carlström, "Strong-coupling diagrammatic Monte Carlo technique for correlated fermions and frustrated spins," Phys. Rev. B 103, 195147 (2021).
[20] Kris Van Houcke, Evgeny Kozik, N. Prokof'ev, and B. Svistunov, "Diagrammatic Monte Carlo," Physics Procedia 6, 95–105 (2010).
[21] Johan Carlström, "Spin-charge transformation of lattice fermion models: duality approach for diagrammatic simulation of strongly correlated systems," Journal of Physics: Condensed Matter 29, 385602 (2017).
[22] Johan Carlström, "Diagrammatic Monte Carlo procedure for the spin-charge transformed Hubbard model," Phys. Rev. B 97, 075119 (2018).
[23] Johan Carlström, "Spectral shift technique for strongly correlated lattice fermions," (2021), arXiv:2111.05877 [cond-mat.str-el].
[24] Johan Carlström, "In situ controllable magnetic phases in doped twisted bilayer transition metal dichalcogenides," Phys. Rev. Research 4, 043126 (2022).
[25] Olga Goulko, Andrey S. Mishchenko, Lode Pollet, Nikolay Prokof'ev, and Boris Svistunov, "Numerical analytic continuation: Answers to well-posed questions," Phys. Rev. B 95, 014102 (2017).
APPENDIX I

To assess how truncation of the series affects the results, we compare the density of states and spectral function for the cases reported in the article at different expansion orders. In Fig. 4 we show the case of half-filling and temperatures t/T = 1 and t/T = 4 for expansion orders O = 5, 6, 7. At the higher temperature, we observe that the dos changes very little, though a small correction at ϵ = 0 is visible. The spectrum is qualitatively very similar, and we conclude that the impact of truncation is very small.

At the lower temperature, we see some changes in the shape of the dos when increasing the order from 5 to 6, though the system consistently remains gapped. The spectra show a weight that does not completely vanish at O = 5, but is strongly suppressed at higher orders. At O = 7, we begin to see noise in the spectrum as a result of the computational cost associated with expansions to high order. For this data set, we can conclude that truncation of the series has a limited quantitative impact, but it does not affect any of the conclusions derived in the paper.

In Fig. 5, we see the dos and spectra for the doped case at expansion orders O = 5, 6, 7. In this scenario, truncation of the series has no impact visible to the naked eye, and we can conclude that the result is virtually exact.

In conclusion, we find that the diagrammatic Monte Carlo simulations reported here accurately capture the physics of the attractive Hubbard model. The results are qualitatively unaffected by truncation of the series, though a small quantitative uncertainty remains for one of the data sets.
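The convergence test described here amounts to comparing the same observable at successive truncation orders and checking that the change shrinks. A sketch of such a diagnostic, with made-up dos curves standing in for the simulated data:

```python
import numpy as np

def truncation_change(obs_by_order):
    """Relative change of an observable between successive expansion orders.

    obs_by_order: dict mapping order O -> observable array (e.g. the dos on a
    fixed energy grid). Returns {O: max|obs_O - obs_{O-1}| / max|obs_O|}.
    """
    orders = sorted(obs_by_order)
    return {
        o: np.max(np.abs(obs_by_order[o] - obs_by_order[prev]))
           / np.max(np.abs(obs_by_order[o]))
        for prev, o in zip(orders, orders[1:])
    }

# made-up dos curves mimicking a convergent series: corrections shrink
# geometrically with the expansion order
eps = np.linspace(-10.0, 10.0, 201)
base = np.exp(-((np.abs(eps) - 4.0) ** 2))      # two-peak "dos" shape
dos = {o: base + 0.5 ** o * np.sin(eps) for o in (5, 6, 7)}

changes = truncation_change(dos)
# a shrinking changes[7] < changes[6] signals convergence under truncation
```

A diagnostic of this kind quantifies the qualitative comparison made in Figs. 4 and 5, but does not replace inspecting the spectra themselves.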
Figure 4. Convergence of the series at half-filling. The left column corresponds to an expansion order O = 5, the center to O = 6, and the right to O = 7. (a-c) give the dos at a temperature of t/T = 1, while (d-f) give the corresponding spectra. (g-i) give the dos at a temperature of t/T = 4, while (j-l) give the corresponding spectra. At the higher temperature, the corrections when changing the expansion order are very small, though a slight shift in the dos at the Fermi level can be observed when comparing O = 5 (a) and O = 6 (b). At the lower temperature, we do see quantitative differences in the dos between orders 5 (g) and 6 (h), while the correction at order 7 (i) is smaller. The small peaks in the dos near the Fermi level in (g) are reflected in a suppressed fractionalized sub-band visible in (j). At orders 6 and 7, this fractionalized sub-band vanishes.
Figure 5. Convergence of the series in the strongly doped case. The density is ⟨n̂⟩ ≈ 1.88 and the temperature is t/T = 4. The left column (a,d) corresponds to an expansion order O = 5, the center column to O = 6, and the right column to O = 7. The dos (a-c) does not change visibly with expansion order, and neither does the spectrum (d-f). We can therefore conclude that the observables have converged.
|
827 |
+
15
|
828 |
+
10
|
829 |
+
5
|
830 |
+
5
|
831 |
+
0
|
832 |
+
10
|
833 |
+
20
|
834 |
+
30
|
835 |
+
40
|
836 |
+
5015
|
837 |
+
10
|
838 |
+
5
|
839 |
+
5
|
840 |
+
10
|
841 |
+
0
|
842 |
+
10
|
843 |
+
20
|
844 |
+
30
|
845 |
+
40
|
846 |
+
5015
|
847 |
+
10
|
848 |
+
5
|
849 |
+
5
|
850 |
+
10
|
851 |
+
0
|
852 |
+
10
|
853 |
+
20
|
854 |
+
30
|
855 |
+
40
|
856 |
+
50
|
39E2T4oBgHgl3EQf6Aj8/content/tmp_files/load_file.txt
ADDED
@@ -0,0 +1,402 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf,len=401
|
2 |
+
page_content='Disconnected and multiply connected spectra in the 2D attractive Hubbard model Johan Carlström Department of Physics, Stockholm University, 106 91 Stockholm, Sweden (Dated: January 12, 2023) Fermi gases and liquids display an excitation spectrum that is simply connected, ensuring closed Fermi sur- faces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
3 |
+
page_content=' In strongly correlated systems like the cuprate superconductors, the existence of open sheets of Fermi surface known as Fermi arcs indicate a distinctly different topology of the spectrum with no equivalent in Fermi liquid theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
4 |
+
page_content=' Here, we demonstrate a generic mechanism by which correlation effects in fermionic systems can change the topology of the spectrum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
5 |
+
page_content=' Using diagrammatic Monte Carlo simulations, we demonstrate the existence of disconnected and multiply connected excitation spectra in the attractive Hubbard model in the BCS-BEC cross-over regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
6 |
+
page_content=' These topologically nontrivial spectra are a prerequisite for Fermi arcs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
7 |
+
page_content=' Landaus Fermi liquid theory [1] is the standard model through which we understand interacting electrons in normal metals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
8 |
+
page_content=' In this paradigm, electronic states evolve adiabatically with increasing interactions so that there remains a direct cor- respondence between the states in a non-interacting Fermi gas, and the quasi-particles of the interacting system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
9 |
+
page_content=' A key con- sequence of this relationship is that the excitation spectrum of the interacting system inherits the topology of the bands as- sociated with the noninteracting state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
10 |
+
page_content=' In the absence of gap- closing points, the energy bands of Fermi gases are generally simply connected sets, and so are consequently the spectra of Fermi liquids.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
11 |
+
page_content=' This, in turn, implies a Fermi surface that is closed (this point also holds with nodes in the spectrum).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
12 |
+
page_content=' Strongly correlated systems often display phenomena that fall decidedly outside of the Fermi liquid regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
13 |
+
page_content=' In the cuprates, superconductivity is nucleated from a pseudogap state with open sheets of Fermi surface, which persist over a wide range of doping levels [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
14 |
+
page_content=' The physical origin of these Fermi arcs remains highly contested.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
15 |
+
page_content=' It has been observed in the cuprates that superconducting fluctuations persist above the critical temperature [3–5], and it has been proposed that this fact may explain the origin of the pseudogap state [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
16 |
+
page_content=' This in turn raises key questions about the pairing regime, which also remains disputed: If the cuprates are BCS-like, then the fluctuating region should be understood in terms of a paired state without global phase coherence [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
17 |
+
page_content=' In the BEC limit, the electrons form bound pairs which give rise to a bosonic normal liquid at tempera- tures far above Tc [8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
18 |
+
page_content=' The onset of superconductivity would then occur as these pairs condense at a much lower temper- ature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
19 |
+
page_content=' While these two scenarios are often both referred to by the term “preformed pairs”, they are remarkably different.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
20 |
+
page_content=' Between these two extrema lies the an extensive BCS-BEC crossover regime [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
21 |
+
A directly opposing point of view is that preformed pairs have no part in the emergence of Fermi arcs, and that the pseudogap and paired states are instead antagonistic to each other. ARPES imaging is claimed to show direct competition between superconductivity and a distinctly different order parameter that is associated with the pseudogap state [10, 11]. A candidate for this order parameter is provided by a breaking of translation symmetry [12], which is observed in STM imaging [13, 14].
Theoretically predicting the existence of Fermi arcs in model Hamiltonians is challenging due to a lack of reliable numerical techniques for strongly correlated fermions. Nonetheless, recent variational Monte Carlo calculations suggest that the pseudogap physics observed in the cuprates is at least qualitatively captured by the single-band Hubbard model. For Hubbard clusters up to 64 sites, Fermi arcs are observed at a carrier concentration of 6.25%, and remnants of these are present at 12.5% doping [15]. This may be compared to the cuprates, where pseudogap physics persists up to a carrier concentration of ∼ 20% [2, 16]. The existence of Fermi arcs in a simple model Hamiltonian like the Hubbard model is encouraging since it may indicate that this is a generic phenomenon.
A second theoretical challenge is to qualitatively explain how Fermi liquid theory fails in strongly correlated systems, and to connect this insight with the emergence of Fermi arcs. Here, a key observation is that a simply connected excitation spectrum does not permit open sheets of Fermi surface. This relationship implies that the electronic state's adiabatic dependence on interaction strength must necessarily break down in such a way that the connectivity of the spectrum changes, see also Fig. 1.
In this work, we discuss how strong interactions can give rise to non-Fermi-liquid phases which are characterized by band fractionalization [17]. Using the attractive-interaction Hubbard model as an example, we demonstrate that the operators associated with these fractional bands exhibit vanishing phase spaces in parts of the Brillouin zone, which leads to disconnected or multiply connected excitation spectra. These topologically nontrivial spectra are a fundamental prerequisite for the existence of Fermi arcs.
Band fractionalization and spectral topology—To illustrate the breakdown of Fermi liquid theory, we consider the attractive Hubbard model (AHM), which is given by

H = \sum_{\langle ij \rangle \sigma} t\, c^\dagger_{i\sigma} c_{j\sigma} + \sum_i \left( U n_{i\downarrow} n_{i\uparrow} - \mu n_i \right), \qquad U < 0. \qquad (1)

Because of the interaction, the energy bands are generally split into two sub-bands [18], a phenomenon that is also referred to as band fractionalization [17].
For strong contact interaction, these sub-bands are generally singlon-like and doublon-like respectively, prompting us to introduce the corresponding operators and associated spinors:

c^\dagger_{i\sigma} = s^\dagger_{i\sigma} + d^\dagger_{i\sigma}, \quad s^\dagger_{i\sigma} = c^\dagger_{i\sigma}(1 - n_{i\bar\sigma}), \quad d^\dagger_{i\sigma} = c^\dagger_{i\sigma} n_{i\bar\sigma},
\Psi^\dagger_{i\sigma} = \begin{pmatrix} s^\dagger_{i\sigma} & d^\dagger_{i\sigma} \end{pmatrix}, \quad \Psi_{i\sigma} = \begin{pmatrix} s_{i\sigma} \\ d_{i\sigma} \end{pmatrix}. \qquad (2)

Figure 1. Relationship between spectral topology and Fermi arcs. The multiply connected spectrum intersects the Fermi level on a set of open and disconnected lines which constitute Fermi arcs. By contrast, a simply connected spectrum must necessarily intersect the Fermi level on a set of closed lines. This implies that a topologically nontrivial spectrum is a prerequisite of Fermi arcs.

arXiv:2301.04197v1 [cond-mat.str-el] 10 Jan 2023
Here, s† and d† are the singlon and doublon creation operators, while \bar\sigma = -\sigma. We can then define a "quasi-particle" (QP) Green's function based on the outer product of the spinors:

\Gamma_\sigma(x_2 - x_1) = \langle T_\tau \Psi^\dagger_{i\sigma}(x_1) \otimes \Psi_{i\sigma}(x_2) \rangle, \qquad (3)

from which the ordinary electronic Green's function is obtained by the summation

G_\sigma(x) = \sum_{\alpha\beta} \Gamma_{\alpha\beta\sigma}(x). \qquad (4)

In the atomic limit, the QP Green's function is diagonal, with a frequency-space representation given by

\Gamma^A_\sigma(\omega) = \begin{pmatrix} \frac{1+e^\mu}{Z_A} \frac{1}{i\omega+\mu} & 0 \\ 0 & \frac{e^\mu+e^{2\mu-U}}{Z_A} \frac{1}{i\omega+\mu-U} \end{pmatrix}. \qquad (5)

Here, the energy is for simplicity given in units of the temperature (corresponding to the case of unit temperature). The Green's function (5) resembles that of a two-component system, except that it is rescaled by two "quasiparticle weights".
To pursue this analogy we introduce the weight W according to

W = \begin{pmatrix} \frac{1+e^\mu}{Z_A} & 0 \\ 0 & \frac{e^\mu+e^{2\mu-U}}{Z_A} \end{pmatrix} = w_0 \sigma_0 + w_z \sigma_z, \qquad (6)

where we note that (6) must satisfy

w_0 \ge |w_z|. \qquad (7)
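The inequality (7) can be checked numerically. The sketch below assumes unit temperature and that Z_A is the atomic-limit grand partition function Z_A = 1 + 2e^µ + e^{2µ−U}, consistent with the numerators in Eq. (6); the helper name `singlon_doublon_weights` is ours.

```python
import numpy as np

def singlon_doublon_weights(mu, U):
    # Atomic-limit Boltzmann weights at unit temperature: empty site -> 1,
    # each singly occupied state -> e^mu, doubly occupied -> e^(2*mu - U).
    ZA = 1 + 2*np.exp(mu) + np.exp(2*mu - U)
    w_s = (1 + np.exp(mu)) / ZA                  # singlon-like entry of W
    w_d = (np.exp(mu) + np.exp(2*mu - U)) / ZA   # doublon-like entry of W
    # W = w0*sigma_0 + wz*sigma_z  <=>  W = diag(w_s, w_d)
    return (w_s + w_d) / 2, (w_s - w_d) / 2

for mu in np.linspace(-4.0, 4.0, 17):
    w0, wz = singlon_doublon_weights(mu, U=-5.0)
    assert w0 >= abs(wz)          # condition (7)
    assert abs(w0 - 0.5) < 1e-12  # the diagonal entries of W sum to one
```

Since the two diagonal entries of W sum to one, w_0 = 1/2 identically, and (7) reduces to |w_z| ≤ 1/2.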
In the limit w_z → w_0, the system is effectively Gutzwiller projected, and doublons can be regarded as "forbidden". In this scenario, the doublon operators can be said to have a vanishing phase space in the sense that they have a domain or codomain which does not overlap with the sub-space on which we project. The same can be said about the singlon operator in the limit w_z → −w_0. In these cases, the doublon or singlon parts do not contribute to the Green's function, and thus not to the spectrum either.
We may then express the atomic Green's function (5) in terms of a reweighted two-component system according to

\Gamma^A_\sigma(\omega) = \frac{W}{i\omega - V}, \qquad V = \left(\frac{U}{2} - \mu\right)\sigma_0 - \frac{U}{2}\sigma_z, \qquad (8)

where V is the effective two-component Hamiltonian. Next, we note that the tunneling term may be written

t\, c^\dagger_{i\sigma} c_{j\sigma} = \Psi^\dagger_{i\sigma} K \Psi_{j\sigma}, \qquad K = t(\sigma_0 + \sigma_x). \qquad (9)

Thus, including the first correction of the strong-coupling expansion [19], we obtain a Green's function

\Gamma_\sigma(\omega) = \Gamma^A_\sigma(\omega) + \Gamma^A_\sigma(\omega) K(k) \Gamma^A_\sigma(\omega) + \ldots = \frac{1}{i\omega - V - W K(k)}\, W. \qquad (10)
At this point, the effective two-component Hamiltonian H_e = V + WK(k) is no longer diagonal, and the dispersion thus mixes the singlon and doublon components. Additionally, H_e is non-Hermitian, and does not generally exhibit an orthonormal eigenbasis. However, due to a combination of PT symmetry and the condition (7), the eigenvalues remain real.
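The reality of the eigenvalues can be verified directly for the first-order effective Hamiltonian H_e = V + WK(k). The sketch below assumes the square-lattice dispersion ε_k = 2t(cos kx + cos ky) for the Fourier transform of the hopping matrix; the parameter values are illustrative.

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

t, U, mu = 1.0, -5.0, -2.5  # attractive interaction, energies in units of T

# Atomic-limit quasiparticle weight W of Eq. (6) and potential V of Eq. (8)
ZA = 1 + 2*np.exp(mu) + np.exp(2*mu - U)
W = np.diag([(1 + np.exp(mu))/ZA, (np.exp(mu) + np.exp(2*mu - U))/ZA])
V = (U/2 - mu)*s0 - (U/2)*sz

for kx in np.linspace(-np.pi, np.pi, 25):
    for ky in np.linspace(-np.pi, np.pi, 25):
        eps = 2*t*(np.cos(kx) + np.cos(ky))   # square-lattice dispersion (assumption)
        He = V + W @ (eps*(s0 + sx))          # non-Hermitian effective Hamiltonian
        ev = np.linalg.eigvals(He)
        assert np.max(np.abs(ev.imag)) < 1e-10  # eigenvalues stay real
```

The 2×2 discriminant works out to (H_11 − H_22)² + 4ε_k² w_s w_d, which is non-negative whenever both diagonal entries of W are positive, so real eigenvalues are guaranteed here.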
Due to the factor W, the spectral weight of the two sub-bands is generally not equal, and one of them may even vanish asymptotically. This point is central to the spectral topology: If we conduct a strong-coupling expansion to higher order, then we will find that the QP weight W becomes momentum-dependent. If the phase space for a sub-band operator of the type (2) vanishes in part of the Brillouin zone, then so does the corresponding spectral weight, implying that the spectrum is no longer simply connected. Strong-coupling expansion by hand is however not feasible beyond first order, and to explore this concept we have to employ numerical techniques.
Numerical treatment—To test the preceding conjecture, we employ bold-line diagrammatic Monte Carlo simulations, specifically focusing on the attractive Hubbard model in the BCS-BEC crossover regime. This method is based on stochastic sampling of Feynman-type graphs [20], and is unbiased in the sense that the only systematic source of error is truncation of the series. For a convergent series, asymptotically exact results are obtained directly in the macroscopic limit. To be able to address systems with strong interactions we use a particular formulation known as strong-coupling diagrammatic Monte Carlo (SCDMC) [19, 21–24], where the diagrammatic elements are connected vertices of propagating electrons that are non-perturbative in U. The computational protocol employed here is outlined in detail in [19].
In SCDMC, the expansion parameter is the hopping integral t. The principal observable that we compute is the polarization operator of the hopping integral, here denoted \Pi_t(\omega, k). From the polarization operator, we obtain the dressed hopping integral via the Bethe-Salpeter equation:

\tilde t(\omega, k) = \frac{1}{t^{-1}(k) - \Pi_t(\omega, k)}. \qquad (11)

We expand in the dressed hopping \tilde t, while retaining only the skeleton diagrams. By iterating until convergence, we obtain a self-consistent solution for \tilde t which implicitly takes into account certain classes of diagrams to infinite order.
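The structure of this self-consistency loop can be illustrated with a scalar toy model at a single (ω, k) point, where the linear ansatz Π_t = g·\tilde t is our own stand-in for the diagrammatically sampled polarization, not the actual SCDMC observable:

```python
# Toy scalar fixed-point iteration of the Bethe-Salpeter update, Eq. (11).
def dressed_hopping(t=1.0, g=0.2, tol=1e-12, max_iter=1000):
    td = t  # start from the bare hopping
    for _ in range(max_iter):
        new = 1.0 / (1.0/t - g*td)   # t_dressed = 1 / (t^-1 - Pi[t_dressed])
        if abs(new - td) < tol:
            return new
        td = new
    raise RuntimeError("no convergence")

td = dressed_hopping()
# The converged value satisfies the self-consistency condition exactly:
assert abs(td - 1.0/(1.0 - 0.2*td)) < 1e-9
```

For these parameters the iteration converges to the stable root of the quadratic fixed-point equation; in the actual simulation the same update is performed on the full (ω, k)-dependent objects.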
The Green's function of the interacting system is closely related to the dressed hopping integral, and can be obtained from the equation

G(\omega, k) = \frac{1}{\Pi_t^{-1}(\omega, k) - t_k}. \qquad (12)

To the lowest order, the polarization operator is given by the atomic-limit Green's function, meaning that Eq. (10) is reproduced. We conduct a self-consistent summation of all diagrams to order 7, which permits us to assess the convergence properties of the series; for a discussion, see Appendix I.
We compute a discrete approximation for the spectrum using numerical analytic continuation [25]: First, we define a spectral reconstruction of the Green's function and a corresponding error metric according to

G_R(\tau, k) = \sum_{n=1}^{n_{\max}} A_n(k)\, \frac{e^{-\epsilon_n \tau}}{1 + e^{\beta\epsilon_n}}, \qquad \tau < 0, \qquad (13)

\Delta[k, \{A_n(k)\}] = \sqrt{\frac{1}{\beta} \int d\tau\, [G(\tau, k) - G_R(\tau, k)]^2}. \qquad (14)

We use n_{\max} = 121 as a compromise between accuracy and computational cost. To obtain the best estimate for the spectral function A(k), we minimize the error metric \Delta through a process of simulated annealing followed by a line-search technique: In the first stage, we use Monte Carlo to update \{A_n(k)\} with an acceptance ratio \sim e^{-\kappa\Delta}, while successively increasing the inverse pseudo temperature \kappa. In the second stage, we minimize \Delta using Newton-Raphson. This reduces the error only very slightly, but tends to result in a smoother spectrum.
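A self-contained toy version of the annealing stage might look as follows. The energy grid, move size, and annealing schedule are illustrative choices of ours, and a synthetic two-peak spectrum propagated through the kernel of Eq. (13) stands in for the simulated Green's function data:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, n_eps, n_tau = 4.0, 41, 64
eps = np.linspace(-6.0, 6.0, n_eps)                   # discrete energy grid
tau = np.linspace(-beta, 0.0, n_tau, endpoint=False)  # tau < 0, as in Eq. (13)
kernel = np.exp(-eps[None, :]*tau[:, None]) / (1.0 + np.exp(beta*eps[None, :]))

# Synthetic "data": a sharp two-peak spectrum pushed through the same kernel.
A_true = np.zeros(n_eps); A_true[12], A_true[30] = 0.6, 0.4
G_data = kernel @ A_true

def delta(A):
    # Discretized error metric of Eq. (14)
    return np.sqrt(np.mean((G_data - kernel @ A)**2))

A = np.full(n_eps, 1.0/n_eps)  # flat, normalized starting spectrum
A_best, d_best, d_cur, kappa = A.copy(), delta(A), delta(A), 1.0
for step in range(60000):
    i, j = rng.integers(n_eps, size=2)
    shift = rng.uniform(0.0, min(A[i], 0.05))
    trial = A.copy(); trial[i] -= shift; trial[j] += shift  # keeps A >= 0, sum(A) = 1
    d = delta(trial)
    if d < d_cur or rng.random() < np.exp(-kappa*(d - d_cur)):
        A, d_cur = trial, d            # Metropolis acceptance ~ e^(-kappa*Delta)
        if d < d_best:
            A_best, d_best = trial, d
    kappa *= 1.0002                    # slowly raise the inverse pseudo-temperature
```

The weight-shuffling move preserves positivity and the unit normalization of the spectrum by construction, so no explicit constraint handling is needed during the anneal.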
From the spectrum, we obtain a (discretized) estimate for the density of states via the integral

\mathrm{dos}(\epsilon_n) = \int \frac{dk}{(2\pi)^D}\, A_n(k). \qquad (15)

The normalization of Eq. (13) is such that the summations over A_n and \mathrm{dos}(\epsilon_n) are unity.
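On a finite momentum grid, the Brillouin-zone integral in Eq. (15) reduces to an average over grid points, and the unit normalization carries over to the density of states. A minimal sketch with hypothetical spectral data:

```python
import numpy as np

# Hypothetical spectral data A[n, ikx, iky] on an L x L momentum grid,
# normalized so that the sum over n equals 1 at every k, as for Eq. (13).
L, n_eps = 16, 11
rng = np.random.default_rng(0)
A = rng.random((n_eps, L, L))
A /= A.sum(axis=0, keepdims=True)   # enforce per-k normalization

# Eq. (15): dk/(2*pi)^D over the BZ becomes 1/(L*L) on the discrete grid.
dos = A.mean(axis=(1, 2))
assert np.isclose(dos.sum(), 1.0)   # dos inherits the unit normalization
```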
We consider the Hubbard model with an attractive contact interaction given by U = −5|t|, at temperatures t/T = 1 and t/T = 4. We examine the cases of half-filling and a particle density of ⟨n̂⟩ ≈ 1.88. The results of our simulations are summarized in Fig. 2.
At half-filling and a higher temperature of t/T = 1, we find that the density of states (a) has a minimum at the Fermi level, though the system remains gapless. The momentum-resolved particle density (b) attains minima and maxima at ∼ 0.4 and ∼ 1.6. The spectral density (c) exhibits two smeared sub-bands, with densities that are visibly momentum-dependent. Reducing the temperature, the density of states (d) vanishes at the Fermi level, indicating that the system is gapped against fermionic excitations. The particle density extrema (e) are now close to 0 and 2.0 respectively. The spectral density (f) is sharply peaked, with a weight that is strongly dependent on momentum. If we also increase the particle density, then the upper sub-band is strongly suppressed as a result (g). The system is now completely filled in a large fraction of the Brillouin zone (h), and the lower sub-band carries most of the spectral weight (i).
The momentum-dependent spectral weights can be understood from the fact that the two sub-bands originate in singlon-like and doublon-like degrees of freedom: For sufficiently strong attraction, the Hubbard model prefers to have occupation numbers of 0 or 2. Singly occupied sites are situated at high energy, implying that the upper sub-band is singlon-like. At small momenta, k ≈ (0, 0), the particle density is smaller, and the singlon operator has a substantial phase space allowing for a high spectral density. Near k = (π, π), the particle density approaches 2, meaning that the phase space for the singlon operator vanishes, along with the spectral weight of this sub-band. For the doublon-like component, the situation is the opposite, with a vanishing spectral density when the density is small.
To quantify the suppression of the spectral density, we define the spectral weight of a sub-band according to

\rho(k) = \sum_{n=n_{\min}}^{n_{\max}} A_n(k), \qquad (16)

where the range of indices n should be taken to include the entire sub-band, but nothing else.
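As an illustration of Eq. (16), the sketch below partitions a hypothetical two-peak spectrum at a dividing energy; the grid and peak shapes are invented for the example, and the dividing energy is a choice (ε = 0 at half-filling, ∼ 1.5t for the doped case discussed below):

```python
import numpy as np

# Hypothetical discretized spectrum A[n] at one momentum point, on an
# energy grid eps[n]; lower peak plays the doublon-like sub-band, upper
# peak the singlon-like one.
eps = np.linspace(-6.0, 6.0, 121)
A = np.exp(-(eps + 3)**2) + 0.3*np.exp(-(eps - 3)**2)
A /= A.sum()   # unit normalization, as for Eq. (13)

divide = 0.0
rho_lower = A[eps < divide].sum()   # doublon-like sub-band weight, Eq. (16)
rho_upper = A[eps >= divide].sum()  # singlon-like sub-band weight
assert np.isclose(rho_lower + rho_upper, 1.0)
```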
At a temperature of t/T = 4 and half-filling, the system remains gapped so that we can identify the upper sub-band with positive energies and the lower sub-band with negative energies. Doping the system, the two sub-bands are still well separated, with the density of states vanishing at ϵ ≈ 1.5t, suggesting we use this energy as the dividing point. At the higher temperature, the two sub-bands overlap. We can still calculate spectral weights based on ϵ = 0 as our dividing point, though this approximation may slightly underestimate the spectral weight at its minimum, while overestimating it at the maximum. The spectral weight of the singlon-like component is shown in Fig. 3.
page_content=' At a temperature of t/T = 1 and half-filling (a), the singlon-like component is suppressed to ≈ 16% at 4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
122 |
+
page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
123 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
124 |
+
page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
125 |
+
page_content='4 12 12 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
126 |
+
page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
127 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
128 |
+
page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
129 |
+
page_content='4 Dos 12 (b) (a) (c) 12 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
130 |
+
page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
131 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
132 |
+
page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
133 |
+
page_content='4 Dos 12 12 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
134 |
+
page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
135 |
+
page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
136 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
137 |
+
page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
138 |
+
page_content='4 Dos Dos 15 15 0 (e) (d) (f) Spectral density Spectral density 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
139 |
+
page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
140 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
141 |
+
page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
142 |
+
page_content='4 Dos 12 12 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
143 |
+
page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
144 |
+
page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
145 |
+
page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
146 |
+
page_content='8 Dos 15 15 0 (h) (g) (i) Spectral density Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
147 |
+
Source: /home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf

Figure 2. Spectra and equation of state for the attractive Hubbard model with U = −5|t|, at temperatures of t/T = 1 (a-c) and t/T = 4 (d-i). Panels (a-f) correspond to half-filling, while (g-i) correspond to ⟨ˆn⟩ ≈ 1.88. At high temperature, the spectrum (a) reveals a suppression of the density of states at the Fermi level. The particle density (b) exhibits a minimum at k = (0, 0) with ⟨ˆn⟩ ≈ 0.4 and a maximum at k = (π, π) with ⟨ˆn⟩ ≈ 1.6. The momentum-resolved spectral density (c), taken along the dashed line in (b), reveals two sub-bands. Decreasing the temperature, the density of states (d) vanishes at the Fermi level, implying that the system is gapped with respect to fermionic excitations. The particle density (e) now has minima and maxima close to 0 and 2.0 respectively. The spectral density (f) reveals sharp families of excitations with a spectral weight that is strongly dependent on momentum and almost vanishes in part of the Brillouin zone. Increasing the particle density to ⟨ˆn⟩ ≈ 1.88, the density of states (g) reveals a large peak that is doublon-like, and a much suppressed peak corresponding to singlons. The peaks are well separated, and the density of states vanishes at ϵ ≈ 1.5t. The spectral density reveals a large doublon-like peak, though the singlon peak has a presence mainly near k = (0, 0). This data was obtained using an expansion order O = 6.
… k ≈ (π, π). At a temperature of t/T = 4 (b), this minimum drops below 1%. The strong temperature dependence is consistent with the notion of a vanishing phase space for the singlon operator: at k = (π, π), the system has a preference for double occupation, and the singlon operator can only act in the presence of thermal fluctuations. As the temperature is reduced, these are exponentially suppressed together with the spectral weight. Asymptotically, this results in a multiply connected spectrum which lacks states in part of the Brillouin zone.
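The exponential suppression invoked above can be written schematically; the form below is our shorthand, where Δ(k) denotes an assumed momentum-dependent activation energy for singlon excitations, not a quantity defined in this excerpt:

```latex
w_s(\mathbf{k}, T) \sim e^{-\Delta(\mathbf{k})/T}, \qquad \Delta(\mathbf{k}) > 0,
```

so that at momenta where Δ(k) > 0 the singlon weight vanishes as T → 0, carving the voids in the Brillouin zone discussed in the text.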
Increasing the particle density (c), the spectral weight attains a maximum at k = (0, 0) while asymptotically vanishing in between. The result is a disconnected spectrum. It should be noted that we do not reach the point where the spectrum completely vanishes, since we are limited to finite temperatures. Diagrammatic Monte Carlo generally requires that the series converges, and this is often not the case at sufficiently low temperatures. Real condensed-matter systems are also generally realized at finite temperature. However, thermal fluctuations are exponentially suppressed with the inverse temperature. If the relevant energy scale is large compared to the temperature, then we can for all practical purposes regard the system as being in the asymptotic limit where the spectral density vanishes in part of the Brillouin zone.

Figure 3. Spectral weight of the singlon-like sub-band, obtained from equation (16). At half-filling and a temperature of t/T = 1 (a), the weight is suppressed near k = (π, π) and reaches a minimum of ≈ 16%. Reducing the temperature (b), this minimum falls below 1%. Increasing the particle density to ⟨ˆn⟩ ≈ 1.88 (c), the spectrum retains a finite weight near k = (0, 0) but almost vanishes elsewhere. The strong suppression of the spectral weight at certain momenta can be understood from a vanishing phase space of singlon-like excitations.
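The Boltzmann suppression invoked above can be checked with a toy numerical estimate. This is our illustration with an assumed activation scale, not the paper's equation (16):

```python
import math

# Toy estimate: if the singlon weight near k = (pi, pi) is
# Boltzmann-suppressed, w(T) = w0 * exp(-(Delta/|t|) * (t/T)), then a
# modest scale Delta ~ |t| already turns the ~16% minimum at t/T = 1
# into well under 1% at t/T = 4.
def singlon_weight(w0, delta_over_t, t_over_T):
    """Suppressed weight for prefactor w0 and activation scale Delta (in |t|)."""
    return w0 * math.exp(-delta_over_t * t_over_T)

delta_over_t = 1.0                        # assumed, for illustration only
w0 = 0.16 * math.exp(delta_over_t * 1.0)  # calibrated to ~16% at t/T = 1
w_cold = singlon_weight(w0, delta_over_t, 4.0)
print(f"minimum weight at t/T = 4: {w_cold:.3%}")
```

With these assumptions, cooling from t/T = 1 to t/T = 4 multiplies the weight by e^{-3} ≈ 0.05, consistent with the drop from ≈ 16% to below 1% quoted for Figure 3.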
Once the spectrum has a nontrivial connectivity, there are no topological obstacles to an intersection with the Fermi level that is an open line in 2D, as shown in Fig. 1, or an open surface in 3D.

Conclusions—In non-Fermi liquids, band fractionalization effectively splits the electron energy into a distribution of quasiparticle energies. The spectral weight of these sub-bands is determined by the phase space of the corresponding operators, implying that it is generally momentum dependent. In strongly correlated systems, this phase space may, to exponential accuracy, vanish, creating voids in parts of the Brillouin zone which change the topology of the excitation spectrum. This effect is a prerequisite for Fermi arcs, and spectral topology should therefore be regarded as an essential property of strongly correlated phases.

This work was supported by the Swedish Research Council (VR) through grant 2018-03882. Computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Linköping, Sweden.
|
339 |
+
page_content=' Suzuki, “Excitons and dark fermions as origins of mott gap, pseudogap and su- perconductivity in cuprate superconductors - general con- cept and basic formalism based on gap physics,” Jour- nal of the Physical Society of Japan 88, 024701 (2019), 20 3:06 https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
340 |
+
page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
341 |
+
page_content='7566/JPSJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
342 |
+
page_content='88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
343 |
+
page_content='024701.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
344 |
+
page_content=' [18] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
345 |
+
page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
346 |
+
page_content=' Chao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
347 |
+
page_content=' Spałek, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
348 |
+
page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
349 |
+
page_content=' Ole´s, “Canonical perturbation expansion of the hubbard model,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
350 |
+
page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
351 |
+
page_content=' B 18, 3453–3464 (1978).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
352 |
+
page_content=' [19] Johan Carlström, “Strong-coupling diagrammatic monte carlo technique for correlated fermions and frustrated spins,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
353 |
+
page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
354 |
+
page_content=' B 103, 195147 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
355 |
+
page_content=' [20] Kris Van Houcke, Evgeny Kozik, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
356 |
+
page_content=' Prokof’ev, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
357 |
+
page_content=' Svis- tunov, “Diagrammatic monte carlo,” Physics Procedia 6, 95– 105 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
358 |
+
page_content=' [21] Johan Carlström, “Spin-charge transformation of lattice fermion models: duality approach for diagrammatic simulation of strongly correlated systems,” Journal of Physics: Condensed Matter 29, 385602 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
359 |
+
page_content=' [22] Johan Carlström, “Diagrammatic monte carlo procedure for the spin-charge transformed hubbard model,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
360 |
+
page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
361 |
+
page_content=' B 97, 075119 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
362 |
+
page_content=' [23] Johan Carlström, “Spectral shift technique for strongly cor- related lattice fermions,” (2021), arXiv:2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
363 |
+
page_content='05877 [cond- mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
364 |
+
page_content='str-el].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
365 |
+
page_content=' [24] Johan Carlström, “In situ controllable magnetic phases in doped twisted bilayer transition metal dichalcogenides,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
366 |
+
page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
367 |
+
page_content=' Research 4, 043126 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
368 |
+
page_content=' [25] Olga Goulko, Andrey S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
369 |
+
page_content=' Mishchenko, Lode Pollet, Nikolay Prokof’ev, and Boris Svistunov, “Numerical analytic contin- uation: Answers to well-posed questions,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
370 |
+
page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
371 |
+
page_content=' B 95, 014102 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'}
|
372 |
+
APPENDIX I

To assess how truncation of the series affects the results, we compare the density of states (dos) and spectral function for the cases reported in the article at different expansion orders. In Fig. 4 we show the case of half-filling and temperatures t/T = 1 and t/T = 4 for expansion orders O = 5, 6, 7. At the higher temperature, we observe that the dos changes very little, though a small correction at ϵ = 0 is visible. The spectrum is qualitatively very similar, and we conclude that the impact of truncation is very small. At the lower temperature, we see some changes in the shape of the dos when increasing the order from 5 to 6, though the system consistently remains gapped. The spectra show a weight that does not completely vanish at O = 5, but is strongly suppressed at higher orders. At O = 7, we begin to see noise in the spectrum as a result of the computational cost associated with expansions to high order. For this data set, we can conclude that truncation of the series has a limited quantitative impact, but it does not affect any of the conclusions derived in the paper.

In Fig. 5, we show the dos and spectra for the doped case at expansion orders O = 5, 6, 7. In this scenario, truncation of the series has no impact visible to the naked eye, and we can conclude that the result is virtually exact.

In conclusion, we find that the diagrammatic Monte Carlo simulations reported do accurately capture the physics of the attractive Hubbard model. The results are qualitatively unaffected by truncation of the series, yet a small quantitative uncertainty remains for one of the data sets.

Figure 4. Convergence of the series at half-filling. The left column corresponds to an expansion order O = 5, the center to O = 6, and the right to O = 7. (a-c) give the dos at a temperature of t/T = 1, while (d-f) give the corresponding spectra. (g-i) give the dos at a temperature of t/T = 4, while (j-l) give the corresponding spectra. At the higher temperature, the corrections when changing the expansion order are very small, though a slight shift in the dos at the Fermi level can be observed when comparing O = 5 (a) and O = 6 (b). At the lower temperature, we do see quantitative differences in the dos between orders 5 (g) and 6 (h), while the correction at order 7 (i) is smaller. The small peaks in the dos near the Fermi level in (g) are reflected in a suppressed fractionalized sub-band visible in (j). At orders 6 and 7, this fractionalized sub-band vanishes.

Figure 5. Convergence of the series in the strongly doped case. The density is ⟨n̂⟩ ≈ 1.88 and the temperature is t/T = 4. The left column (a,d) corresponds to an expansion order O = 5, the center column to O = 6, and the right column to O = 7. The dos (a-c) does not change visibly with expansion order, and neither does the spectrum (d-f). We can therefore conclude that the observables have converged.
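The order-by-order comparisons described above amount to checking that an observable changes little between successive expansion orders. A minimal sketch of such a convergence check, using synthetic stand-in arrays rather than the paper's data:

```python
import numpy as np

# Sketch of the convergence check described above: quantify how much an
# observable (here a stand-in density-of-states array) changes between two
# successive expansion orders. The arrays are synthetic placeholders.

def relative_change(obs_low_order, obs_high_order):
    """Maximum change between orders, relative to the peak of the observable."""
    lo = np.asarray(obs_low_order)
    hi = np.asarray(obs_high_order)
    return np.max(np.abs(hi - lo)) / np.max(np.abs(lo))

dos_O5 = np.array([0.10, 0.40, 0.90, 0.40, 0.10])
dos_O6 = np.array([0.10, 0.42, 0.88, 0.42, 0.10])
print(relative_change(dos_O5, dos_O6))  # small value suggests convergence
```

In practice one would apply such a measure to the computed dos and spectra at orders O and O + 1, and treat the series as converged once the relative change falls below the statistical noise of the simulation.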
|
4NFRT4oBgHgl3EQfozeT/content/tmp_files/2301.13611v1.pdf.txt
ADDED
@@ -0,0 +1,1176 @@
Bringing Stellar Evolution & Feedback Together
Summary of proposals from the Lorentz Center Workshop, 2022

Co-authors: Sam Geen1,2, Poojan Agrawal3, Paul A. Crowther4, B.W. Keller5,18, Alex de Koter1,6, Zsolt Keszthelyi1,7, Freeke van de Voort8, Ahmad A. Ali9, Frank Backs1, Lars Bonne24, Vittoria Brugaletta10, Annelotte Derkink1, Sylvia Ekström11, Yvonne A. Fichtner12, Luca Grassitelli12, Ylva Götberg23, Erin R. Higgins13, Eva Laplace14, Kong You Liow9, Marta Lorenzo15,27, Anna F. McLeod16,18, Georges Meynet11, Megan Newsome25,26, G. André Oliva18, Varsha Ramachandran19, Martin P. Rey20, Steven Rieder11, Emilio Romano-Díaz12, Gautham Sabhahit13, Andreas A.C. Sander19, Rafia Sarwar21, Hanno Stinshoff10,21, Mitchel Stoop1, Dorottya Szécsi21, Maxime Trebitsch22, Jorick S. Vink13, Ethan Winch13 (Author contact details and full list of institutions at end of paper)

Keywords: Stellar physics: Stellar atmospheres, Stellar evolution, Stellar processes; Stellar populations; Interstellar medium: nebulae, Protostars, Supernova remnants, Stellar-interstellar interactions; Interdisciplinary astronomy

Abstract: Stars strongly impact their environment, and shape structures on all scales throughout the universe, in a process known as "feedback". Due to the complexity of both stellar evolution and the physics of larger astrophysical structures, there remain many unanswered questions about how feedback operates, and what we can learn about stars by studying their imprint on the wider universe. In this white paper, we summarize discussions from the Lorentz Center meeting 'Bringing Stellar Evolution and Feedback Together' in April 2022, and identify key areas where further dialogue can bring about radical changes in how we view the relationship between stars and the universe they live in.
1 Introduction on Scales: From the Birth of Stars to the Wider Universe

Astrophysics spans many orders of magnitude in both physical distances and times. Researchers from different fields have varying definitions of what are considered "small" and "large" scales. Typically, "small" refers to processes smaller than those typically resolved in studies, whether observational or theoretical, while "large" typically refers to scales outside the boundaries of the problem domain. In Figure 1 we show a diagram depicting the range of relevant spatial and temporal scales, from stars to galaxies and beyond, in order to define and motivate discussions around the boundaries of the domains of study considered in this work.

The galactic scale, i.e. the largest physical scale considered here below the "cosmological" scale, is about 1 to 100s of kpc. A spiral galaxy like our Milky Way contains many (giant) molecular clouds of length scale 10–100 pc, whose dense cores can form star clusters at scales of 0.1–10 pc. Within those dense cores, the gravitational collapse that results in the formation of individual stars takes place. Protostars are typically surrounded by outflows and accretion disks of sizes that range between 1 and 1000 au. On the smallest physical scales considered here, we can consider the (intra)stellar structure: within the star itself, we have the nuclear burning in the core, convection zones, envelope and stellar surface at 0.1–10 R⊙.

In numerical simulations, the connection between small and large scales is crucial because it is computationally expensive to set up and perform simulations that encompass the whole range of scales relevant to astrophysics within a reasonable amount of computing time. Despite this, an understanding of how the scales couple is important. Various physical processes connect the smallest and largest scales, with flows moving to both smaller and larger scales, often driven by the action of stars, in a cycle of material termed "feedback".

During the star formation process at stellar scales, the outflows launched by the disk and jet can influence the surrounding material. Ionizing radiation, stellar winds and eventual supernovae produced by the massive stars shape their natal molecular clouds and the interstellar medium, impacting subsequent generations of star formation. In this work we focus primarily on processes from stars after their formation phase ends, although protostellar outflows can be important both in themselves (Federrath et al. 2014) and in concert with other feedback processes (Kuiper & Hosokawa 2018) as stars form in molecular clouds (Grudić et al. 2022; Verliat et al. 2022). Feedback processes often
act in concert, e.g. in the case of supernova feedback efficiency increasing if dense star-forming environments are dispersed by pre-supernova feedback (Geen et al. 2015; Lucas et al. 2020).

Figure 1: The different length scales of star formation in log-parsec. (Diagram panels: spatial scales from stellar structure through circumstellar, cloud core, cloud, galactic and cosmological scales in log(size [pc]); timescales in log(time [yr]) from disks (~100 kyr) through stellar evolution at 1–100 Myr to > 1 Gyr.)

arXiv:2301.13611v1 [astro-ph.SR] 31 Jan 2023
Several techniques have been developed to bridge the different length scales. From larger to smaller scales, zoomed-in simulations are performed, in which regions from larger-scale simulations are taken as initial conditions and the resolution of those regions is enhanced (e.g. Carlberg & Keating 2022; Dobbs et al. 2022; Rey & Starkenburg 2022). This allows the regions of interest to be followed and studied more closely. For example, zoom-in simulations of dense cloud cores can be used to follow their gravitational collapse into individual stars. On the other hand, prescriptions are used to import the physics of smaller scales to the larger scales (e.g. Gutcke et al. 2021). This is generally done using empirical relations, analytical solutions, or parametric tables. Some recent simulations employ multiple techniques to bridge the different scales (e.g. Rieder et al. 2022).

Critical tasks for the useful presentation and communication of the results of numerical simulations are: the determination of reliable intervals where a given quantity is valid or expected (e.g., the densities or angular momentum content of dense cores expected from simulations at the cloud scales), and the expression, whenever possible, of results that impact neighbouring scales using analytical formulae so that they can be used as prescriptions (e.g., evolutionary tracks for protostars that are used in larger-scale simulations).
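Prescriptions of this kind are typically stored as tables and interpolated at run time by the larger-scale simulation. A minimal sketch, with placeholder values that are not taken from any published model:

```python
import numpy as np

# Sketch: evaluating a tabulated small-scale prescription inside a
# larger-scale simulation via interpolation. The table entries below are
# placeholders for illustration, not values from any published model.

mass_grid = np.array([10.0, 20.0, 40.0, 80.0])           # stellar mass [Msun]
vwind_grid = np.array([1500.0, 2000.0, 2500.0, 3000.0])  # terminal velocity [km/s]

def wind_velocity(mass_msun):
    """Linearly interpolate the tabulated prescription at a given mass."""
    return float(np.interp(mass_msun, mass_grid, vwind_grid))

print(wind_velocity(30.0))  # falls between the 20 and 40 Msun table rows
```

In production codes the same idea is usually extended to multi-dimensional tables (e.g. mass and metallicity), but the lookup-and-interpolate pattern is the same.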
With the advance of observational facilities (e.g. the Extremely Large Telescope, the James Webb Space Telescope, Athena) with higher angular resolution, we come closer to resolving astrophysical structures on large and small scales for regions in the Local Group and beyond. Many of these facilities will be able to resolve individual (lower-mass) stars in regions where, previously, we were only able to probe the large-scale structures. Observations and simulations of large and small scales in the (near) future will provide us with essential knowledge to connect these scales.
2 Introduction to Feedback: The Physics Connecting the Scales

Once the protostellar phase has ended, stars impact their surroundings in a number of ways. We highlight some of the key processes by which stellar evolution drives feedback into the interstellar medium and beyond.

2.1 Stellar Winds

Stellar winds refer to the ejection of matter from a star's surface, driven by radiation pressure on the gas in the star's atmosphere. Stellar winds impact their surroundings through a mixture of the mass-loss rate Ṁ and terminal velocity v_w, i.e. the velocity that the stellar wind reaches once it is fully accelerated by radiation pressure.

Observations by Groenewegen et al. (1989), Prinja et al. (1990), Crowther et al. (2016) and others confirm that these winds leave massive stars with terminal velocities that exceed 1000 km/s. This shocks the gas around the star to millions of degrees Kelvin, creating hot bubbles that drive strong flows into the interstellar medium (Weaver et al. 1977).

The rate of deposition of kinetic energy by stellar winds, (1/2)Ṁv_w², is an important quantity in stellar feedback, where the energy in the wind bubble accumulates over time (Weaver et al. 1977). In the mode where stellar winds cool efficiently through thermal conduction or, more plausibly, turbulent mixing (e.g. Lancaster et al. 2021), the momentum deposition rate Ṁv_w becomes more important. This mode is considerably weaker at driving large-scale flows since stored energy is lost. We examine in further detail how stellar wind bubbles impact nearby star-forming regions in Section 5.
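For a sense of the numbers involved, both deposition rates can be evaluated directly from Ṁ and v_w; a minimal sketch with representative O-star values (illustrative, not taken from any specific source):

```python
# Sketch: kinetic energy deposition rate L_w = (1/2) * Mdot * v_w**2 and
# momentum deposition rate p_dot = Mdot * v_w for a stellar wind, in CGS
# units. The example values are representative of an O-type star, not
# taken from any specific source in this paper.

MSUN_G = 1.989e33   # solar mass [g]
YR_S = 3.156e7      # year [s]
KMS_CMS = 1.0e5     # km/s -> cm/s

def wind_feedback_rates(mdot_msun_per_yr, v_wind_km_s):
    """Return (energy deposition rate [erg/s], momentum rate [g cm/s^2])."""
    mdot = mdot_msun_per_yr * MSUN_G / YR_S   # mass-loss rate [g/s]
    v_w = v_wind_km_s * KMS_CMS               # terminal velocity [cm/s]
    return 0.5 * mdot * v_w**2, mdot * v_w

# Illustrative O star: Mdot ~ 1e-6 Msun/yr, v_w ~ 2000 km/s
L_w, p_dot = wind_feedback_rates(1.0e-6, 2000.0)
print(f"L_w = {L_w:.2e} erg/s, p_dot = {p_dot:.2e} g cm s^-2")
```

Because the energy rate scales as v_w² while the momentum rate scales as v_w, the distinction between energy-driven and momentum-driven bubbles matters most for the fastest winds.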
|
181 |
+
The properties of stars play a crucial role in setting ˙M and vw (Puls et al. 2015). Factors such as metallicity (Vink
|
182 |
+
et al. 2001), rotation (Cranmer & Owocki 1995), clumping (Puls et al. 2008) and magnetic fields (ud-Doula &
|
183 |
+
Owocki 2002) are thought to play an important role in setting the precise wind properties. We return to these
|
184 |
+
processes in detail later in the paper.
|
185 |
+
One of the most important stellar properties for determining ˙M and vw is stellar mass. At solar metallicity, stars with masses above around 25 M⊙ do not reach the cool red supergiant phase, but instead lose a large amount of mass in line-driven winds (Castor et al. 1975; Kudritzki & Puls 2000; Vink 2022). At lower metallicity, winds become significantly weaker owing to the lack of metal lines to couple radiation to the gas and drive material from the stellar surface.
A significant impediment to a better understanding of stellar winds is the uncertainty in mass-loss rates. For stars below 25 M⊙, mass-loss rates are uncertain by 1–2 orders of magnitude in the so-called "weak-wind regime" (Martins et al. 2005).
For those massive stars where mass loss starts to dominate the evolution (at about 40 M⊙), the uncertainties are about a factor of 2–3 (e.g. Björklund et al. 2021). Such uncertainties were investigated in evolutionary models by Keszthelyi et al. (2017b), who found that the discrepancies may be resolved by studying the rotational velocities of B-type supergiants (Vink et al. 2010), given that mass loss leads to angular momentum removal and spin-down of the stellar surface (Langer 1998; Maeder & Meynet 2000).

Stars of order 80–100 M⊙ are in the transition region of Vink & Gräfener (2012), where mass-loss rates are known very accurately, but above this transition point, mass-loss rates included in most stellar evolution and population synthesis models are thought to be underestimated.
2.2 Ionizing Radiation
Stellar ionizing radiation can propagate and deposit energy on a large variety of scales, starting in the stars' own atmospheres and extending to the intergalactic medium, where such photons "reionized" the universe after cosmic recombination. Pinpointing how many hard ionizing photons are released, and when, is thus a key input for modelling how stars affect their surroundings on all scales. We highlight here recent developments, open questions, and uncertainties in predicting the budget of ionizing photons from stellar evolution, and their coupling to galactic and intergalactic scales.
Ionizing fluxes of stars strongly depend on the star's temperature. Therefore, the fact that main-sequence stars are hotter at lower metallicities has a direct impact on the resulting ionizing photon budget. However, this effect could potentially be drastically or even totally altered by stellar evolution effects relating to rotation and binary interaction. Binary interaction can lead to mass exchange between the two stars, resulting in "envelope-stripped", and thus even hotter, helium stars. Rapid rotation is also thought to efficiently mix massive stars that cannot spin down at low metallicity, leading to the creation of helium-enriched, and eventually pure-helium, stars, referred to as chemically homogeneous stars (Yoon & Langer 2005a; Szécsi et al. 2015). When determining the feedback for a resolved population of stars, it is therefore crucial not to miss the "earliest" (i.e. hottest) stars of the population, as they dominate the ionizing feedback (see, e.g., Ramachandran et al. 2018b, 2019, for recent examples). In addition, accreting compact objects are known to emit X-rays and ionizing radiation, which have been considered as an aid to the photoionization of interstellar or even intergalactic gas (Chen et al. 2015; Schaerer et al. 2019; Senchyna et al. 2020). Moreover, cluster winds and superbubbles have recently been suggested as a source of additional ionizing flux (Oskinova & Schaerer 2022). While most of their emitted photons are too energetic to efficiently ionize gas, a fraction of them can contribute to the total budget of hydrogen- and helium-ionizing photons in the universe.
While the effective temperatures of stars can give some clues to their spectrum and ionizing power, black bodies only provide limited representations of the ionizing fluxes of hot stars. The absorption of radiation by recombination fronts inside the stellar wind can significantly reshape the spectral energy distribution, thereby considerably affecting the resulting quantities of ionizing photons emitted by the star. This is particularly striking for the He II ionizing flux, which is reduced by many orders of magnitude (effectively vanishing) if the stars manage to launch an optically thick (Wolf-Rayet type) wind (e.g. Sander & Vink 2020). This effect is not an issue for hydrogen-ionizing photons, even though part of their flux budget is still consumed to drive stellar winds.
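To illustrate how steeply even the idealised blackbody ionizing output depends on effective temperature (and hence why Teff is so decisive, although a blackbody is only a rough proxy for a real hot-star atmosphere), the sketch below integrates the Planck photon flux above 13.6 eV. The stellar parameters are illustrative assumptions.

```python
# Hedged sketch: hydrogen-ionizing photon rate Q(H) of a pure blackbody.
# Q = 4*pi*R^2 * (2*pi/c^2) * (kT/h)^3 * integral_{x0}^inf x^2/(e^x - 1) dx,
# with x = h*nu/kT and x0 set by the 13.6 eV ionization edge.
import numpy as np

H = 6.626e-27     # Planck constant [erg s]
K_B = 1.381e-16   # Boltzmann constant [erg/K]
C = 2.998e10      # speed of light [cm/s]
EV = 1.602e-12    # 1 eV in erg

def q_ionizing(teff_k, radius_cm, e_min_ev=13.6):
    """Blackbody photon emission rate [1/s] above e_min_ev."""
    x0 = e_min_ev * EV / (K_B * teff_k)        # dimensionless cutoff h*nu/kT
    x = np.linspace(x0, x0 + 40.0, 20000)      # integrand is negligible beyond
    y = x**2 / np.expm1(x)
    integral = (x[1] - x[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoid
    return 4.0 * np.pi * radius_cm**2 * (2.0 * np.pi / C**2) * (K_B * teff_k / H)**3 * integral

R_SUN = 6.957e10                               # solar radius [cm]
q_hot = q_ionizing(4.0e4, 10 * R_SUN)          # Teff = 40 kK: ~1e49 photons/s
q_cool = q_ionizing(2.0e4, 10 * R_SUN)         # Teff = 20 kK: ~2 dex fewer
```

Halving Teff at fixed radius suppresses Q(H) by roughly two orders of magnitude here, which is why missing the hottest few stars of a resolved population can badly bias the inferred ionizing budget; real atmospheres with winds deviate further still, as discussed above.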
Direct constraints on the ionizing flux of individual stars in the local Universe would be invaluable for constraining the uncertain sources of photoionization of interstellar gas, but such measurements are unfortunately precluded by the unavailability of extreme-UV (EUV) observational tools. Hence, other indirect methods are necessary, for example (1) inferring the ionizing emission from nebular spectra using scaling relations for recombination line luminosities, and (2) using the ionizing emission from computed stellar atmosphere models that sufficiently reproduce the spectrum at other wavelengths (UV, optical, IR). Since the stellar He II-ionizing flux is considerably affected by winds from the star, UV observations remain an important tool to correctly determine the sources of these photons.
Radiative feedback plays a key role in regulating the lifecycle of star-forming regions, and in providing an early mechanism to modify the phase and thermodynamics of the gas in which massive stars then explode as supernovae to drive galactic outflows. However, the coupling between ionizing radiation, other sources of feedback and the surrounding gas remains uncertain, due to the inherent challenges in modelling and observing these non-linear physical processes occurring on multiple spatial and time scales. Quantifying the balance between feedback budgets within H II regions has now become possible (e.g., Lopez et al. 2014; McLeod et al. 2019, 2020, 2021; Olivier et al. 2021a; Barnes et al. 2020). However, the uncertainties pointed out above in stellar evolution and in synthesizing stellar population outputs propagate into these measurements, making their interpretation challenging. Furthermore, the interaction between radiative, wind, and supernova feedback is a strongly non-linear process, which can lead to positive reinforcement and strong galactic outflow driving (e.g. Lucas et al. 2020) or, by contrast, diminish the clustering of SN explosions and reduce their efficiency at expelling gas from a galaxy (e.g. Agertz et al. 2020; Smith et al. 2021; Fichtner et al. 2022). Pinpointing the sign and strength of these couplings, both observationally and theoretically, will be key to interpreting galaxies in observations, understanding how they regulate their star formation, how they enrich their surrounding environment in metals, and how radiation escapes from them to larger, cosmological scales.
H I reionization of the universe is mostly powered by stellar sources in low-mass star-forming galaxies (e.g. Robertson et al. 2015; Dayal et al. 2020; Yung et al. 2020; Trebitsch et al. 2021), so having a good handle on their ionizing production is crucial, while keeping in mind that other sources of uncertainty (e.g. how much of this ionizing radiation escapes the ISM) still need to be addressed. Even prior to H I reionization, X-rays from the very early stellar populations in star-forming galaxies contribute to heating the IGM, but the rate of production of these X-rays is still uncertain. Most emission comes from X-ray binaries (e.g. Eide et al. 2018), whose populations are poorly constrained at the highest redshifts. 21 cm all-sky measurements are starting to put limits on the beginning of this heating era (Bowman et al. 2018), although other experiments are needed to confirm this result (see e.g. Singh et al. 2022). Next-generation facilities like the SKA will soon constrain the early heating of the Universe, making the need for detailed models timely. In this context, a detailed understanding of the binary evolution of stars (and in particular massive stars) is required to properly assess the early heating of the IGM. While He II reionization, which happens at z ∼ 3 (e.g. Worseck et al. 2016), is thought to be mostly dominated by AGN sources (e.g. Puchwein et al. 2019; Faucher-Giguère 2020), the contribution from stellar populations remains mostly unconstrained. Notwithstanding the uncertainties on the escape fraction of He II-ionizing photons, the uncertainties in the stellar population models pointed out above will translate into the contribution of these stellar populations to the He II background. In particular, the presence of very massive stars or hydrogen-stripped stars (e.g. Götberg et al. 2020) could strongly enhance the contribution of the overall stellar populations to He II reionization.
2.3 Supernovae
Feedback from supernovae (SN) has long been considered a key ingredient in studies of interstellar gas (e.g. McKee & Ostriker 1977) and galaxy evolution (e.g. Larson 1974). SNe, especially core-collapse Type II SNe, release significant (∼ 10^51 erg) energy in the initial blastwave: sufficient to destroy molecular clouds (White & Long 1991), drive turbulence in the ISM (McCray & Snow 1979), and power galactic winds and outflows (Mathews & Baker 1971). These explosions are also major sources of metals, producing (for example) the vast bulk of interstellar oxygen (Burbidge et al. 1957). Beyond core-collapse supernovae, thermonuclear (Type Ia) supernovae may also be a source of feedback energy, and also contribute to the cosmic metal budget (Kawata 2001). From the cloud- and galaxy-scale feedback perspective, the key questions connecting stellar evolution to supernova feedback are as follows. Which stars will end their lives as supernovae? When will these stars detonate their supernovae? What will be the energy, mass, and metal returns of these supernova events (and what form will the energy take at larger scales, kinetic or thermal)? Traditionally, very simple assumptions have been made about these questions: all stars above a certain mass (5 − 10 M⊙) detonate, with each ccSN event depositing ∼ 10^51 erg of energy and ∼ 7 − 100 M⊙ of mass into the surrounding ISM (e.g. Katz 1992). It has long been assumed that, at least on galactic scales, uncertainties in how this energy propagates through the ISM dominate over any uncertainties in stellar evolution models (Naab & Ostriker 2017; Rosdahl et al. 2017), and that questions relating to the details of ccSN detonation are swamped by uncertainties in the cooling and mixing rates of SN remnants. However, recent studies (Keller & Kruijssen 2022) and higher-resolution simulations (Gutcke et al. 2021) have begun to reveal that the details of stellar evolution can detectably manifest themselves on galactic scales.
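The traditional budget described above can be made concrete with a back-of-the-envelope sketch: assuming a Salpeter IMF (one simple choice among many; the mass limits are assumptions) and one 10^51 erg event per star above 8 M⊙, one recovers the familiar figure of roughly one SN per ∼100–150 M⊙ of stars formed.

```python
# Hedged sketch: "traditional" SN energy budget from a Salpeter IMF,
# dN/dm ∝ m^-2.35 between 0.1 and 100 Msun (an illustrative assumption),
# with one 1e51 erg event per star above m_sn.

def salpeter_sn_budget(m_lo=0.1, m_hi=100.0, m_sn=8.0, alpha=2.35):
    """Return (SNe per Msun of stars formed, erg per Msun of stars formed)."""
    # analytic power-law integrals (unnormalised; normalisation cancels):
    # number of stars above m_sn, and total mass formed over the full IMF
    n_sn = (m_sn**(1.0 - alpha) - m_hi**(1.0 - alpha)) / (alpha - 1.0)
    m_tot = (m_hi**(2.0 - alpha) - m_lo**(2.0 - alpha)) / (2.0 - alpha)
    rate = n_sn / m_tot
    return rate, rate * 1.0e51

rate, e_per_msun = salpeter_sn_budget()   # ~1 SN and ~1e49 erg per ~135 Msun formed
```

Shifting the detonation threshold between the 5 and 10 M⊙ quoted above, or changing the IMF shape, moves this budget by tens of percent, which is one reason the stellar-evolution uncertainties discussed here are no longer negligible on galactic scales.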
Temporal evolution of the stellar structure, subject to the internal and surface physical processes described in Section 3, will at some point lead to a structure whose internal pressure gradients can no longer withstand the force of gravity. Understanding these processes will allow us to ultimately answer the three key questions identified above. Hydrodynamical models of SN detonation predict that underluminous (e.g. Lovegrove & Woosley 2013) and hyperluminous (e.g. Woosley & Heger 2006) supernovae may occur for certain combinations of initial stellar mass, metallicity, and rotation. Adding to this are strong theoretical predictions for "islands of explodability", where SN progenitors will either produce very weak SNe or in some cases directly collapse to form black holes (BHs) with no significant energy return whatsoever (Smartt 2009; Horiuchi et al. 2014; Sukhbold & Adams 2020). Recent theoretical studies of binary star interactions have found that the significant changes induced to both the surface and core structure will also impact which stars detonate, and the energy of the subsequent SN (Müller et al. 2019; Laplace et al. 2021; Vartanyan et al. 2021). Despite these theoretical uncertainties, it is highly likely that theoretical models of galaxy evolution have in general over-estimated the SN energy budget, though this may recently be changing (Emerick et al. 2019; Gutcke et al. 2021). Better observational constraints are needed to begin pinning down the true budget of energy for SN feedback.
Observationally, determining the SN budget for stars across the IMF is extremely challenging, owing to the difficult problem of connecting SN progenitors to individual SN events. Red Supergiants (RSG) constitute the most common SN-progenitor stage, during which the star may experience a type IIP/L explosion (Smartt 2009). However, the RSG phase may last ∼ 2.5 × 10^6 to 3 × 10^5 yrs for stars ranging in initial mass between 9 and 20 M⊙ (Meynet et al. 2015), with more massive stars, at high metallicity at least, potentially suffering from such intense mass loss that the entire envelope is lost and the stars first become yellow or blue supergiants before experiencing core collapse (e.g., Gräfener & Vink 2016; Kee et al. 2021). At lower metallicities, higher-mass supergiants may exist and explode as, e.g., pair-instability supernovae ejecting a peculiar chemical yield (Martínez-González et al. 2022). The RSG Betelgeuse experienced an unprecedented dimming of its visual brightness from December 2019 until April 2020, speculated to forewarn an imminent core collapse. Though it appears that this event likely reflected a combination of surface activity and dust formation in a previously ejected gas cloud positioned in the line of sight (Montargès et al. 2021), a dedicated campaign monitoring a population of RSG stars for unexpected variability is clearly opportune and may help to identify systems for which an explosion may happen within about a human lifetime. Alternatively, the collapse of such massive stars may lead to direct black hole formation with no or only little ejecta being expelled, and consequently a very faint or undetectable supernova. The most promising candidate for a disappearing star directly collapsing into a black hole showed evidence for an estimated ∼ 0.5 M⊙ of ejecta (Gerke et al. 2015; Sukhbold & Adams 2020; Basinger et al. 2021). Wolf-Rayet stars, evolved stars that have lost, or have been stripped of, their hydrogen-rich envelopes, are alternative candidates for an impending Ib/c (or gamma-ray-burst) supernova explosion (e.g., Groh et al. 2013b). Within this group, Wolf-Rayet Oxygen (WO) stars are thought to be particularly evolved and in a post core-helium burning phase of evolution where timescales until core collapse are down to a few times ∼ 10^3 or 10^4 yrs (Meynet et al. 2015). So far, only nine WO stars are known, the one thought to be closest to ending its life being WR102, with ∼ 1500 yrs left. Other post-main sequence objects have been suggested as potential SN progenitors, including Luminous Blue Variable (LBV) stars (Kotak & Vink 2006; Groh et al. 2013a) and Wolf-Rayet Nitrogen (WN) stars (Groh et al. 2013b). The former possibility is supported by evidence that the progenitor of SN 2005gl was likely an LBV star (Gal-Yam & Leonard 2009).
2.4 Chemical Enrichment
Nuclear-processed material may be ejected from the star/system, and thus influence the chemical abundance of the surroundings, via at least three mechanisms: (i) stellar winds; (ii) supernova ejecta (discussed in detail in Sect. 2.3); (iii) (non-conservative) binary interaction (discussed further in Sect. 4). Consequently, whether nuclear-processed material ends up in the interstellar medium after being created inside a star is a complex question. For example, elements that stay inside the star for a longer time (due to not being immediately ejected in the wind) may be able to undergo further nuclear processing. In the same way, elements may be "saved" by the wind from being processed further. This makes chemical evolution a highly complex area of research, with a number of impediments to our understanding of it.
A deeper understanding of how, and on which timescale, elements are released into the interstellar medium is of great importance for modern stellar feedback simulations. Elements ejected throughout the lifetime of a massive star could drive a different chemical evolution in the surrounding gas than elements "instantaneously" ejected in the supernova explosion. If we had a clearer view of these processes, we could also model more accurately how this enriched material spreads, through turbulence and other mixing processes, to larger scales, namely the interstellar medium and the rest of the galaxy.
Another important aspect in this regard is the comparison of the timescale on which the mixing of the newly enriched material occurs in the gas with that of star formation. Will the mixing be fast enough to make the metallicity of the medium almost uniform before a second generation of massive stars is born? As stars inherit their initial metallicity from the gas they formed in, understanding how the timescales for chemical evolution and mixing relate to the time needed to form a new generation of stars would help to better understand their future evolution. Moreover, all these processes could be very different in low-metallicity environments, for which further analysis is recommended (see Section 5).
The efficiency of mass loss through stellar winds is highly dependent on the mass of the star (Sect. 2.1). The higher the mass, the higher the core temperature, leading to the activation of specific nuclear reactions. Massive and intermediate-mass stars are known to have strong enough winds to eject nuclear-processed material. In particular, Asymptotic Giant Branch (AGB) stars are important contributors to carbon and nitrogen via convective dredge-up of nuclear products from the stellar core (Romano et al. 2010).

Figure 2: Abundances relative to the Solar value plotted over time (in Gyr) for all the elements in the periodic table. This enables the reader to follow how the different elements evolve via various processes, including Big Bang Nucleosynthesis, AGB stars, Core-collapse Supernovae, Type Ia Supernovae and Neutron star mergers. Observations are depicted as dotted lines. From Kobayashi et al. (2020), reproduced with permission.
For the stellar wind (or interactions with a companion star) to be able to remove nuclear burning products, these products (originally created in deep, hot burning regions) need to already be present at the stellar surface. This can happen in two ways: either the mixing between the deep layers and the surface needs to be strong (see Section 3.1), or the top layers need to be removed first so that the deeper layers are uncovered (see Section 2.1). In particular, mixing induced by rotation (or rotational mixing) has been shown to lead to extremely well mixed stars which evolve (quasi-)chemically homogeneously (Maeder 1987). But in less extreme cases, mixing (not only by rotation) can help bring deeper layers upwards, to be lost in the wind eventually. The decay of some isotopes serves as a counter to this process. This can be seen in the case of 26Al, which decays rather quickly (around 6 s) into 26Mg (cf. Finlay et al. 2012).
Figure 2 shows the elements of the periodic table together with their cosmic origin (Kobayashi et al. 2020). While the figure represents the state of the art of our current knowledge, other possible avenues for the generation of elements are thought to exist. For example, gold has been proposed to form in kilonovae (Kasen et al. 2017a).
Subsequent generations of stars have enriched interstellar gas with nuclear-processed elements. However, chemical enrichment is not only a time-dependent process but can be spatially traced as well. For example, the Milky Way displays a metallicity gradient (Peimbert et al. 1978; Afflerbach et al. 1997) which decreases outwards, but other galaxies show other trends.
Another source of uncertainty is the discrepancy between the yields found at the scale of stellar evolution modelling and those calculated at larger scales. To connect these two quantities, investigations with varying degrees of resolution are required, as well as an understanding of the uncertainties involved in both calculations. Uncertainties include mixing and convection for single stars, tidal effects for binaries, and in general the handling of the Eddington limit. As shown, for example, by Agrawal et al. (2022), different approaches with multiple codes can lead to different predictions. Tracers such as CNO abundances may help resolve these discrepancies.
3 Internal Stellar Processes
Stars are places where the four fundamental forces of physics interact (viz., gravitational, electromagnetic, strong, and weak nuclear forces). Most global properties of stars can be inferred from the stellar structure equations, with the assumption of hydrostatic equilibrium. However, several key quantities, e.g. nuclear reaction rates and opacity measurements (especially of iron), and internal processes in stars, e.g. convection and overshooting, remain highly uncertain in modelling stars, especially massive stars. Moreover, building accurate stellar models requires including the contribution of hydro- and magneto-hydrodynamical processes in the stellar interior, such as stellar pulsations, stellar rotation and magnetic fields. These processes are not so well understood and remain highly approximated in stellar models.
Despite recent progress in these areas over the last decade, several challenges remain in stellar physics. These include the treatment of convection and the determination of the sizes of the convective zones, a proper account of all the processes that can induce mass loss at the different phases of evolution, the instabilities triggered in radiative zones that can transport angular momentum and chemical species (some of them likely triggered by rotation), and the impact of magnetic fields in the stellar interior and at the surface. Each of these uncertainties can severely impact stellar outputs and alter the feedback stars inject into the interstellar medium. Below we discuss two significant internal processes.
3.1 Internal Mixing
Energy produced in stars by nuclear burning and other processes needs to be transported away to the outer layers. The three main mechanisms responsible for this transport are convection, conduction and radiation. In most stellar evolution codes, convection is modelled using a simple but successful formalism called mixing-length theory (MLT; Böhm-Vitense 1958). If energy is carried through convection, then owing to the actual movement of particles in the star, angular momentum and chemical species are also transported within the star. This can change the stellar structure and radius, which in turn affects the ionization, mass-loss rates and pre-supernova structure of the star (Dessart et al. 2013; Kaiser et al. 2020).
Convective boundary mixing (CBM) dictates the extension of the convective core and shell burning regions. There are multiple methods of implementing CBM with various mixing profiles, such as step or exponential core overshooting, or convective entrainment (Scott et al. 2021). The extension of the convective core via overshooting during core H-burning has various consequences, leading to stars evolving at higher luminosities with increased mass loss integrated over the main-sequence lifetime. Together, convection and associated mixing mechanisms contribute to the internal mixing in stars.
Mixing processes can alter energy transport and the hydrogen content in the envelope, driving the evolution of massive stars towards red and blue supergiant phases and thus dictating red-to-blue supergiant ratios (Schootemeijer et al. 2019). On the main sequence, the effects of internal mixing and mass loss dominate the evolutionary pathways which govern the fates of massive stars towards forming black holes and neutron stars. In the mass range ∼ 8–30 M⊙, interior mixing processes dominate the lives of massive stars, while in the mass range ∼ 30–60 M⊙ stellar winds drive the evolution towards Wolf-Rayet (WR) stripped helium stars. The indirect effect of mass loss on interior mixing also plays a role in the switch of evolutionary path during core He-burning (Higgins & Vink 2020; Sabhahit et al. 2021). The switch in evolutionary channels in post-MS evolution is key for predicting SN progenitor populations.
Internal mixing mechanisms are one of the largest uncertainties in stellar physics. For example, the extent of core overshooting, which determines the length of the main sequence, may itself be mass dependent (Castro et al. 2014), which will also influence the post-main sequence evolutionary channels that form black holes. In fact, maintaining a sufficiently low core mass at the highest mass range can be critical for forming black holes and avoiding the pair-instability supernova regime (Vink et al. 2021). Similarly, radiative envelopes with subsurface convective layers can drive clumps in the wind, altering the mass-loss rates and having a large impact on SN progenitors (Davies et al. 2007; Cantiello et al. 2009; Jiang et al. 2015), although there remain large uncertainties in these predictions.
Convection, as given by MLT, becomes highly inefficient at transporting energy within the radiation-dominated, low-density envelopes of massive stars with Minit > 40 M⊙, whose luminosities approach the Eddington limit (e.g., Langer 1997; Maeder 2009), and the situation only worsens for cooler supergiants owing to the hydrogen opacity bump at Teff ∼ 10^4 K. Such conditions can cause stellar evolution codes to either crash or become stuck at very small time-steps (Paxton et al. 2013). What happens in reality in such conditions, e.g., whether stars in close proximity to the Eddington limit inflate (Gräfener et al. 2012) or not, remains yet another unresolved problem. However, stellar evolution models can predict widely different post-main sequence evolution when treating these highly inflated layers (Agrawal et al. 2022), which can have far-reaching consequences for predicting the feedback properties of massive stars. Perhaps 2D or 3D simulations, or observational constraints such as the Humphreys-Davidson limit, might shed light on what happens in such inflated, low-density envelopes.
Asteroseismology may provide calibrations for the efficiencies of internal mixing processes, but main-sequence massive stars are usually fast rotators, and this can blur the period-spacing patterns. Lower-mass, slower rotators are more accessible for providing constraints with asteroseismology (Pedersen et al. 2021; Bowman 2021). Rotation and rotational mixing play a major role in the enrichment of massive stars: their chemical enrichment is dominated by rotational mixing instabilities, and depends in particular on whether angular momentum is maintained via solid-body rotation, which is also important for determining neutron star spin.
3.2 Stellar magnetic fields
Stars form in a magnetised medium, and recent simulations have demonstrated the large impact that magnetic fields have on the formation process (Oliva & Kuiper, in prep.). However, the acquisition of stellar magnetic fields is largely unconstrained. There are two different kinds of magnetic fields that can be harboured by massive stars. One possibility is dynamo-generated fields, either in the convective core driven by the α-Ω cycle (similar to the surface of the Sun), or in the radiative layers driven by differential rotation (e.g., the mechanism proposed by Spruit 2002). Such dynamos are small-scale and vary on a short Alfvén timescale. In evolutionary models of massive stars, dynamo-generated magnetic fields in the radiative zones are commonly invoked (Maeder & Meynet 2003, 2004, 2005; Heger et al. 2005; Potter et al. 2012; Fuller et al. 2019; Takahashi & Langer 2021).
Another branch of possibilities is relaxed, equilibrium fossil magnetic fields in the stellar radiative envelopes (e.g., Braithwaite & Spruit 2004; Braithwaite & Nordlund 2006), which are large-scale and stable over the long-term evolution (the Ohmic timescale). Such fields are now routinely observed via spectropolarimetry (exploiting the Zeeman effect) in a fraction of Galactic massive stars, although no detections outside of the Galaxy have been made yet, largely owing to the limitations of current instrumentation.
The impact of fossil magnetic fields is far-reaching. These fields form a magnetosphere around the star, which channels the stellar outflow (ud-Doula & Owocki 2002; Owocki 2004). The presence of magnetic fields can lead to two other important effects on mass loss: magnetic mass-loss quenching (reducing the mass-loss rate of the star, by up to an order of magnitude for a field of ∼ kG strength), and magnetic braking (removing angular momentum from the star and hence leading to an observable decrease of its surface rotation). Mass-loss quenching is a powerful mechanism that, independent of the metallicity, allows the star to retain most of its mass (Georgy et al. 2017; Keszthelyi et al. 2017a, 2019, 2020, 2021; Petit et al. 2017). The implementation of these processes in stellar evolution models has shown that magnetic braking very efficiently spins down the stellar surface and, depending on the internal coupling, may also produce observable surface nitrogen enrichment (Meynet et al. 2011; Keszthelyi et al. 2019, 2021), with a grid of stellar structure and evolution models available that takes account of these processes (Keszthelyi et al. 2022).
|
500 |
+
Magnetic fields are thus a key component of stars. These are either built internally through internal dynamos or else
|
501 |
+
retained as fossil fields from the time of the star’s formation. While determining their presence and effect is difficult,
|
502 |
+
recent advances can help us to better constrain and understand this problem.
|
503 |
+
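The magnetic mass-loss quenching described above can be sketched quantitatively. A minimal example, assuming the dipole scalings of ud-Doula & Owocki (2002) and ud-Doula et al. (2008) for the wind confinement parameter, Alfvén radius, and escaping wind fraction; the stellar parameters below are illustrative placeholders, not fits to any particular star:

```python
import math

def wind_quenching(B_pole_G, R_star_Rsun, Mdot_Msun_yr, v_inf_kms):
    """Escaping wind fraction f_B for a dipolar magnetosphere.

    Sketch following the dipole scalings of ud-Doula & Owocki (2002)
    and ud-Doula et al. (2008); input values are illustrative.
    """
    # Convert to cgs units.
    B_eq = 0.5 * B_pole_G                     # equatorial field strength, G
    R = R_star_Rsun * 6.96e10                 # stellar radius, cm
    Mdot = Mdot_Msun_yr * 1.989e33 / 3.156e7  # non-magnetic rate, g/s
    v_inf = v_inf_kms * 1e5                   # terminal velocity, cm/s

    # Wind magnetic confinement parameter eta_* = B_eq^2 R^2 / (Mdot v_inf).
    eta_star = B_eq**2 * R**2 / (Mdot * v_inf)
    # Alfven radius and last closed-loop radius, in units of R_star.
    R_A = 0.3 + (eta_star + 0.25)**0.25
    R_c = 1.0 + 0.7 * (R_A - 1.0)
    # Fraction of the wind escaping along open field lines.
    return 1.0 - math.sqrt(1.0 - 1.0 / R_c)

# A ~kG dipole on an O star with typical wind parameters quenches
# a substantial fraction of the mass loss:
f_B = wind_quenching(B_pole_G=1000, R_star_Rsun=10,
                     Mdot_Msun_yr=1e-6, v_inf_kms=2500)
```

Stronger fields give larger confinement and hence smaller escaping fractions, which is the sense of the "up to an order of magnitude" reduction quoted above.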
4 External Stellar Processes: Binaries
Similar to internal processes, external processes specific to the evolution of stars in multiple systems, such as tidal interactions, mass exchange, common-envelope phases, and stellar mergers, can also impact the evolution and feedback of stars. It is now established that binaries play a major role in the evolution of stellar populations (Eldridge & Stanway 2020, 2022). The majority of stars are born in binary or multiple systems, and the binary fraction increases with stellar mass (Moe & Di Stefano 2017). In addition, we now know that a significant fraction of these binaries will interact during their lifetime and initiate mass transfer, which has a significant impact on their structure and evolution (Sana et al. 2012). As a result of mass transfer, primaries can be stripped of their hydrogen envelope, which is accreted onto the secondary, spinning it up, or the system may merge. Consequently, their lifetimes and core properties change, affecting the final fate and stellar remnant.

The picture is further complicated by the fact that internal and external stellar processes, which are by themselves complex to model properly, can hardly be studied in isolation, as they all interact. For example, stellar rotation, which can affect the evolution of stars, is strongly affected by tidal interactions in close binary systems. Indeed, tides can set up exchanges between two reservoirs of angular momentum, the orbital one and the rotational one, causing the star to spin up or spin down depending on the circumstances, thus modifying the whole evolution of the two components by changing their rotation rates and the radius of their orbits. A great diversity of evolutionary histories and stellar structures, for example at the time of core collapse, can be obtained through binary evolution, and some of the stellar pathways made possible by binary evolution likely remain to be discovered. Binary evolution impacts stellar feedback in three main ways: winds, ionizing radiation, and supernova rates.
4.1 Impact on stellar winds
The interstellar medium continuously receives mechanical energy and chemical feedback from the winds of massive stars. Mass transfer in a close binary system will modify the nature of the wind from both components. The stripped primary (helium star) will likely possess a faster, lower-density wind than its evolved (red supergiant) isolated counterpart, boosting the mechanical feedback. In addition, the mass-gaining secondary will usually produce a stronger wind as a result of its increased luminosity.

Helium stars (WR stars at high mass) contribute considerable energy to the total energy budget of a population (Fichtner et al. 2022). By way of example, in the SMC the collective wind of one multiple system (HD 5980) dominates over hundreds of OB stars in NGC 346. Stellar populations containing rotating stars in binary systems give rise to strong feedback, specifically in low-metallicity environments.
4.2 Impact on the ionizing radiation
It is well established that the ionizing radiation from a population of exclusively single (non-rotating) stars declines rapidly once the highest-mass stars evolve off the main sequence, with a secondary (high-energy) peak coinciding with the Wolf-Rayet phase (Schmutz et al. 1992; Smith et al. 2002). Since close binary evolution is capable of stripping the primary component of its hydrogen envelope, the effect of binary evolution on the ionizing budget of young stellar populations is dramatic (Götberg et al. 2019), especially at high energies (helium-ionizing photons) and at low metallicities, for which only exceptionally massive single stars are capable of producing WR stars, whereas binary evolution leads to a prominent population of hot, stripped stars.

Rosdahl et al. (2018) found that, on average, binaries lead to escape fractions of ∼7–10 percent in the early universe, about three times higher than those produced by single stars only. With such a difference in ionizing escape fractions, their simulation with binary systems completes cosmic reionization before z ∼ 7, while the single-star escape fractions are unable to reionize their simulation volumes by z ∼ 6. Observationally, these findings have major implications for linking stellar evolution to cosmological-scale feedback.
4.3 Impact on core-collapse supernovae
Binary evolution affects supernovae in three main ways: their energy budget, their timing (and hence location), and their chemical yields. Zapartas et al. (2017) found that the inclusion of binaries in massive stellar systems substantially increases the number of supernovae expected from a stellar population, largely because of "late" events originating from intermediate-mass (4–8 M⊙) stars which would otherwise have evolved to white dwarfs, and whose binary interactions uniquely create the conditions for supernovae. The possibility of late events affects the delay-time distribution of supernovae: the maximum time expected for a single star to go supernova is 50 Myr, but late events occur on timescales of 50–200 Myr after birth. This stands in contrast with current prescriptions of supernova timing in feedback simulations, which often assume an instantaneous explosion within 50 Myr for massive stars.
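The difference between the two delay-time prescriptions can be illustrated with a toy Monte Carlo sketch. Only the timescales quoted above (50 Myr single-star maximum; 50–200 Myr late events) come from the text; the 15% late-event fraction is an illustrative assumption, not a fit to Zapartas et al. (2017):

```python
import random

random.seed(1)

def sn_delay_myr(include_binaries):
    """Draw one supernova delay time (Myr) after a burst of star formation.

    Toy model: single-star core-collapse SNe all occur within ~50 Myr,
    while binary interactions add 'late' events at 50-200 Myr. The 15%
    late-event fraction is an illustrative assumption.
    """
    if include_binaries and random.random() < 0.15:
        return random.uniform(50.0, 200.0)   # late event enabled by a binary
    return random.uniform(3.0, 50.0)         # ordinary core-collapse delay

# With binaries included, a tail of events appears beyond 50 Myr:
delays = [sn_delay_myr(include_binaries=True) for _ in range(10000)]
late_fraction = sum(d > 50.0 for d in delays) / len(delays)
```

A feedback scheme that injects all supernova energy within 50 Myr would miss this late tail entirely.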
Similarly, more massive stars that might otherwise be expected to collapse into black holes may instead experience mass stripping and common-envelope interactions that create supernova conditions at the high-mass end as well. The widened range of initial masses that can produce supernovae through binary interactions changes the range of energetics expected and the properties of the supernova progenitors (e.g., Podsiadlowski et al. 1992). Moreover, mass transfer affects the structure and chemical composition of stars (e.g., Laplace et al. 2021), ultimately changing their chemical yields. For example, Farmer et al. (2021) recently showed that at solar metallicity, binary-stripped stars can eject twice as much carbon into their surroundings as single stars. In addition, binary systems can be the progenitors of gravitational-wave sources, which are responsible for enriching stars in r-process elements (Kasen et al. 2017b, see also Sect. 2.4). The supernova kick imparted at the moment of explosion of one binary component can result in a population of runaway and walkaway stars that explode in a location different from their birth environment (e.g., Renzo et al. 2019).
4.4 Impact of larger scales on binary formation
Feedback processes in galaxies are thought to affect the formation of binaries and stellar multiples through perturbations of gas clouds, feedback from stars, and magnetic fields. Turbulence injected into molecular clouds through feedback from jets, winds, and ionising radiation may affect when and how stellar multiples are formed. The quantity of angular momentum in protostar formation plays an important role in setting the mass of the protostellar disk, with more rotation leading to a more massive disk that fragments earlier. By contrast, if more mass is concentrated at the centre of the disk, a single massive star and/or a less massive companion will form. UV radiation and the propagation of heavy elements can also shape the formation of protostars as well as protoplanets.

Magnetic fields are important both in star-forming regions and in stars (see Section 3.2), and can play a role in coupling cloud scales to stellar scales. For example, a sufficiently strong magnetic field will diminish fragmentation, which reduces but does not fully suppress binary formation. However, due to difficulties in resolution on cloud scales and the cost of small-scale simulations of protostar formation, simulations have not yet converged on the role that magnetic fields play in shaping in-situ binary formation.

Currently, most simulations do not take binary evolution into account in their feedback yields. This is slowly changing, first in fields such as reionization studies at z > 6 (Rosdahl et al. 2018), and more recently in lower-redshift galaxies, such as the sub-L* galaxy at z = 3 of Fichtner et al. (2022).
5 Varying Metallicity in our Local Group: The Effect of Z
The Local Group is a complex environment, with average present-day metallicities varying from ∼ 0.2 Z⊙ in SagDIG (Saviane et al. 2002) to ∼ 2 Z⊙ in the Milky Way's Galactic Centre (e.g. Nogueras-Lara et al. 2018). Additionally, significant metallicity gradients exist within galaxies (Searle 1971; Vila-Costas & Edmunds 1992; Henry & Worthey 1999), including the Milky Way (e.g., Lemasle et al. 2018); by the metallicity of a galaxy, we typically refer to a radially averaged quantity. Stellar evolution and small-scale feedback models usually adopt the averaged values for a given galaxy when referencing their metallicities.

Within the Local Group, there are also large differences in densities and pressures, and in star-forming mechanisms and rates. For example, the Large Magellanic Cloud hosts a million-solar-mass starburst region in 30 Doradus (e.g. Doran et al. 2013), while Sextans A and the SMC appear to host isolated OB stars (Garcia et al. 2019; Lorenzo et al. 2022). Our local universe thus presents a useful testbed for studying how stellar feedback operates in a variety of conditions. Metallicity affects both the behaviour of the stars themselves and the conditions in the gas in galaxies, and hence shapes the interplay between the two (Brugaletta et al. in prep.).

In general, we assume that massive stars form with roughly the same metallicity as their local environment. Their surface abundances over their lifetime are shaped by chemical evolution as well as by mixing and other processes, such as envelope self-stripping, which drastically change the feedback properties of these stars.
5.1 Impact on Stellar Evolution and Feedback
As discussed earlier, decreasing metallicity generally decreases the impact of stellar winds on an environment (Vink et al. 2001), since winds are driven by metal lines in the stellar atmosphere. This is largely a consequence of processes inside the star rather than of the physics of the interstellar gas. Conversely, due to reduced photon absorption in the atmosphere, ionizing photon emission rates are typically higher at lower stellar metallicity (Martins et al. 2005).
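The metallicity dependence of line-driven winds is often summarised as a simple power law. A minimal sketch, using the Ṁ ∝ (Z/Z⊙)^0.85 scaling of Vink et al. (2001) for O stars; the reference mass-loss rate and terminal velocity below are arbitrary illustrative inputs:

```python
def mass_loss_scaled(mdot_solar, Z_over_Zsun, exponent=0.85):
    """Scale a line-driven O-star mass-loss rate with metallicity.

    Uses the power-law dependence Mdot ~ (Z/Zsun)^0.85 from
    Vink et al. (2001); the reference rate is an arbitrary input.
    """
    return mdot_solar * Z_over_Zsun**exponent

def wind_luminosity(mdot_Msun_yr, v_inf_kms):
    """Wind mechanical luminosity L_w = 0.5 * Mdot * v_inf^2, in erg/s."""
    mdot = mdot_Msun_yr * 1.989e33 / 3.156e7  # convert Msun/yr to g/s
    v = v_inf_kms * 1e5                       # convert km/s to cm/s
    return 0.5 * mdot * v**2

# At SMC-like metallicity (~0.2 Zsun) the mass-loss rate, and hence the
# mechanical feedback at fixed terminal velocity, drops by a factor ~4:
mdot_smc = mass_loss_scaled(1e-6, 0.2)
L_smc = wind_luminosity(mdot_smc, 2500)
```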
The effect on the gas around stars at lower metallicity is two-fold. On one hand, the efficiency of mechanical and photoionization feedback is enhanced by the fact that metal-line cooling in photoionized gas (Ferland 2003) and in collisionally-ionized gas (Sutherland & Dopita 1993) is less efficient at low metallicity. On the other hand, lower dust fractions mean that the strength of radiation pressure decreases (Ali 2021).

The consequence of this for feedback depends on how these feedback processes couple, and on if and when any given process dominates. Winds and supernovae create hot, X-ray-emitting bubbles (10^6–10^8 K), while photoionized regions are heated to ∼ 10^4 K. These regions co-exist within nebulae (Guedel et al. 2008), and their relative position and impact within feedback-driven nebulae remain a subject of active study. Analysis of observations in the Galactic Centre and of compact H II regions shows that dust-processed radiation pressure dominates over other processes (Barnes et al. 2020; Olivier et al. 2021b), while in the LMC, SMC, and nearby galaxies, thermal pressure from photoionized gas dominates (Lopez et al. 2014; McLeod et al. 2019, 2021). However, in addition to metallicity, these analyses are also affected by other environmental factors such as filling factors, ambient densities, and pressures. Similarly, thermal losses are generally believed to have an important impact on wind bubbles, in order to explain the missing energy in observed hot plasmas (Townsley et al. 2003; Lopez et al. 2014). These thermal losses may be more affected by turbulent mixing with cold gas in the environment of the wind bubble than by metal-line cooling in the wind bubbles themselves (Rosen et al. 2014; Lancaster et al. 2021).
5.2 Low metallicity
There remain many unknowns concerning stellar evolution in extremely low metallicity environments, due to currently limited observational capabilities and uncertain numerical ingredients, even in the case of single-star models. Depending on their metallicity, stars follow different evolutionary paths, resulting in different spectral subtypes dominating the mechanical and radiative yields. Between ∼ 1/10 Z⊙ and Z⊙, the mechanical luminosity during stellar evolution is both theoretically and observationally expected to be dominated by Wolf-Rayet stars, despite their relatively short lifetimes and rarity (Ramachandran et al. 2018a; Fichtner et al. 2022). Instead, the more abundant stars with initial masses in the range ∼ 10–30 M⊙ are expected to end their lives as SNe, and hence dominate the mechanical luminosity after ∼ 10^7 yr, i.e. on timescales comparable with the free-fall timescale of a young stellar cluster (Krumholz & Burkhart 2016). At even lower metallicities, single-star evolution and wind models are not expected to lead to the appearance of the WR phenomenon, with the evolutionary channel leading to H-depleted stars being dominated by binary interaction (Shenar et al. 2020).

The lower metal content may also lead to different evolutionary pathways that are not predicted at higher metallicities. Evolutionary models (Brott et al. 2011) predict that, at metallicities lower than 1/10 Z⊙, fast-rotating massive stars may evolve chemically homogeneously. In this evolutionary pathway, they can achieve temperatures hotter than the zero-age main sequence (Yoon & Langer 2005b) and generally produce ∼ 5–10 times more ionizing energy than their normally-evolving counterparts (Szécsi et al. 2015).

The implications arising from the evidence that the majority of massive stars are in binary systems, together with the lower angular-momentum losses in low-metallicity stellar models, are largely unconstrained. These effects are expected to attenuate the otherwise steeper decrease in kinetic-energy feedback in the early phases of cluster formation at low metallicities (Fichtner et al. 2022). However, the different evolutionary pathways do not only affect the yields estimated directly from evolutionary models: stellar feedback also couples with the hydrodynamic evolution of the circumstellar gas. The slow and dense stellar outflows characteristic of cool supergiants are outside the line-driven regime and are only empirically constrained for stars in the Galactic neighbourhood. It is likely that such slow gas can lead to thermal dissipation at sub-parsec scales, with a growing impact at low metallicities. Stars close to their Eddington limit during a Luminous Blue Variable (LBV) phase are known to lose a significant fraction of their H-rich envelope during phases of high variability (Humphreys & Davidson 1994; Vink & Gräfener 2012). Given the metallicity-independence of the HD limit (Davies et al. 2018; McDonald et al. 2022) and the higher expected number of redward-evolving stars at low metallicities, one can expect that a larger fraction of the energy yield is dissipated well before reaching cluster scales (Geen et al. 2015; Mackey et al. 2015; Lancaster et al. 2021). Any systematic estimate must overcome our inability to convincingly model important stellar evolution phases, such as the LBV phase (however, see Grassitelli et al. 2021) and non-conservative mass-transfer phases in binary systems.
6 Stars over Cosmic Time: The Effect of z
In this Section we summarise discussions concerning how stellar evolution and feedback evolve over redshift. We focus our discussion here on redshifts up to z ∼ 2, the peak of cosmological star formation. There are likely to be significant differences between z ∼ 2 and very high redshift, in particular the role of the first (Population III) stars in the very early universe. As discussed earlier, aspects of stellar evolution such as binary evolution are likely to have a strong impact on cosmological processes such as reionization around z ∼ 6–11.

Typical z ∼ 2 galaxies are moderately massive, and deficient in iron-peak elements albeit α/Fe-enhanced (Steidel et al. 2016). Their nebular properties are relatively hard, and individual star-forming knots (from lensing studies) indicate high star-formation intensities, of order ∼ 0.1 M⊙/yr within a region of a few hundred parsecs (Jones et al. 2010; Livermore et al. 2015). Within the Local Group, only 30 Doradus (the Tarantula Nebula) in the LMC displays such properties, albeit with a higher metallicity of ∼ 0.5 Z⊙ (Crowther 2019).
6.1 Star formation at low redshift (z ∼ 0 − 0.3)
Within the Local Group, where individual massive stars can generally be well spatially resolved, there are only a small number of actively star-forming galaxies whose current metallicity is ≤ 0.2 Z⊙, including the SMC, NGC 3109, IC 1613, Sextans A, and WLM. Of these, the SMC has the highest star formation rate (Kennicutt et al. 2008), and so hosts several hundred O stars, albeit with only a few dozen above 40 M⊙ (Schootemeijer et al. 2021). Sextans A has an even lower metallicity (van Zee & Haynes 2006), though also a lower star formation rate. In the context of star-forming knots at high redshift these are modest, since such knots host thousands of O stars, hundreds of which are expected to exceed 40–50 M⊙. The SMC and Sextans A therefore provide our only direct route to studying the evolution of massive stars at 0.1–0.2 Z⊙, except at the highest masses, which are poorly sampled due to stochasticity. Sub-grid models employed in galaxy simulations (IMF, stellar models) are mainly constrained by local observations and then applied to simulations at high z, or rely on theoretical predictions for low-metallicity stars.

Metal-poor massive stellar populations beyond the Local Group have been studied via integrated stellar populations, with the supergiant HII region Mrk 71 within NGC 2366, at 3 Mpc, a striking example, since it hosts massive super star clusters and has a metallicity of ∼ 0.15 Z⊙ (Gonzalez-Delgado et al. 1994; Micheva et al. 2017). This allows very massive metal-poor stars to be observed, albeit in an integrated stellar population. In particular, UV spectroscopy of the very young super star cluster Mrk 71-A with HST reveals strong HeII 1640 emission, providing a direct indicator of the presence of very massive stars (LJ Smith, priv. comm.). Mrk 71 is also notable in showing evidence of leaking Lyman continuum photons (Micheva et al. 2017).

A sizeable population of Green Pea (GP) galaxies has been identified from SDSS observations whose properties overlap with those of high-redshift galaxies: both are metal-poor, possess high specific star formation rates and hard nebular conditions in the BPT diagram (Cardamone et al. 2009), with direct evidence for Lyman continuum leakage in some instances (Izotov et al. 2016) and excess soft X-ray emission (Franeck et al. 2022). In addition, there are local examples of very metal-poor star-forming galaxies with metallicities of only a few percent of the Solar Neighbourhood's (I Zw 18, SBS 0335; Lequeux et al. 1979; Izotov et al. 1990), which are potential analogues of star-forming galaxies in the very early Universe. Madau & Dickinson (2014) present the evolution of the average metal content of the Universe through its history (their Fig. 14); for example, the metallicity of Sextans A (1/10 Z⊙) equates to ∼ 4 Gyr after the Big Bang.
6.2 Star formation at z ∼ 2
Overall, whilst there are some commonalities between metal-poor star-forming regions locally and those at high redshift, some key differences remain. These include composition (Fe-poor, α-enhanced; Steidel et al. 2016) and higher specific star-formation intensities, potentially impacting the IMF and the close binary fraction; moreover, even if the mass and metallicity of a galaxy are the same at high and low z, the environment, gas accretion and merger rate, and AGN activity will be different. It is speculated that old Galactic globular clusters (GCs) in particular are born as Young Massive Clusters (YMCs; Portegies Zwart et al. 2010) from an α-enhanced composition, with a first generation of metal-poor massive and intermediate-mass stars present (Bastian & Lardo 2018) which could have contributed to the present-day chemical composition of the clusters (de Mink et al. 2009; Szécsi et al. 2018; Szécsi & Wünsch 2019).

Regarding future prospects, efforts have recently been made to build extensive spectroscopic catalogues of massive stars in Local Group dwarf galaxies with sub-SMC metallicities (Lorenzo et al. 2022). These catalogues will yield a proper characterization of the physical parameters of metal-poor massive stars and will help correct stellar evolutionary models. By introducing their physical properties as inputs to photoionization codes (e.g. CLOUDY; Ferland et al. 1998), we will be able to study the conditions of their surrounding interstellar medium and understand the stellar feedback of these metal-poor massive stars. Studying this interplay between individual massive stars and their surrounding interstellar medium in metal-poor environments can help us interpret observations of high-z galaxies and even estimate the amount of ionizing photons that dwarf galaxies contributed to the reionization of the Universe.
7 From Star-by-Star Studies to IMF Averages and Population Synthesis
The sources of feedback energy from massive stars – their ionizing photon flux, the momentum carried by their stellar winds, and their ultimate fate as supernovae – all depend strongly on the detailed physics of stellar evolution. Without a clear understanding of the physical processes involved in the lives and deaths of massive stars, we cannot understand the ultimate impact of stellar feedback on galaxies. Despite the urgency of this question, many theoretical studies of galaxy evolution make use of heavily simplified assumptions about how massive stars evolve. How can we translate the best current understanding of stellar evolution into a better foundation for theoretical models of galaxy formation?

Stellar feedback in galaxies has been invoked as a mechanism to control the galactic star formation rate, the growth of spheroids, and the baryon and metal content of galaxy discs, among other galaxy-scale properties. Energy and momentum injected by massive stars can destroy star-forming clouds before they convert the bulk of their gas into stars, and ultimately drive powerful galactic winds that remove baryons from the disc. Capturing these processes, whether in semi-analytic models or in hydrodynamic simulations, must begin with a robust budget (and timeline) of the relevant energy sources.
7.1 What Matters at the scale of Galaxies?
Broadly speaking, the primary physical process that makes galaxies "care" about the stellar populations they contain is feedback. Galaxy-scale feedback is generally considered to be negative, with stellar feedback limiting galactic star formation by injecting turbulence (e.g. Padoan et al. 2016), driving galactic outflows (e.g. Larson 1974), or destroying star-forming molecular clouds (e.g. Chevance et al. 2022). In addition to the energy and momentum that stellar populations inject into their surroundings, the mass loss of stars can also pollute the interstellar medium (ISM) with metals produced in those stars, increasing the cooling rate of the gas and acting as a form of positive feedback (Hirschmann et al. 2013). Thus, the stellar physics that determines the energy and momentum of stellar winds, SN explosions, and UV radiation all acts to change the impact of stellar feedback on the scale of galaxies.

For all but the smallest galaxies, the stellar populations driving feedback comprise tens of thousands of stars or more. In addition, simulations of galaxies typically cannot resolve individual stars, except in the smallest, most isolated systems. Thus, the primary questions that galactic astrophysicists have for stellar astrophysicists come down to integrated or population-averaged quantities. Simulations of galaxies may include supernovae, stellar winds, or UV feedback (or any combination of these). What is needed are mass-loss, energy- and momentum-injection, and UV-photon production rates as a function of time (in other words, yields of each of these quantities). A detailed study of an individual star will not alone suffice for this: what is needed is an understanding of a fully-sampled IMF. As the small-scale environment of individual stars is unknown and unresolved in these simulations, the only dependencies of these quantities that can be probed are ones which are again population-averaged, such as the birth metallicity (Badenes et al. 2018) or the ISM density (Chabrier et al. 2014). The tool typically used to determine the population-averaged yields needed for galaxy simulations is Population Synthesis.
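A minimal sketch of such an IMF-averaged quantity: the number of core-collapse supernova progenitors per solar mass of stars formed, integrating a Kroupa (2001) IMF. The 8 M⊙ progenitor threshold and the mass limits are standard but still assumptions of this example:

```python
import numpy as np

def kroupa_xi(m):
    """Un-normalised Kroupa (2001) IMF, dN/dm, for 0.08-120 Msun.

    Slope -1.3 below 0.5 Msun and -2.3 above, with the two segments
    matched at the break mass.
    """
    return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

def integrate(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

m = np.logspace(np.log10(0.08), np.log10(120.0), 4000)
xi = kroupa_xi(m)

# Normalise so the population contains 1 Msun of stars in total.
norm = integrate(m * xi, m)

# IMF-averaged yield: number of core-collapse SN progenitors
# (taken here as stars above 8 Msun) per Msun of stars formed.
massive = m >= 8.0
n_sn_per_msun = integrate(xi[massive], m[massive]) / norm
```

This recovers the familiar order of magnitude of roughly one core-collapse supernova per ~100 M⊙ of stars formed; analogous integrals give IMF-averaged wind or ionizing-photon yields.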
7.2 Population Synthesis and Simple Stellar Populations
No matter whether galaxies are modelled using analytic approximations, semi-analytic models, or full hydrodynamic simulations, the phenomena occurring inside and around individual stars must necessarily be averaged across large numbers (10^3–10^7) of stars. Historically, this has been done through population synthesis of Simple/Single Stellar Populations (SSPs). SSPs are groups of stars, sampled from a given IMF (e.g. Leitherer et al. 1999), that are assumed to have been born at a fixed time with identical chemical properties. Population synthesis models allow simulation codes to determine, as a function of time, the yields of mass, metals, and energy produced by the individual star particles within those simulations (or from an assumed population in an analytic or semi-analytic model). Typically, this is done either via tabulated outputs from a population synthesis code (e.g. Leitherer et al. 1999; da Silva et al. 2012) or through analytic functions fit to these yields. While this hides much of the stellar physics involved in producing these yields "under the hood" of the population synthesis model, it offers us the opportunity to easily incorporate a more sophisticated model of stellar evolution without significant work required to re-design galaxy simulation codes.
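The tabulated-yield approach can be sketched as follows: a star particle looks up a cumulative SSP yield table at its current age and injects the increment since the last timestep. The table values here are placeholders with roughly the right order of magnitude (∼10^49 erg of SN energy per M⊙ over ∼40 Myr), not the output of any specific population synthesis code:

```python
import numpy as np

# Illustrative SSP yield table: cumulative SN energy (erg) injected per
# Msun of stars formed, tabulated against population age (Myr).
age_myr  = np.array([0.0, 3.0, 10.0, 20.0, 40.0, 100.0])
E_sn_cum = np.array([0.0, 0.0, 2e48, 6e48, 1e49, 1e49])

def sn_energy_this_step(m_star_msun, t0_myr, t1_myr):
    """Energy (erg) a star particle injects between ages t0 and t1.

    Linearly interpolates the cumulative table and differences it, so
    the total injected energy is independent of the timestepping.
    """
    e0 = np.interp(t0_myr, age_myr, E_sn_cum)
    e1 = np.interp(t1_myr, age_myr, E_sn_cum)
    return m_star_msun * (e1 - e0)

# A 1e4 Msun star particle during the 10-20 Myr window:
dE = sn_energy_this_step(1e4, 10.0, 20.0)
```

Differencing a cumulative table, rather than integrating a rate, is a common design choice because it conserves the total yield exactly regardless of timestep size.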
8 Connecting Theory and Observations
Theoretical approaches such as simulations are essential in astrophysics, since laboratory experiments of most astronomical phenomena are impossible. Using theoretical results to inform observational results requires the creation of "synthetic" observations: mock observational results generated using simulated inputs. These can take the form of simulated stellar spectra, multi-wavelength gas emission maps, mock galaxy catalogues, and more. This process is important both for observers, who may wish to understand the systems they observe with full 3D and time information, and for theorists, who wish to better constrain their models.

Creating mock observations is a complex process with many steps that must be treated properly to produce accurate results. This subject has been widely discussed on various scales, from the regions around stars (see the review by Haworth et al. 2018) to cosmological galaxy formation (e.g. Guidi et al. 2015).
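A deliberately simple example of a synthetic observation is projecting a simulated density field into an emission-measure map, EM = ∫ n_e² dl, to which the surface brightness of recombination lines such as Hα is proportional for isothermal photoionized gas. The random cube below stands in for a simulation snapshot, and the sketch ignores extinction, temperature structure, and radiative transfer entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3D electron-density cube (cm^-3) standing in for a simulation
# snapshot; a real pipeline would read the simulation output instead.
n_e = rng.lognormal(mean=1.0, sigma=0.8, size=(64, 64, 64))
dl_pc = 1.0  # cell size along the line of sight, pc (illustrative)

# Project along one axis: EM = sum(n_e^2 * dl), in cm^-6 pc.
em_map = np.sum(n_e**2, axis=2) * dl_pc
```

Real synthetic-observation pipelines add the missing physics step by step (dust attenuation, line emissivities, instrument response), which is exactly where the hurdles discussed below arise.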
There are various hurdles relevant to stellar evolution and feedback that must be overcome if we are to close the gap between observed systems and theoretical predictions for how they behave. One key issue is ensuring that the physical structure of the observed system is modelled realistically. This structure is strongly affected by stellar feedback on all scales, which in turn is affected by the details of (massive) stellar evolution, as discussed in previous Sections. Conversely, with accurate theoretical models, it may be possible to use observations of feedback-driven structures as archaeological tools to inform studies of how stars evolve.
The motion of interstellar gas is chaotic, since it requires solutions to the coupled non-linear equations for (radiative
|
796 |
+
magneto)hydrodynamics and N-body gravitation. This means that small perturbations to the early state of the cloud,
|
797 |
+
such as initial seed turbulence or differences in stellar output, can have large cumulative effects on the later evolution
|
798 |
+
of astrophysical systems. The variance from differences in stellar input and initial gas properties have been explored
|
799 |
+
in star-forming regions (Geen et al. 2018) and galaxies (Keller & Kruijssen 2022). Some linear response and
|
800 |
+
mitigation of sampling errors is recoverable using statistical analysis and comparisons of large catalogues of both
|
801 |
+
simulations and observations (Eadie et al. 2018). However, the physical divergence of solutions to sets of non-linear
|
802 |
+
equations over time remains a serious concern in reproducing astronomical phenomena using simulations.
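The cumulative effect of small perturbations described above can be illustrated with a toy non-linear map. This is a deliberately simplified, hypothetical stand-in (not taken from the paper, and far removed from a real fluid solver): the logistic map, iterated from two states that differ by one part in a billion.

```python
# Illustrative toy only: the logistic map stands in for the far more
# complex non-linear equations governing interstellar gas.
def max_divergence(x0, y0, r=3.9, n_steps=50):
    """Iterate two nearby states through x -> r*x*(1-x) and return the
    largest separation reached over n_steps iterations."""
    x, y = x0, y0
    max_gap = abs(x - y)
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        max_gap = max(max_gap, abs(x - y))
    return max_gap

# Two initial states differing by one part in a billion reach a
# macroscopic separation within ~50 iterations.
print(max_divergence(0.2, 0.2 + 1e-9))
```

In the chaotic regime (r = 3.9) the gap grows roughly exponentially until it saturates at the size of the attractor, which is why ensembles of simulations with varied seeds, rather than single runs, are needed for robust comparisons.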
Simulations will often necessarily simplify or omit certain details of real-world physics for the sake of producing computationally-feasible or reducible results. Some models assume 1D or 2D geometries with symmetry in other dimensions, or ignore effects such as (non-)ideal magnetohydrodynamics, gas chemistry, thermal conduction, etc. Choices concerning simulated system size and resolution must also be made. Many of these assumptions may be reasonable and lead to minimal impact on the end result (e.g. through convergence in simulation resolution), but it is often hard to determine whether this is true without access to more expensive, physically-complete simulations.

Finally, the emission and absorption properties of stars and interstellar gas are complex, but must nonetheless be reproduced in detail if we wish to create accurate synthetic observations. This may be relatively simple for low-opacity systems with well-understood stellar populations, but becomes complex in other, more general cases. Efforts have begun to connect the actions of stars to the emission properties of interstellar nebulae (see, e.g. Pellegrini et al. 2020). However, the problem remains a difficult and costly one. A solution requires a good understanding of stellar evolution, feedback physics and gas microphysics and chemistry, all operating together over the lifetime of a system.

One mitigation to these problems may be found in posing questions in a way that reduces the impact of some of the uncertainties given above. Rather than producing a 1:1 comparison of individual objects, we may instead seek an interval of validity - that is to say, a set of possibilities informed by simulations that constrain certain parameters. Public data availability through standard databases would assist in this by allowing simulators and observers to access large quantities of relevant information, provided the limitations of the simulations and observations within the databases (e.g. resolution limits, systematic errors or important physical choices) are properly understood by the user. To ensure that the interval of validity and limitations are properly understood, increased collaborations between observers and simulators in the near future will be helpful.
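The "interval of validity" approach can be sketched in a few lines. The function names and the scalar-summary framing below are our own hypothetical illustration, not an interface from any existing database or simulation code:

```python
# Hypothetical sketch: instead of matching a single simulation to a
# single observed object, ask whether an observed summary statistic
# falls inside the range spanned by an ensemble of simulations that
# vary seeds and initial conditions.
def interval_of_validity(ensemble_values):
    """Range of a summary statistic across an ensemble of simulations."""
    return min(ensemble_values), max(ensemble_values)

def is_consistent(observed, ensemble_values):
    """True if the observed value lies inside the simulated interval."""
    lo, hi = interval_of_validity(ensemble_values)
    return lo <= observed <= hi

# e.g. an arbitrary summary statistic from four runs that differ only
# in their initial turbulent seed (made-up numbers):
ensemble = [3.1, 4.5, 3.8, 5.2]
print(is_consistent(4.0, ensemble))  # falls inside the interval
print(is_consistent(7.0, ensemble))  # falls outside it
```

In practice the interval would be built from many runs and many statistics, but the logic is the same: the comparison is between an observation and a constrained set of possibilities, not a single realisation.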
9 Conclusions

The interplay between stars and their environment (termed "stellar feedback") is a long-standing problem that nonetheless is still the subject of active study. These questions remain open for numerous reasons, relating to the complexity of large-scale astrophysical gas dynamics and of the evolution of stars, individually and in multiple stellar systems.

The outcome of the workshop was to identify a wide-ranging set of points of interaction between massive stars and the gas in galaxies, from the scale of protostellar disks to cosmological scales. In addition, the workshop highlighted the need for detailed discussions between researchers working on different aspects of both stellar evolution and feedback. For example, bridging the scales of molecular clouds and galaxies is important in tracking how the impact of massive stellar evolution is felt on (cosmological) galaxy scales.

Much of this work is concerned with providing an inventory of the variables and unknowns affecting each field and how they relate to each other. For example, metallicity plays an important role in both the wind and radiation outputs from massive stars and the impact these processes have on the gas in galaxies through radiative cooling efficiencies. We provide detailed discussion of both theoretical and observed behaviour of stars and gas at different metallicities, using our local galactic environment and higher redshift galaxies as observational examples of this. Meanwhile, there remain strong uncertainties in the budget of mass, energy and chemical enrichment from winds, radiation and supernovae at different metallicities, including whether certain stars become supernovae at all ("islands of explodability").

We discuss the effects governing stellar evolution, including both internal effects such as mixing and magnetic fields, and external effects such as interaction with companion stars, and how this shapes feedback. Determining the internal structure of stars remains difficult, although there are promising techniques for doing so using asteroseismology and comparison with theory, which in turn offers the ability to constrain a new generation of theoretical stellar evolution models. Multiple stellar evolution greatly complicates the evolutionary path of massive stars. Nonetheless, understanding stellar multiples remains crucial not only because a large fraction, or even the majority, of massive stars are in binaries, but also because interacting binaries drastically change the feedback properties of massive stars, both before and after the stars go supernova. This in turn can even influence how cosmological processes such as reionization occur.

We note that it is important to understand not just the action of individual stars or binary systems, but how feedback from stars combines as populations in galaxies. This in turn is important for determining what we know about individual stars when observing distant galaxies where individual stars cannot be resolved.

Finally, we discuss efforts to compare theory and observations in detail. This remains a difficult task, since modelling the spectral emission from atmospheres of stars, as well as from (photo- and collisionally-)ionized gas, is non-trivial, although modern software tools are now able to perform this task. More worryingly, as (astrophysical) fluids evolve non-linearly and precise information about the initial state of an observed system is often difficult to obtain, direct one-to-one comparison is often challenging or impossible, and we must often rely on statistical comparisons.

Overall, we believe that this is an exciting time to begin widening discussions between workers in the fields of stellar evolution and feedback, with advances in theory and observations in both fields allowing great improvements in our understanding of astrophysics, both from the point of view of the birth and evolution of stars in a galactic context, and also an inventory of how energy propagates from stars to shape local star formation, whole galaxies and the wider universe.
10 Acknowledgements

We would like to thank the anonymous referee for their work in improving the quality of the manuscript. The workshop on which this manuscript is based was made possible thanks to the logistical and financial support of the Lorentz Center, Leiden, Netherlands. This funding is made available by Leiden University and the Dutch Science Foundation (NWO). The workshop was further supported by a NOVA grant for Star Formation, which SG also acknowledges as support. SG further acknowledges support from a Spinoza award of the NWO for research on the physics and chemistry of the interstellar medium. This research was partly funded by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757. Y.A.F. and E.R.D. acknowledge support from Collaborative Research Center 956, sub-project C4, funded by the Deutsche Forschungsgemeinschaft (DFG) – project ID 184018867. Y.A.F. was supported by the International Max Planck Research School in Astronomy and Astrophysics. SR acknowledges funding from the European Research Council Horizon 2020 research and innovation programme (Grant No. 833925, project STAREX). H.S. and D.Sz. were supported by the Alexander von Humboldt Foundation. R.S. was funded in part by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757. For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. M.T. acknowledges support from the NWO grant 0.16.VIDI.189.162 ("ODIN"). For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. A.A.C.S. and V.R. are supported by the Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) in the form of an Emmy Noether Research Group – Project-ID 445674056 (SA4064/1-1, PI Sander). M.L. gratefully acknowledges support by grants PID2019-105552RB-C41 and MDM-2017-0737 Unidad de Excelencia "María de Maeztu"-Centro de Astrobiología (CSIC-INTA), funded by MCIN/AEI/10.13039/501100011033 and "ESF Investing in your future".

Contact:
Name: Sam Geen
Institution: (1) Anton Pannekoek Institute for Astronomy, University of Amsterdam, 1098 XH Amsterdam, The Netherlands (2) Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
Email: [email protected]

Full list of institutions:
1 Anton Pannekoek Institute for Astronomy, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, Netherlands
2 Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
3 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4 Physics & Astronomy, University of Sheffield, Hounsfield Road, Sheffield, S3 7RH, United Kingdom
5 Department of Physics and Material Science, The University of Memphis, Memphis, TN 38152, USA
6 Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
7 Center for Computational Astrophysics, Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
8 Cardiff Hub for Astrophysics Research and Technology, School of Physics and Astronomy, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK
9 Department of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL, United Kingdom
10 I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Cologne, Germany
11 Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland
12 Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany
13 Armagh Observatory & Planetarium, College Hill, Armagh, BT61 9DG, United Kingdom
14 Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany
15 Centro de Astrobiología, CSIC-INTA, Crtra. de Torrejón a Ajalvir km 4, 28850 Torrejón de Ardoz (Madrid), Spain
16 Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, United Kingdom
17 Institute for Computational Cosmology, Department of Physics, University of Durham, South Road, Durham DH1 3LE, United Kingdom
18 Institute for Astronomy and Astrophysics, University of Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany
19 Zentrum für Astronomie der Universität Heidelberg, Astronomisches Rechen-Institut, Mönchhofstr. 12-14, 69120 Heidelberg, Germany
20 Sub-department of Astrophysics, University of Oxford, DWB, Keble Road, Oxford OX1 3RH, United Kingdom
21 Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziądzka 5, 87-100 Toruń, Poland
22 Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, Netherlands
23 The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, CA-91101 Pasadena, USA
24 SOFIA Science Center, USRA, NASA Ames Research Center, Moffett Field, CA 94045, USA
25 Las Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA
26 Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
27 Departamento de Física Teórica, Universidad Autónoma de Madrid (UAM), Campus de Cantoblanco, E-28049 Madrid, Spain
References

Afflerbach A., Churchwell E., Werner M. W., 1997, ApJ, 478, 190
Agertz O., et al., 2020, MNRAS, 491, 1656
Agrawal P., Szécsi D., Stevenson S., Eldridge J. J., Hurley J., 2022, MNRAS, 512, 5717
Ali A. A., 2021, MNRAS, 501, 4136
Badenes C., et al., 2018, ApJ, 854, 147
Barnes A. T., Longmore S. N., Dale J. E., Krumholz M. R., Kruijssen J. M. D., Bigiel F., 2020, MNRAS, 498, 4906
Basinger C. M., Kochanek C. S., Adams S. M., Dai X., Stanek K. Z., 2021, MNRAS, 508, 1156
Bastian N., Lardo C., 2018, ARA&A, 56, 83
Björklund R., Sundqvist J. O., Puls J., Najarro F., 2021, A&A, 648, A36
Böhm-Vitense E., 1958, ZAp, 46, 108
Bowman D. M., 2021, in OBA Stars: Variability and Magnetic Fields. p. 27, doi:10.5281/zenodo.5109690
Bowman J. D., Rogers A. E. E., Monsalve R. A., Mozdzen T. J., Mahesh N., 2018, Nature, 555, 67
Braithwaite J., Nordlund Å., 2006, A&A, 450, 1077
Braithwaite J., Spruit H. C., 2004, Nature, 431, 819
Brott I., et al., 2011, A&A, 530, A115
Burbidge E. M., Burbidge G. R., Fowler W. A., Hoyle F., 1957, Reviews of Modern Physics, 29, 547
Cantiello M., et al., 2009, A&A, 499, 279
Cardamone C., et al., 2009, MNRAS, 399, 1191
Carlberg R. G., Keating L. C., 2022, ApJ, 924, 77
Castor J. I., Abbott D. C., Klein R. I., 1975, ApJ, 195, 157
Castro N., Fossati L., Langer N., Simón-Díaz S., Schneider F. R. N., Izzard R. G., 2014, A&A, 570, L13
Chabrier G., Hennebelle P., Charlot S., 2014, ApJ, 796, 75
Chen H.-L., Woods T. E., Yungelson L. R., Gilfanov M., Han Z., 2015, MNRAS, 453, 3024
Chevance M., et al., 2022, MNRAS, 509, 272
Cranmer S. R., Owocki S. P., 1995, ApJ, 440, 308
Crowther P. A., 2019, Galaxies, 7, 88
Crowther P. A., et al., 2016, MNRAS, 458, 624
Davies B., Vink J. S., Oudmaijer R. D., 2007, A&A, 469, 1045
Davies B., Crowther P. A., Beasor E. R., 2018, MNRAS, 478, 3138
Dayal P., et al., 2020, MNRAS, 495, 3065
Dessart L., Hillier D. J., Waldman R., Livne E., 2013, MNRAS, 433, 1745
Dobbs C. L., Bending T. J. R., Pettitt A. R., Bate M. R., 2022, MNRAS, 509, 954
Doran E. I., et al., 2013, A&A, 558, A134
Eadie G., Keller B., Harris W. E., 2018, ApJ, 865, 72
Eide M. B., Graziani L., Ciardi B., Feng Y., Kakiichi K., Di Matteo T., 2018, MNRAS, 476, 1174
Eldridge J. J., Stanway E. R., 2020, arXiv e-prints, p. arXiv:2005.11883
Eldridge J. J., Stanway E. R., 2022, arXiv e-prints, p. arXiv:2202.01413
Emerick A., Bryan G. L., Mac Low M.-M., 2019, MNRAS, 482, 1304
Farmer R., Laplace E., de Mink S. E., Justham S., 2021, ApJ, 923, 214
Faucher-Giguère C.-A., 2020, MNRAS, 493, 1614
Federrath C., Schrön M., Banerjee R., Klessen R. S., 2014, ApJ, 790, 128
Ferland G. J., 2003, ARA&A, 41, 517
Ferland G. J., Korista K. T., Verner D. A., Ferguson J. W., Kingdon J. B., Verner E. M., 1998, PASP, 110, 761
Fichtner Y. A., Grassitelli L., Romano-Díaz E., Porciani C., 2022, MNRAS, 512, 4573
Finlay P., et al., 2012, Phys. Rev. C, 85, 055501
Franeck A., Wünsch R., Martínez-González S., Orlitová I., Boorman P., Svoboda J., Szécsi D., Douna V., 2022, ApJ, 927, 212
Fuller J., Piro A. L., Jermyn A. S., 2019, MNRAS, 485, 3661
Gal-Yam A., Leonard D. C., 2009, Nature, 458, 865
Garcia M., Herrero A., Najarro F., Camacho I., Lorenzo M., 2019, MNRAS, 484, 422
Geen S., Rosdahl J., Blaizot J., Devriendt J., Slyz A., 2015, MNRAS, 448, 3248
Geen S., Watson S. K., Rosdahl J., Bieri R., Klessen R. S., Hennebelle P., 2018, MNRAS, 481, 2548
Georgy C., Meynet G., Ekström S., Wade G. A., Petit V., Keszthelyi Z., Hirschi R., 2017, A&A, 599, L5
Gerke J. R., Kochanek C. S., Stanek K. Z., 2015, MNRAS, 450, 3289
Gonzalez-Delgado R. M., et al., 1994, ApJ, 437, 239
Götberg Y., de Mink S. E., Groh J. H., Leitherer C., Norman C., 2019, A&A, 629, A134
Götberg Y., de Mink S. E., McQuinn M., Zapartas E., Groh J. H., Norman C., 2020, A&A, 634, A134
Gräfener G., Vink J. S., 2016, MNRAS, 455, 112
Gräfener G., Owocki S. P., Vink J. S., 2012, A&A, 538, A40
Grassitelli L., Langer N., Mackey J., Gräfener G., Grin N. J., Sander A. A. C., Vink J. S., 2021, A&A, 647, A99
Groenewegen M., Lamers H., Pauldrach A., 1989, A&A, 221, 78
Groh J. H., Meynet G., Ekström S., 2013a, A&A, 550, L7
Groh J. H., Meynet G., Georgy C., Ekström S., 2013b, A&A, 558, A131
Grudić M. Y., Guszejnov D., Offner S. S. R., Rosen A. L., Raju A. N., Faucher-Giguère C.-A., Hopkins P. F., 2022, MNRAS, 512, 216
Guedel M., Briggs K. R., Montmerle T., Audard M., Rebull L., Skinner S. L., 2008, Science, 319, 309
Guidi G., Scannapieco C., Walcher C. J., 2015, MNRAS, 454, 2381
Gutcke T. A., Pakmor R., Naab T., Springel V., 2021, MNRAS, 501, 5597
Haworth T. J., Glover S. C. O., Koepferl C. M., Bisbas T. G., Dale J. E., 2018, New A Rev., 82, 1
Heger A., Woosley S. E., Spruit H. C., 2005, ApJ, 626, 350
Henry R. B. C., Worthey G., 1999, PASP, 111, 919
Higgins E. R., Vink J. S., 2020, A&A, 635, A175
Hirschmann M., et al., 2013, MNRAS, 436, 2929
Horiuchi S., Nakamura K., Takiwaki T., Kotake K., Tanaka M., 2014, MNRAS, 445, L99
Humphreys R. M., Davidson K., 1994, PASP, 106, 1025
Izotov I. I., Guseva N. G., Lipovetskii V. A., Kniazev A. I., Stepanian J. A., 1990, Nature, 343, 238
Izotov Y. I., Schaerer D., Thuan T. X., Worseck G., Guseva N. G., Orlitová I., Verhamme A., 2016, MNRAS, 461, 3683
Jiang Y.-F., Cantiello M., Bildsten L., Quataert E., Blaes O., 2015, ApJ, 813, 74
Jones T. A., Swinbank A. M., Ellis R. S., Richard J., Stark D. P., 2010, MNRAS, 404, 1247
Kaiser E. A., Hirschi R., Arnett W. D., Georgy C., Scott L. J. A., Cristini A., 2020, MNRAS, 496, 1967
Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017a, Nature, 551, 80
Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017b, Nature, 551, 80
Katz N., 1992, ApJ, 391, 502
Kawata D., 2001, ApJ, 558, 598
Kee N. D., Sundqvist J. O., Decin L., de Koter A., Sana H., 2021, A&A, 646, A180
Keller B. W., Kruijssen J. M. D., 2022, MNRAS, 512, 199
Kennicutt Robert C. J., Lee J. C., Funes J. G., J. S., Sakai S., Akiyama S., 2008, ApJS, 178, 247
Keszthelyi Z., Wade G. A., Petit V., 2017a, in Eldridge J. J., Bray J. C., McClelland L. A. S., Xiao L., eds, Vol. 329, The Lives and Death-Throes of Massive Stars. pp 250–254 (arXiv:1702.04460), doi:10.1017/S1743921317002745
Keszthelyi Z., Puls J., Wade G. A., 2017b, A&A, 598, A4
Keszthelyi Z., Meynet G., Georgy C., Wade G. A., Petit V., David-Uraz A., 2019, MNRAS, 485, 5843
Keszthelyi Z., et al., 2020, MNRAS, 493, 518
Keszthelyi Z., Meynet G., Martins F., de Koter A., David-Uraz A., 2021, MNRAS, 504, 2474
Keszthelyi Z., et al., 2022, MNRAS, doi:10.1093/mnras/stac2598
Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
Kotak R., Vink J. S., 2006, A&A, 460, L5
Krumholz M. R., Burkhart B., 2016, MNRAS, 458, 1671
Kudritzki R.-P., Puls J., 2000, ARA&A, 38, 613
Kuiper R., Hosokawa T., 2018, A&A, 616, A101
Lancaster L., Ostriker E. C., Kim J.-G., Kim C.-G., 2021, ApJ, 914, 89
Langer N., 1997, in Nota A., Lamers H., eds, Astronomical Society of the Pacific Conference Series Vol. 120, Luminous Blue Variables: Massive Stars in Transition. p. 83
Langer N., 1998, A&A, 329, 551
Laplace E., Justham S., Renzo M., Götberg Y., Farmer R., Vartanyan D., de Mink S. E., 2021, A&A, 656, A58
Larson R. B., 1974, MNRAS, 169, 229
Leitherer C., et al., 1999, ApJS, 123, 3
Lemasle B., et al., 2018, A&A, 618, A160
Lequeux J., Peimbert M., Rayo J. F., Serrano A., Torres-Peimbert S., 1979, A&A, 80, 155
Livermore R. C., et al., 2015, MNRAS, 450, 1812
Lopez L. A., Krumholz M. R., Bolatto A. D., Prochaska J. X., Ramirez-Ruiz E., Castro D., 2014, ApJ, 795, 121
Lorenzo M., Garcia M., Najarro F., Herrero A., Cerviño M., Castro N., 2022, MNRAS, 516, 4164
Lovegrove E., Woosley S. E., 2013, ApJ, 769, 109
Lucas W. E., Bonnell I. A., Dale J. E., 2020, MNRAS, 493, 4700
Mackey J., Castro N., Fossati L., Langer N., 2015, A&A, 582, A24
Madau P., Dickinson M., 2014, ARA&A, 52, 415
Maeder A., 1987, A&A, 178, 159
Maeder A., 2009, Physics, Formation and Evolution of Rotating Stars. Springer Berlin Heidelberg, doi:10.1007/978-3-540-76949-1
Maeder A., Meynet G., 2000, ARA&A, 38, 143
Maeder A., Meynet G., 2003, A&A, 411, 543
Maeder A., Meynet G., 2004, A&A, 422, 225
Maeder A., Meynet G., 2005, A&A, 440, 1041
Martínez-González S., Wünsch R., Tenorio-Tagle G., Silich S., Szécsi D., Palouš J., 2022, ApJ, 934, 51
Martins F., Schaerer D., Hillier D. J., Meynadier F., Heydari-Malayeri M., Walborn N. R., 2005, A&A, 441, 735
Mathews W. G., Baker J. C., 1971, ApJ, 170, 241
McCray R., Snow T. P. J., 1979, ARA&A, 17, 213
McDonald S. L. E., Davies B., Beasor E. R., 2022, MNRAS, 510, 3132
McKee C. F., Ostriker J. P., 1977, ApJ, 218, 148
McLeod A. F., Dale J. E., Evans C. J., Ginsburg A., Kruijssen J. M. D., Pellegrini E. W., Ramsay S. K., Testi L., 2019, MNRAS, 486, 5263
McLeod A. F., et al., 2020, ApJ, 891, 25
McLeod A. F., et al., 2021, MNRAS, 508, 5425
Meynet G., Eggenberger P., Maeder A., 2011, A&A, 525, L11
Meynet G., et al., 2015, A&A, 575, A60
Micheva G., Oey M. S., Jaskot A. E., James B. L., 2017, ApJ, 845, 165
Moe M., Di Stefano R., 2017, ApJS, 230, 15
Montargès M., et al., 2021, Nature, 594, 365
Müller B., et al., 2019, MNRAS, 484, 3307
Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
Nogueras-Lara F., et al., 2018, A&A, 620, A83
Olivier G. M., Berg D. A., Chisholm J., Erb D. K., Pogge R. W., Skillman E. D., 2021a, arXiv e-prints, p. arXiv:2109.06725
Olivier G. M., Lopez L. A., Rosen A. L., Nayak O., Reiter M., Krumholz M. R., Bolatto A. D., 2021b, ApJ, 908, 68
Oskinova L., Schaerer D., 2022, arXiv e-prints, p. arXiv:2203.04987
Owocki S. P., 2004, in Maeder A., Eenens P., eds, IAU Symposium Vol. 215, Stellar Rotation. p. 515
Padoan P., Pan L., Haugbølle T., Nordlund Å., 2016, ApJ, 822, 11
Paxton B., et al., 2013, ApJS, 208, 4
Pedersen M. G., et al., 2021, Nature Astronomy, 5, 715
Peimbert M., Torres-Peimbert S., Rayo J. F., 1978, ApJ, 220, 516
Pellegrini E. W., Rahner D., Reissl S., Glover S. C. O., Klessen R. S., Rousseau-Nepton L., Herrera-Camus R., 2020, MNRAS, 496, 339
Petit V., et al., 2017, MNRAS, 466, 1052
Podsiadlowski P., Joss P. C., Hsu J. J. L., 1992, ApJ, 391, 246
Portegies Zwart S. F., McMillan S. L. W., Gieles M., 2010, ARA&A, 48, 431
Potter A. T., Chitre S. M., Tout C. A., 2012, MNRAS, 424, 2358
Prinja R. K., Barlow M., Howarth I. D., 1990, ApJ, 361, 607
Puchwein E., Haardt F., Haehnelt M. G., Madau P., 2019, MNRAS, 485, 47
Puls J., Vink J. S., Najarro F., 2008, A&A Rev., 16, 209
Puls J., Sundqvist J. O., Markova N., 2015, in Meynet G., Georgy C., Groh J., Stee P., eds, IAU Symposium Vol. 307, New Windows on Massive Stars. pp 25–36 (arXiv:1409.3582), doi:10.1017/S174392131400622X
Ramachandran V., Hainich R., Hamann W. R., Oskinova L. M., Shenar T., Sander A. A. C., Todt H., Gallagher J. S., 2018a, A&A, 609, A7
Ramachandran V., Hamann W. R., Hainich R., Oskinova L. M., Shenar T., Sander A. A. C., Todt H., Gallagher J. S., 2018b, A&A, 615, A40
Ramachandran V., et al., 2019, A&A, 625, A104
Renzo M., et al., 2019, A&A, 624, A66
Rey M. P., Starkenburg T. K., 2022, MNRAS, 510, 4208
Rieder S., Dobbs C., Bending T., Liow K. Y., Wurster J., 2022, MNRAS, 509, 6155
Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19
Romano D., Karakas A. I., Tosi M., Matteucci F., 2010, A&A, 522, A32
Rosdahl J., Schaye J., Dubois Y., Kimm T., Teyssier R., 2017, MNRAS, 466, 11
Rosdahl J., et al., 2018, MNRAS, 479, 994
Rosen A. L., Lopez L. A., Krumholz M. R., Ramirez-Ruiz E., 2014, MNRAS, 442, 2701
Sabhahit G. N., Vink J. S., Higgins E. R., Sander A. A. C., 2021, MNRAS, 506, 4473
Sana H., et al., 2012, Science, 337, 444
Sander A. A. C., Vink J. S., 2020, MNRAS, 499, 873
Saviane I., Rizzi L., Held E. V., Bresolin F., Momany Y., 2002, A&A, 390, 59
Schaerer D., Fragos T., Izotov Y. I., 2019, A&A, 622, L10
Schmutz W., Leitherer C., Gruenwald R., 1992, PASP, 104, 1164
Schootemeijer A., Langer N., Grin N. J., Wang C., 2019, A&A, 625, A132
Schootemeijer A., et al., 2021, A&A, 646, A106
Scott L. J. A., Hirschi R., Georgy C., Arnett W. D., Meakin C., Kaiser E. A., Ekström S., Yusof N., 2021, MNRAS, 503, 4208
Searle L., 1971, ApJ, 168, 327
Senchyna P., Stark D. P., Mirocha J., Reines A. E., Charlot S., Jones T., Mulchaey J. S., 2020, MNRAS, 494, 941
Shenar T., Gilkis A., Vink J. S., Sana H., Sander A. A. C., 2020, A&A, 634, A79
Singh S., et al., 2022, Nature Astronomy, 6, 607
Smartt S. J., 2009, ARA&A, 47, 63
Smith L. J., Norris R. P. F., Crowther P. A., 2002, MNRAS, 337, 1309
Smith M. C., Bryan G. L., Somerville R. S., Hu C.-Y., Teyssier R., Burkhart B., Hernquist L., 2021, MNRAS, 506, 3882
Spruit H. C., 2002, A&A, 381, 923
Steidel C. C., Strom A. L., Pettini M., Rudie G. C., Reddy N. A., Trainor R. F., 2016, ApJ, 826, 159
Sukhbold T., Adams S., 2020, MNRAS, 492, 2578
Sutherland R. S., Dopita M. A., 1993, ApJS, 88, 253
Szécsi D., Wünsch R., 2019, ApJ, 871, 20
Szécsi D., Langer N., Yoon S.-C., Sanyal D., de Mink S., Evans C. J., Dermine T., 2015, A&A, 581, A15
Szécsi D., Mackey J., Langer N., 2018, A&A, 612, A55
Takahashi K., Langer N., 2021, A&A, 646, A19
Townsley L. K., Feigelson E. D., Montmerle T., Broos P. S., Chu Y.-H., Garmire G. P., 2003, ApJ, 593, 874
Trebitsch M., et al., 2021, A&A, 653, A154
Vartanyan D., Laplace E., Renzo M., Götberg Y., Burrows A., de Mink S. E., 2021, ApJ, 916, L5
Verliat A., Hennebelle P., González M., Lee Y.-N., Geen S., 2022, A&A, 663, A6
Vila-Costas M. B., Edmunds M. G., 1992, MNRAS, 259, 121
Vink J. S., 2022, ARA&A, p. arXiv:2109.08164
Vink J. S., Gräfener G., 2012, ApJ, 751, L34
Vink J. S., de Koter A., Lamers H. J. G. L. M., 2001, A&A, 369, 574
Vink J. S., Brott I., Gräfener G., Langer N., de Koter A., Lennon D. J., 2010, A&A, 512, L7
Vink J. S., Higgins E. R., Sander A. A. C., Sabhahit G. N., 2021, MNRAS, 504, 146
Weaver R., McCray R., Castor J., Shapiro P., Moore R., 1977, ApJ, 218, 377
White R. L., Long K. S., 1991, ApJ, 373, 543
Woosley S. E., Heger A., 2006, ApJ, 637, 914
Worseck G., Prochaska J. X., Hennawi J. F., McQuinn M., 2016, ApJ, 825, 144
Yoon S. C., Langer N., 2005a, A&A, 435, 967
Yoon S. C., Langer N., 2005b, A&A, 443, 643
Yung L. Y. A., Somerville R. S., Finkelstein S. L., Popping G., Davé R., Venkatesan A., Behroozi P., Ferguson H. C., 2020, MNRAS, 496, 4574
Zapartas E., et al., 2017, A&A, 601, A29
da Silva R. L., Fumagalli M., Krumholz M., 2012, ApJ, 745, 145
de Mink S. E., Pols O. R., Langer N., Izzard R. G., 2009, A&A, 507, L1
ud-Doula A., Owocki S. P., 2002, ApJ, 576, 413
van Zee L., Haynes M. P., 2006, ApJ, 636, 214
4NFRT4oBgHgl3EQfozeT/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
5NE4T4oBgHgl3EQfbwwx/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:caa20b2f4dfa013d5cd21de61949ab43255877f3adf3921d4f7f214d9a7f54d7
|
3 |
+
size 3014701
|
5NE4T4oBgHgl3EQfbwwx/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:adc330a573ab78385624b658b6b80e859540a2528e7c7ebe42fa3e43aa361d7e
|
3 |
+
size 120204
|
5tAyT4oBgHgl3EQfcfeN/content/tmp_files/2301.00284v1.pdf.txt
ADDED
@@ -0,0 +1,1421 @@
arXiv:2301.00284v1 [math.DG] 31 Dec 2022

SQUARE ROOT NORMAL FIELDS FOR LIPSCHITZ SURFACES AND THE WASSERSTEIN FISHER RAO METRIC

EMMANUEL HARTMAN∗, MARTIN BAUER†, AND ERIC KLASSEN‡
Abstract. The Square Root Normal Field (SRNF) framework is a method in the area of shape analysis that defines a (pseudo) distance between unparametrized surfaces. For piecewise linear (PL) surfaces it was recently proved that the SRNF distance between unparametrized surfaces is equivalent to the Wasserstein Fisher Rao (WFR) metric on the space of finitely supported measures on S2. In the present article we extend this point of view to a much larger set of surfaces; we show that the SRNF distance on the space of Lipschitz surfaces is equivalent to the WFR distance between Borel measures on S2. For the space of spherical surfaces this result directly allows us to characterize the non-injectivity and the (closure of the) image of the SRNF transform. In the last part of the paper we further generalize this result by showing that the WFR metric for general measure spaces can be interpreted as an optimization problem over the diffeomorphism group of an independent background space.
1. Introduction. The investigations of this article are motivated by applications in the area of mathematical shape analysis, which seeks to quantify differences, perform classification, and explain variability for populations of shapes [51, 40, 13, 28]. More specifically, the results of this article concern the Square Root Normal Field distance [16] on the space of surfaces and the Wasserstein Fisher Rao metric [9, 26] from unbalanced optimal transport. Before we describe the contributions of the current work in more detail, we will briefly summarize some results from these two areas.

Shape analysis of surfaces: For the purpose of this article we consider a shape to be a parametrized surface or curve in Rd, where we identify two objects if they only differ by a translation and/or a reparametrization. In practice, it is often of interest to mod out by further shape preserving group actions, such as the groups of rotations or scalings. To keep the presentation simple, we will ignore these additional finite dimensional groups. Consequently, the resulting shape space is an infinite dimensional, non-linear (quotient) space, which makes the application of statistical techniques to analyse these types of data a highly challenging task. A common approach to overcome these difficulties can be found in the area of geometric statistics [35, 36], in which one develops statistical frameworks based on (Riemannian) geometry. In the context of shape analysis of surfaces or curves, a variety of different metrics have been proposed for this purpose; this includes metrics induced by (right-invariant) metrics on diffeomorphism groups [51, 31] and reparametrization invariant metrics on the space of immersions [40, 3, 30], which are directly related to the investigations of the present article, as we will explain next.

In the latter approach the calculation of the distance (similarity) between two shapes reduces to two tasks: calculating the geodesic distance on the space of immersions (parametrized surfaces or curves, resp.) and minimizing over the action of the shape preserving group actions, i.e., diffeomorphisms of the parameter space and translations in Rd. In general there do not exist any explicit formulas for geodesics, and thus computing solutions to the geodesic boundary value problem (and thus of the distance) is a highly non-trivial task that usually has to be solved using numerical optimization techniques, see e.g. [14, 2].

For specific examples of Riemannian metrics, however, simplifying transformations have been developed that allow for explicit calculations of geodesics and geodesic distance. This includes in particular the family of Ga,b-metrics on the space of curves [5, 34, 33, 50], a family of first order Sobolev type metrics that are often called elastic metrics due to their connections to linear elasticity theory; see e.g. [33, 8, 5]. For the specific choice of parameters a = 1, b = 1/2 the corresponding transformation is the so-called Square-Root-Velocity (SRV) transform [39], which is widely used in

∗Department of Mathematics, Florida State University ([email protected])
†Department of Mathematics, Florida State University and University of Vienna ([email protected])
‡Department of Mathematics, Florida State University ([email protected])
applications; see [40] and the references therein. The advantage of this transformation is that it reduces the shape comparison problem to a single optimization over the shape preserving group actions, i.e., in the setting of the present article over reparametrizations and translations. This computational simplification has led to both the development of efficient algorithms [49, 12, 39] and to analytic results on the existence of minimizers and optimal parametrizations [7, 24, 44].

The family of elastic Ga,b metrics has a natural generalization to a four parameter family of metrics on the space of surfaces [42]. Similarly to the case of curves, simplifying transformations have also been proposed in this more complicated situation [19, 20, 16, 41]. Notably, as a generalization of the SRV transform, the Square Root Normal Field (SRNF) transformation [16] has been introduced. In contrast to the situation for curves, the corresponding Riemannian metric for this transformation is degenerate and, furthermore, it only leads to a first order approximation of the geodesic distance. Nonetheless it defines a reparametrization invariant (pseudo-) distance on the space of surfaces, which still allows for efficient computations using several methods of approximating the optimization over the diffeomorphism group [23, 4] and has proven successful in several applications; see [21, 17, 29, 22] and the references therein.

Unbalanced optimal transport: The second core theme of the present article can be found in the theory of optimal transport (OT). Since Monge's formulation of OT as a non-convex optimization problem in the space of transport maps, many formulations of the problem have been proposed to give insight into the theoretical properties of the problem as well as efficient methods for computing the solution; see [45, 46] for a comprehensive overview of the field.

In classical optimal transport theory one considers normalized (probability) distributions. It is, however, important for many applications to relax this normalization assumption and compute transportation plans between arbitrary positive measures. Motivated by this observation, the theory of optimal transport has been extended to measures with different masses. This field, called unbalanced optimal transport, has seen rapid developments in the past years and several different frameworks have been proposed [9, 25, 27, 37]. Among them is the Wasserstein Fisher Rao (WFR) distance, an interpolating distance between the quadratic Wasserstein metric and the Fisher–Rao metric, which was introduced independently by [9] and [26]. The WFR distance has been applied to a variety of problems where it is more natural to consider optimal transport in an unbalanced setting. These applications range from color transfer [10], to earthquake epicenter location [52] and document semantic similarity metrics [47]. Because of the growing field of applications, several algorithms have been proposed to compute the Wasserstein Fisher Rao metric. A variation on the popular Sinkhorn algorithm to solve for an entropy regularized version of the distance was proposed by [10], and an alternating minimization algorithm that computes an exact solution was introduced in [6].

1.1. Contributions of the article. Recently a new and surprising relationship between these two areas (shape analysis and unbalanced optimal transport) has been found. Namely, in [6] it has been shown that for triangulated surfaces the calculation of the SRNF shape distance can be reduced to calculating the WFR distance between their corresponding surface area measures. The presentation in [6] was entirely focused on the discrete (PL) setting and the proof of the result essentially reduced to algebraic considerations. In the first part of the present article we build the analytical tools to extend this result to the infinite dimensional setting, which contains in particular the original setup of the SRNF distance, the space of smooth surfaces. The main result of this part of our article – cf. Theorem 3.1 – shows that the SRNF shape distance between any two Lipschitz surfaces is equal to the WFR distance between their surface area measures.

As a direct consequence of this result we are able to answer two fundamental questions regarding the SRNF transform: since the inception of the SRNF transform, it has been understood that the map is neither injective nor surjective [16]. Characterizing the image and non-injectivity have, however, remained open problems. Recently a first degeneracy result in the context of closed surfaces has been found [18]. Using our equivalence result we are able to obtain a characterization of the closure of the image of this transform – cf. Theorem 3.6 – and a new strong degeneracy result of the corresponding distance (non-injectivity of the transform, resp.) – cf. Theorem 3.8.

In the second part we further explore the equivalence result for more general unbalanced optimal transport problems. Generalizations of some of the intermediate results of the first part allow us to offer a novel formulation of the WFR metric as a diffeomorphic optimization problem – cf. Theorem 4.1. Whereas the main result of the first part of the article relates the WFR on S2 with a specific choice of parameter to a diffeomorphic optimization problem, we here extend this relationship to the WFR with any choice of parameter defined on any connected, compact, oriented Riemannian manifold, N. Notably, the space of diffeomorphisms we have to optimize over does not depend on N, but can be chosen as the diffeomorphism group of some background manifold, which only needs to be of dimension greater than or equal to two.

Acknowledgements. The authors thank FX Vialard and Cy Maor for useful discussions during the preparation of this manuscript. M. Bauer was supported by NSF-grants 1912037 and 1953244 and by FWF grant P 35813-N. E. Hartman was supported by NSF grant DMS-1953244.
2. Preliminaries.

2.1. The Wasserstein Fisher Rao Distance. In the following we will summarize the Kantorovich formulation of the Wasserstein Fisher Rao distance, as introduced in [11] for measures on a smooth, connected, compact, oriented Riemannian manifold, N. Therefore we denote by M(N) the space of finite Borel measures on N. In the Kantorovich formulation of the Wasserstein-Fisher-Rao distance we will define a functional on the space of semi-couplings. Therefore we first recall the definition of a semi-coupling:

Definition 2.1 (Semi-couplings [11]). Given µ, ν ∈ M(N) the set of all semi-couplings from µ to ν is given by

\[ \Gamma(\mu,\nu)=\left\{(\gamma_0,\gamma_1)\in\mathcal{M}(N\times N)^2 \,\middle|\, (\mathrm{Proj}_0)_{\#}\gamma_0=\mu,\ (\mathrm{Proj}_1)_{\#}\gamma_1=\nu\right\}. \]

To define the Wasserstein-Fisher-Rao distance from µ to ν we define a functional on the space of semi-couplings from µ to ν. Let d denote the geodesic distance on N and δ ∈ (0, ∞). We consider the functional

\[ J_\delta : \Gamma(\mu,\nu)\to\mathbb{R},\qquad (\gamma_1,\gamma_2)\mapsto 4\delta^2\left(\mu(N)+\nu(N)-2\int_{N\times N}\frac{\sqrt{\gamma_1\gamma_2}}{\gamma}(u,v)\,\cos\!\big(d(u,v)/2\delta\big)\,d\gamma(u,v)\right), \]

where γ ∈ M(N × N) is any measure such that γ1, γ2 ≪ γ. Note that in the case where N = S2 we have d(u, v) = cos−1(u · v). Thus for δ = 1/2,

\[ J_\delta(\gamma_1,\gamma_2)=\int_{S^2\times S^2}\left|\sqrt{\tfrac{\gamma_1}{\gamma}}(u,v)\,u-\sqrt{\tfrac{\gamma_2}{\gamma}}(u,v)\,v\right|^2 d\gamma(u,v). \tag{2.1} \]

Definition 2.2 (Wasserstein-Fisher-Rao Distance [11, 26]). The Wasserstein-Fisher-Rao distance on M(N) is given by

\[ \mathrm{WFR}_\delta : \mathcal{M}(N)\times\mathcal{M}(N)\to\mathbb{R}_{\geq 0} \tag{2.2} \]
defined via
\[ (\mu,\nu)\mapsto\inf_{(\gamma_0,\gamma_1)\in\Gamma(\mu,\nu)}\sqrt{J_\delta(\gamma_0,\gamma_1)}. \tag{2.3} \]

Some results in this article will specifically apply to the case where δ = 1/2. To simplify our notation, we define J := J_{1/2} and WFR := WFR_{1/2}.
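For finitely supported measures the integral in J_δ collapses to a sum over shared atoms, which makes the functional easy to evaluate numerically. The following sketch (our own illustration, not code from the paper; all names are ours) evaluates J_δ for a pair of atomic measures on S² × S² carried by common atom pairs (u_i, v_i) with masses g1_i, g2_i:

```python
import numpy as np

def J_delta(u, v, g1, g2, delta=0.5):
    """Evaluate J_delta(gamma_1, gamma_2) for atomic measures on S^2 x S^2.

    gamma_1, gamma_2 share the atoms (u[i], v[i]) (unit vectors in R^3) and
    carry masses g1[i], g2[i].  For atomic measures the integral of
    sqrt(gamma_1 gamma_2)/gamma against a dominating gamma reduces to
    sum_i sqrt(g1[i] * g2[i]) * cos(d_i / 2 delta), independent of gamma.
    """
    d = np.arccos(np.clip(np.sum(u * v, axis=1), -1.0, 1.0))  # geodesic distance on S^2
    cross = np.sum(np.sqrt(g1 * g2) * np.cos(d / (2.0 * delta)))
    return 4.0 * delta**2 * (g1.sum() + g2.sum() - 2.0 * cross)
```

For δ = 1/2 the value agrees with the right-hand side of (2.1), i.e. with Σ_i |√(g1_i) u_i − √(g2_i) v_i|².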
2.2. The Square Root Normal Field Shape Distance. In mathematical shape analysis, one defines metrics that measure the differences between geometric objects [51, 3, 40, 13]. In this article we consider geometric objects described by unparameterized surfaces, which are elements of an infinite dimensional non-linear space modulo several finite and infinite dimensional group actions. As a result, computations in this space are difficult and even simple statistical operations are not well defined. Riemannian geometry can help to overcome these challenges. In such a framework, one considers the space of all surfaces as an infinite dimensional manifold and equips it with a Riemannian metric that is invariant to the group action, which allows one to consider the induced metric on the quotient space.

For our purposes we will consider immersions of a smooth, connected, compact, oriented Riemannian 2-dimensional manifold, M, with or without boundary. We denote the space of all Lipschitz immersions of M into R3 by Imm(M, R3), i.e.,

\[ \operatorname{Imm}(M,\mathbb{R}^3)=\{f\in W^{1,\infty}(M,\mathbb{R}^3) : Tf \text{ is inj. a.e.}\}. \tag{2.4} \]

As we are interested in unparametrized surfaces, we have to factor out the action of the group of diffeomorphisms. In the context of Lipschitz immersions the natural group of reparametrizations for us to consider is the group of all orientation preserving, bi-Lipschitz diffeomorphisms:

\[ \Gamma(M)=\{\gamma\in W^{1,\infty}(M,M) : \gamma^{-1}\in W^{1,\infty}(M,M),\ |D\gamma|>0 \text{ a.e.}\}, \]

where |Dγ| denotes the Jacobian determinant of γ, which is well-defined as Dγ ∈ L∞. Note that this reparametrization group acts by composition from the right on Imm(M, R3). In addition to the action by the reparametrization group, we also want to identify surfaces that only differ by a translation. This leads us to consider the following quotient space:

\[ \mathcal{S}:=\operatorname{Imm}(M,\mathbb{R}^3)/(\Gamma(M)\times\text{trans}). \tag{2.5} \]

In the following we will equip Imm(M) with a reparameterization invariant distance, the so-called square root normal field (SRNF) distance. The SRNF map (distance, resp.) was originally introduced by Jermyn et al. in [15] for the space of smooth immersions, but it naturally extends to the space of all Lipschitz surfaces, as demonstrated in [6]. We now recall the definition of this distance.

For any given f ∈ Imm(M, R3), the orientation on M allows us to consider the unit normal vector field nf : M → R3, which is well-defined as an element of L∞(M, R3). Furthermore, let {v, w} be an orthonormal basis of TxM. Then for any f ∈ Imm(M, R3) we can define the area multiplication factor at x ∈ M via af(x) = |dfx(v) × dfx(w)|. The SRNF map is then given by

\[ \Phi : \operatorname{Imm}(M,\mathbb{R}^3)/\text{translations}\to L^2(M,\mathbb{R}^3) \tag{2.6} \]
\[ f\mapsto q_f \ \text{ where }\ q_f(x):=\sqrt{a_f(x)}\,n_f(x). \tag{2.7} \]

From this transform we define a distance on Imm(M, R3)/translations by

\[ d_{\operatorname{Imm}}(f_1,f_2)=\|\Phi(f_1)-\Phi(f_2)\|_{L^2}. \]

Next we consider a right-action of Γ(M) on L2(M, R3) that is compatible with the mapping Φ. For q ∈ L2(M, R3) and γ ∈ Γ(M) we let

\[ (q*\gamma)(x)=\sqrt{|D\gamma(x)|}\,q(\gamma(x)). \tag{2.8} \]

It is easy to check that the action of Γ(M) on L2(M, R3) is by linear isometries and that for any f ∈ Imm and γ ∈ Γ,

\[ \Phi(f)*\gamma=\Phi(f\circ\gamma). \]

Thus, it follows that the SRNF distance on Imm(M, R3) is invariant with respect to this action and thus it descends to a (pseudo) distance on the quotient space S, which is given by

\[ d_{\mathcal{S}}([f_1],[f_2])=\inf_{\gamma\in\Gamma(M)}d_{\operatorname{Imm}}(f_1,f_2\circ\gamma),\qquad [f_1],[f_2]\in\mathcal{S}(M). \]

As we will see later, the induced (pseudo) distance on the quotient space is highly degenerate.

2.3. Equivalence of WFR and SRNF in the piecewise linear category. In [6] a surprising equivalence of the WFR and SRNF distances was shown: for piecewise linear surfaces it was proved that the SRNF distance can be reduced to the WFR distance between finitely supported measures. To formulate this result in detail we first associate to every q ∈ L2(M, R3) a measure on S2; namely, for any open U ⊆ S2, we define

\[ q_*U=\{x\in M \mid q(x)\neq 0 \text{ and } q(x)/|q(x)|\in U\} \]

and define the map

\[ L^2(M,\mathbb{R}^3)\to\mathcal{M}(S^2)\ \text{ via }\ q\mapsto\mu_q,\quad\text{where for } U\subseteq S^2,\ \mu_q(U)=\int_{q_*U}q(x)\cdot q(x)\,dm. \]

The result proved in [6] is then formulated as:

Theorem 2.3. Given two piecewise linear surfaces S1 and S2 parameterized by f and g, the SRNF shape distance can be computed as an unbalanced transport problem. More precisely, we have

\[ d_{\mathcal{S}}([f],[g])=\inf_{\gamma\in\Gamma(M)}\|q_f-q_g*\gamma\|=\mathrm{WFR}(\mu_{q_f},\mu_{q_g}), \]

where qf and qg are the SRNFs of f and g, respectively.

In the next section we will extend this result to all Lipschitz immersions (Borel measures, resp.).
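For a triangulated surface the measure µ_{q_f} appearing in Theorem 2.3 is finitely supported: each flat face contributes one atom on S², located at the face's unit normal and weighted by the face area (since |q_f|² = a_f). A minimal sketch of this discretization (our own illustration; not code from [6]):

```python
import numpy as np

def srnf_measure(verts, faces):
    """Finitely supported surface area measure mu_q of a triangulated surface.

    Returns atoms (unit face normals, points of S^2) and masses (face areas),
    the discrete counterpart of mu_q(U) = int_{q*U} |q(x)|^2 dm.
    """
    tri = verts[faces]                                   # (F, 3, 3) corner coordinates
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)          # a_f, the atom masses
    normals = cross / (2.0 * areas[:, None])             # n_f, the atom locations
    return normals, areas

# regular tetrahedron with outward-oriented faces
verts = np.array([[1., 1., 1.], [1., -1., -1.], [-1., 1., -1.], [-1., -1., 1.]])
faces = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
normals, areas = srnf_measure(verts, faces)
```

The total mass of µ_q is the surface area, and for a closed, consistently oriented surface the area-weighted normals sum to zero, which anticipates the closure condition of Theorem 3.6 below.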
3. The SRNF distance. For the goal of extending the result of Theorem 2.3 to all Lipschitz surfaces, we will consider specifically δ = 1/2 in the definition of the WFR metric.

3.1. Equivalence of the WFR and SRNF distances. Our main result of this section is the following theorem, which is slightly stronger than the desired equivalence result.

Theorem 3.1. Given q1, q2 ∈ L2(M, R3),

\[ \inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}=\mathrm{WFR}(\mu_{q_1},\mu_{q_2}). \]

In particular, given f, g ∈ W1,∞(M, R3) we can calculate their SRNF distance as an unbalanced OMT problem via

\[ d_{\mathcal{S}}([f],[g])=\mathrm{WFR}(\mu_{q_f},\mu_{q_g}), \]

where qf and qg are the SRNFs of f and g respectively.

Remark 1. Note that as a direct consequence of Theorem 3.1 we can also conclude the extension of Theorem 2.3 to the original setup of the SRNF distance, the space of all smooth surfaces.

The proof of Theorem 3.1 relies on a series of technical lemmas, which we will show next.
Lemma 3.2. Let X, Y be topological spaces and ρ : X → Y be a measurable function with respect to the Borel σ-algebras. If µ, µ1 ∈ M(X) and γ, γ1 ∈ M(Y) are such that µ1 ≪ µ, γ = ρ_*µ, and γ1 = ρ_*µ1, then γ1 ≪ γ. Furthermore, \(\frac{\mu_1}{\mu}=\frac{\gamma_1}{\gamma}\circ\rho\) almost everywhere.

Proof. Let U ⊆ Y be open such that γ(U) = 0. By definition, µ(ρ−1(U)) = 0. Since µ1 ≪ µ, µ1(ρ−1(U)) = 0. Therefore, γ1(U) = 0. This proves γ1 ≪ γ. Following the definitions of the Radon-Nikodym derivatives, pushforwards, and the change of variables formula, we obtain

\[ \int_{\rho^{-1}(U)}\frac{\mu_1}{\mu}\,d\mu=\int_{\rho^{-1}(U)}d\mu_1=\int_{U}d\gamma_1=\int_{U}\frac{\gamma_1}{\gamma}\,d\gamma=\int_{\rho^{-1}(U)}\frac{\gamma_1}{\gamma}\circ\rho\,d\mu. \]

Thus, \(\frac{\mu_1}{\mu}=\frac{\gamma_1}{\gamma}\circ\rho\) almost everywhere.

Given q ∈ L2(M, R3) we can define a function from M to S2 that takes every point x ∈ M to the unit vector in the direction of q(x). As a matter of defining this function at every point, we can canonically choose the north pole of S2 for points where q(x) = 0.

Definition 3.3. For q ∈ L2(M, R3) we define the unit vector map of q as

\[ \bar q : M\to S^2 \ \text{ given by }\ x\mapsto\begin{cases}\dfrac{q(x)}{|q(x)|} & \text{if } q(x)\neq 0,\\[2pt] (1,0,0) & \text{otherwise.}\end{cases} \]
Note that since q ∈ L2(M, R3), it follows that \(\bar q : M\to S^2\) is measurable. Let q ∈ L2(M, R3). We can define a measure, νq ∈ M(M), via

\[ \nu_q(U)=\int_U |q(x)|^2\,dm \]

for all open U ⊆ M. Note that νq ≪ m and \(\frac{\nu_q}{m}=|q|^2\). Further, we can equivalently define µq as the pushforward of νq via \(\bar q\).

Lemma 3.4. Let q ∈ L2(M, R3) and µq ∈ M(S2) be the measure associated with q. Then \(\mu_q=\bar q_*\nu_q\).

Proof. Let U ⊆ S2 be open and define M0 = {x ∈ M | q(x) = 0}. If (1, 0, 0) ∉ U, then \(\bar q^{-1}(U)=q_*U\) and thus

\[ \bar q_*\nu_q(U)=\int_{\bar q^{-1}(U)}|q(x)|^2\,dm=\int_{q_*U}|q(x)|^2\,dm=\mu_q(U). \]

If (1, 0, 0) ∈ U, then \(\bar q^{-1}(U)=q_*U\cup M_0\) and thus

\[ \bar q_*\nu_q(U)=\int_{\bar q^{-1}(U)}|q(x)|^2\,dm=\int_{q_*U}|q(x)|^2\,dm+\int_{M_0}|q(x)|^2\,dm=\mu_q(U), \]

where the second integral vanishes since q = 0 on M0.
Leveraging what we have proven above, we may show a key continuity result that will then allow us to complete the proof of the main theorem.

Lemma 3.5. The map (L2(M, R3), ∥·∥L2) → (M(S2), WFR) defined via q ↦ µq given by Equation (2.3) is Lipschitz continuous with Lipschitz constant K = 1.

Proof. Let q1, q2 ∈ L2(M, R3). For any semi-coupling (γ1, γ2) ∈ Γ(µq1, µq2),

\[ \mathrm{WFR}(\mu_{q_1},\mu_{q_2})\leq\sqrt{J_\delta(\gamma_1,\gamma_2)}. \]

Thus, to prove the theorem we must construct (γ1, γ2) ∈ Γ(µq1, µq2) such that \(J_\delta(\gamma_1,\gamma_2)=\|q_1-q_2\|^2_{L^2}\). To construct such a semi-coupling we first construct ρ : M → S2 × S2 defined by the unit vector maps of q1 and q2 on the first and second factor respectively, i.e. the map is given by \(\rho(x)=(\bar q_1(x),\bar q_2(x))\). Since \(\bar q_1\) and \(\bar q_2\) are individually measurable, so is ρ. We can then define γ1, γ2 ∈ M(S2 × S2) via \(\gamma_1=\rho_*\nu_{q_1}\) and \(\gamma_2=\rho_*\nu_{q_2}\).

Claim 1. The pair of measures (γ1, γ2) is a semi-coupling from µq1 to µq2.

Proof of claim. Let U ⊆ S2 be open. Thus,

\[ \gamma_1(U\times S^2)=\nu_{q_1}\big(\rho^{-1}(U\times S^2)\big)=\nu_{q_1}\big(\bar q_1^{\,-1}(U)\cap\bar q_2^{\,-1}(S^2)\big)=\nu_{q_1}\big(\bar q_1^{\,-1}(U)\big)=\mu_{q_1}(U) \]

and

\[ \gamma_2(S^2\times U)=\nu_{q_2}\big(\rho^{-1}(S^2\times U)\big)=\nu_{q_2}\big(\bar q_1^{\,-1}(S^2)\cap\bar q_2^{\,-1}(U)\big)=\nu_{q_2}\big(\bar q_2^{\,-1}(U)\big)=\mu_{q_2}(U). \]

So (γ1, γ2) is a semi-coupling from µq1 to µq2.

Recall from the definition of the functional Jδ that we need to construct γ ∈ M(S2 × S2) such that γ1, γ2 ≪ γ. Define \(\gamma=\rho_*m\). We know νq1, νq2 ≪ m. Thus, by Lemma 3.2, γ1, γ2 ≪ γ. Furthermore,

\[ |q_1|^2=\frac{\nu_{q_1}}{m}=\frac{\gamma_1}{\gamma}\circ\rho\ \text{ a.e.}\qquad\text{and}\qquad |q_2|^2=\frac{\nu_{q_2}}{m}=\frac{\gamma_2}{\gamma}\circ\rho\ \text{ a.e.} \]

So,

\begin{align*}
J_\delta(\gamma_1,\gamma_2)&=\int_{S^2\times S^2}\left|\sqrt{\tfrac{\gamma_1}{\gamma}}(u,v)\,u-\sqrt{\tfrac{\gamma_2}{\gamma}}(u,v)\,v\right|^2 d\gamma(u,v)\\
&=\int_{S^2\times S^2}\tfrac{\gamma_1}{\gamma}(u,v)\,d\gamma(u,v)+\int_{S^2\times S^2}\tfrac{\gamma_2}{\gamma}(u,v)\,d\gamma(u,v)-2\int_{S^2\times S^2}\tfrac{\sqrt{\gamma_1\gamma_2}}{\gamma}(u,v)\,\langle u,v\rangle\,d\gamma(u,v)\\
&=\int_{\rho^{-1}(S^2\times S^2)}\tfrac{\gamma_1}{\gamma}\circ\rho(x)\,dm+\int_{\rho^{-1}(S^2\times S^2)}\tfrac{\gamma_2}{\gamma}\circ\rho(x)\,dm-2\int_{\rho^{-1}(S^2\times S^2)}\sqrt{\tfrac{\gamma_1}{\gamma}\circ\rho(x)}\sqrt{\tfrac{\gamma_2}{\gamma}\circ\rho(x)}\,\big\langle\bar q_1(x),\bar q_2(x)\big\rangle\,dm\\
&=\int_M|q_1(x)|^2\,dm+\int_M|q_2(x)|^2\,dm-2\int_M|q_1(x)||q_2(x)|\left\langle\frac{q_1(x)}{|q_1(x)|},\frac{q_2(x)}{|q_2(x)|}\right\rangle dm\\
&=\|q_1-q_2\|^2_{L^2}.
\end{align*}

Thus,

\[ \mathrm{WFR}(\mu_{q_1},\mu_{q_2})\leq\sqrt{J_\delta(\gamma_1,\gamma_2)}=1\cdot\|q_1-q_2\|_{L^2}. \]
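The semi-coupling constructed in this proof can be checked numerically in the piecewise constant case: discretize M into n cells of measure 1/n, push ν_{q1} and ν_{q2} forward by ρ = (q̄1, q̄2), and evaluate J on the resulting atoms. A sketch (our own illustration; the names are ours) that reproduces the identity J(γ1, γ2) = ∥q1 − q2∥²_{L²}:

```python
import numpy as np

NORTH = np.array([1.0, 0.0, 0.0])      # convention of Definition 3.3 for q(x) = 0

def unit_map(q):
    """The unit vector map qbar of Definition 3.3, applied row-wise."""
    n = np.linalg.norm(q, axis=1, keepdims=True)
    safe = np.where(n > 0, n, 1.0)
    return np.where(n > 0, q / safe, NORTH)

def J_constructed(q1, q2):
    """J(rho_* nu_{q1}, rho_* nu_{q2}) for the semi-coupling of Lemma 3.5,
    with q1, q2 piecewise constant on n cells of measure 1/n each."""
    n = len(q1)
    g1 = np.sum(q1 * q1, axis=1) / n   # nu_{q1}-mass of each cell
    g2 = np.sum(q2 * q2, axis=1) / n
    u, v = unit_map(q1), unit_map(q2)  # atoms rho(x) = (qbar1(x), qbar2(x))
    diff = np.sqrt(g1)[:, None] * u - np.sqrt(g2)[:, None] * v
    return float(np.sum(diff * diff))
```

The returned value coincides with ∥q1 − q2∥²_{L²}, so WFR(µ_{q1}, µ_{q2}) ≤ ∥q1 − q2∥_{L²}, which is exactly the bound of Lemma 3.5.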
We are now ready to conclude the proof of Theorem 3.1:

Proof of Theorem 3.1. Let q1, q2 ∈ L2(M, R3) and let ε > 0. Let p1, p2 be piecewise constant functions such that ∥q1 − p1∥L2 < ε/4 and ∥q2 − p2∥L2 < ε/4. Therefore,

\[ \inf_{\gamma\in\Gamma(M)}\|q_1-p_1*\gamma\|_{L^2},\ \inf_{\gamma\in\Gamma(M)}\|q_2-p_2*\gamma\|_{L^2},\ \mathrm{WFR}(\mu_{q_1},\mu_{p_1}),\ \mathrm{WFR}(\mu_{q_2},\mu_{p_2})<\varepsilon/4. \]

Thus,

\begin{align*}
\inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}&\leq\inf_{\gamma\in\Gamma(M)}\|q_1-p_1*\gamma\|_{L^2}+\inf_{\gamma\in\Gamma(M)}\|p_2-q_2*\gamma\|_{L^2}+\inf_{\gamma\in\Gamma(M)}\|p_1-p_2*\gamma\|_{L^2}\\
&\leq\varepsilon/2+\inf_{\gamma\in\Gamma(M)}\|p_1-p_2*\gamma\|_{L^2}\\
&=\varepsilon/2+\mathrm{WFR}(\mu_{p_1},\mu_{p_2})\\
&\leq\varepsilon/2+\mathrm{WFR}(\mu_{q_1},\mu_{p_1})+\mathrm{WFR}(\mu_{p_2},\mu_{q_2})+\mathrm{WFR}(\mu_{q_1},\mu_{q_2})\\
&\leq\varepsilon+\mathrm{WFR}(\mu_{q_1},\mu_{q_2})
\end{align*}

and

\begin{align*}
\mathrm{WFR}(\mu_{q_1},\mu_{q_2})&\leq\mathrm{WFR}(\mu_{p_1},\mu_{p_2})+\mathrm{WFR}(\mu_{q_1},\mu_{p_1})+\mathrm{WFR}(\mu_{p_2},\mu_{q_2})\\
&\leq\mathrm{WFR}(\mu_{p_1},\mu_{p_2})+\varepsilon/2\\
&=\inf_{\gamma\in\Gamma(M)}\|p_1-p_2*\gamma\|_{L^2}+\varepsilon/2\\
&\leq\inf_{\gamma\in\Gamma(M)}\|q_1-p_1*\gamma\|_{L^2}+\inf_{\gamma\in\Gamma(M)}\|p_2-q_2*\gamma\|_{L^2}+\inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}+\varepsilon/2\\
&\leq\inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}+\varepsilon,
\end{align*}

where in both chains the equality \(\inf_{\gamma\in\Gamma(M)}\|p_1-p_2*\gamma\|_{L^2}=\mathrm{WFR}(\mu_{p_1},\mu_{p_2})\) holds by the piecewise constant case, cf. Theorem 2.3. So,

\[ \mathrm{WFR}(\mu_{q_1},\mu_{q_2})-\varepsilon\leq\inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}\leq\mathrm{WFR}(\mu_{q_1},\mu_{q_2})+\varepsilon. \]

Taking ε → 0 we can conclude \(\inf_{\gamma\in\Gamma(M)}\|q_1-q_2*\gamma\|_{L^2}=\mathrm{WFR}(\mu_{q_1},\mu_{q_2})\).
517 |
+
3.2. Characterizing the closure of the image of the SRNF map. Our equivalence result will also allow us to characterize the (closure of the) image of the SRNF map Φ in the context of spherical surfaces:

Theorem 3.6. Let f ∈ Imm(S², R³) and let q = Φ(f) ∈ L²(S², R³). Then q satisfies the closure condition ∫_{S²} q(x)|q(x)| dm = 0. Moreover, the closure of the image of Φ is given by the set

U := { q ∈ L²(S², R³) such that ∫_{S²} q(x)|q(x)| dm = 0 }.

To prove this result we will need a classical theorem from geometric measure theory and the study of convex polyhedra, which we recall next:

Theorem 3.7 (Minkowski's Theorem [1, 32, 38]). Let µ ∈ M(S²) be such that the support of µ is not concentrated on a great circle and

∫_{S²} x dµ(x) = 0.

Then there exists a unique (up to translation) convex body whose surface area measure is µ. Moreover, if µ is finitely supported then the convex body is a polytope.
Proof of Theorem 3.6. Let f ∈ Imm(S², R³) and qf = Φ(f). Let S = f(S²) and let V be the region enclosed by S. Therefore,

∫_{S²} qf(x)|qf(x)| dm = ∫_{S²} af(x) nf(x) dm = ∫_S nf dS.

Thus, this is the integral of the unit normal vector of a closed surface in R³. A simple application of the divergence theorem shows that the integral of the normal vector of a closed surface is zero. To see this, let {ei}³_{i=1} be the unit basis vectors of R³. For i = 1, 2, 3,

∫_S (nf · ei) dS = ∫_V (∇ · ei) dV = 0.

Therefore ∫_{S²} qf(x)|qf(x)| dm = 0 and the image of Φ is contained in U.

To prove the converse direction let q ∈ U. We aim to construct a convex body f with µ_{qf} arbitrarily close to µq. By the definition of U the measure µq satisfies ∫_{S²} n dµq(n) = 0. Since finitely supported measures are dense with respect to the WFR metric, we can choose a finitely supported measure µ̄q such that ∫_{S²} n dµ̄q(n) = 0 and WFR(µq, µ̄q) < ǫ/3.

If the support of µ̄q is not concentrated on a great circle we can invoke the Minkowski theorem and the result follows. For the general case we will slightly deform the measure as follows. Define

µ̂q := µ̄q + Σ_{i=1}³ (ǫ/18) δ_{ei} + Σ_{i=1}³ (ǫ/18) δ_{−ei},

where {ei}³_{i=1} is the set of unit basis vectors of R³. Then µ̂q is a finitely supported measure, satisfies ∫_{S²} n dµ̂q(n) = 0, and is not supported on a single great circle. Moreover, WFR(µ̄q, µ̂q) < ǫ/3. By the Minkowski Theorem (Theorem 3.7) there exists a convex polytope with surface area measure given by µ̂q. Let f ∈ W^{1,∞}(S², R³) be the PL spherical parameterization of this convex body, so that µ_{qf} = µ̂q. Thus, there exists γ ∈ Γ(M) such that ∥qf − q ∗ γ∥_{L²} < WFR(µ_{qf}, µq) + ǫ/3. Therefore,

∥qf − q ∗ γ∥_{L²} ≤ WFR(µ_{qf}, µq) + ǫ/3 = WFR(µ̂q, µq) + ǫ/3 ≤ WFR(µ̂q, µ̄q) + WFR(µ̄q, µq) + ǫ/3 < ǫ,

which concludes the proof.
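The divergence-theorem identity ∫_S nf dS = 0 has a simple discrete analogue that can be checked numerically: for a closed, consistently oriented triangle mesh the area-weighted face normals (v1 − v0) × (v2 − v0) sum to zero, since each directed edge contribution cancels with its reverse on the neighboring face. The sketch below (not part of the proof; the tetrahedron mesh and names are illustrative) verifies this for a tetrahedron.

```python
# Discrete analogue of ∫_S n_f dS = 0 on a closed, consistently oriented
# triangle mesh: the vectors (v1 - v0) x (v2 - v0), each equal to twice the
# area times the face normal, sum to zero.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (0, 2, 3)]  # every edge traversed once per direction

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

total = [0.0, 0.0, 0.0]
for i, j, k in faces:
    v0, v1, v2 = verts[i], verts[j], verts[k]
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    n = cross(e1, e2)  # twice the area times the face normal
    total = [t + c for t, c in zip(total, n)]
# total is (0, 0, 0) up to floating point
```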
3.3. Characterizing the degeneracy of the SRNF distance. As a second important consequence of our equivalence result we can give a detailed proof of the degeneracy of the SRNF distance for smooth surfaces. Degeneracy results were studied in [18], and the degeneracy was further characterized for certain PL surfaces in [6]. Here we generalize the characterization of [6] to smooth surfaces:

Theorem 3.8. For any smooth, regular surface f ∈ C∞(S², R³) ∩ Imm(S², R³) there exists a unique (up to translations) convex body f1 that is indistinguishable from f by the SRNF shape distance, i.e., dS([f], [f1]) = 0.

Proof of Theorem 3.8. Let f ∈ C∞(S², R³) ∩ Imm(S², R³) be a regular surface. By [43, Prop. 4.33] the Gauss map of f is surjective. Thus the image of qf is not contained in a single hyperplane of R³. Furthermore, ∫_{S²} qf(x)|qf(x)| dm = 0. Thus, by Theorem 3.7, there exists a unique convex body (up to translation) with surface area measure given by µ_{qf}. By Theorem 3.1 the surface f and the convex body are at SRNF distance 0 from each other.
4. The WFR metric as a diffeomorphic optimization problem. In this section we generalize the results of the previous sections to the Wasserstein–Fisher–Rao distance on any manifold and for any coefficient δ, thus characterizing the Wasserstein–Fisher–Rao distance as a diffeomorphic optimization problem. Let N be a smooth, connected, compact, oriented Riemannian manifold. Define the cone over N via C(N) := (N × R≥0)/(N × {0}). If we let d denote the geodesic distance on N and fix some δ ∈ (0, ∞), then we can define a metric on C(N) via

dC(N)((n1, r1), (n2, r2))² = 4δ²r1² + 4δ²r2² − 8δ²r1r2 cos(d(n1, n2)/2δ).

Let M be another smooth, connected, compact, oriented Riemannian manifold. Any function q : M → C(N) can be decomposed into component functions by q(x) = (q̄(x), q◦(x)), where q̄ : M → N and q◦ : M → R≥0. We can thus define q̂ : M → R≥0 via, for all x ∈ M, q̂(x) = 2δ q◦(x).

Given q1, q2 : M → C(N), the L² distance between q1 and q2 is given by

dL²(q1, q2)² = ∫_M dC(N)(q1(x), q2(x))² dm.

By decomposing q1 and q2, we can alternatively write

(4.1)  dL²(q1, q2)² = ∫_M q̂1(x)² dm + ∫_M q̂2(x)² dm − 2 ∫_M q̂1(x) q̂2(x) cos(d(q̄1(x), q̄2(x))/2δ) dm.

The L² cost of a function q : M → C(N) is defined as the distance from q to the function that maps all of M to the cone point. In particular, using the decomposition of q, this distance is given by

dL²(0, q)² = ∫_M q̂(x)² dm.

Thus, we define the space of L²-functions from M to C(N) as

L²(M, C(N)) := {q : M → C(N) s.t. dL²(0, q)² < ∞}

and we equip L²(M, C(N)) with the metric dL². We define the right action of the diffeomorphism group of M on L²(M, C(N)) component-wise: we treat q̂ as a half-density and define the action of Γ(M) on this component as the action on half-densities. Thus, the action of Γ(M) on L²(M, C(N)) is given by

L²(M, C(N)) × Γ(M) → L²(M, C(N)) via ((q̄, q̂), γ) ↦ (q̄ ∘ γ, (q̂ ∘ γ) · √|Dγ|).
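The cone metric above is elementary to evaluate numerically. The sketch below (illustrative only; the helper names `cone_dist_sq` and `geo`, and the choice N = S¹, are assumptions, not constructions from the paper) implements the formula as stated and checks that the distance from (n, r) to the cone point reduces to 2δr.

```python
import math

def cone_dist_sq(p1, p2, delta, geo_dist):
    """Squared cone distance on C(N) between p1 = (n1, r1) and p2 = (n2, r2).

    geo_dist is the geodesic distance on N; this sketch uses the formula
    exactly as displayed above.
    """
    (n1, r1), (n2, r2) = p1, p2
    return (4 * delta**2 * r1**2 + 4 * delta**2 * r2**2
            - 8 * delta**2 * r1 * r2 * math.cos(geo_dist(n1, n2) / (2 * delta)))

# Example on N = S^1, where the geodesic distance is the angular difference.
geo = lambda a, b: min(abs(a - b), 2 * math.pi - abs(a - b))
delta = 0.5
# Distance to the cone point (r = 0) reduces to 2*delta*r:
d2 = cone_dist_sq((0.0, 1.0), (1.0, 0.0), delta, geo)
```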
The main result of this section shows that the Wasserstein–Fisher–Rao distance can be written as the distance between the orbits associated with the measures:

Theorem 4.1. Let N be a smooth connected compact Riemannian manifold and M be a smooth connected compact Riemannian manifold of dimension 2 or higher.
a.) For all µ1, µ2 ∈ M(N) and q1, q2 ∈ L²(M, C(N)) such that µ1 = q̄1∗νq1 and µ2 = q̄2∗νq2 we have

WFRδ(µ1, µ2) = inf_{γ∈Γ(M)} dL²(q1, q2 ∗ γ).

b.) Moreover, for all µ ∈ M(N) there exists q ∈ L²(M, C(N)) such that µ = q̄∗νq. If µ is a finitely supported measure given by µ = Σ_{i=1}^n ai δ_{ui}, then one can choose q piecewise constant. More specifically, the function q given by

q(x) = (uj, √(aj/area(σj)))  if x ∈ σj with 1 ≤ j ≤ n,
q(x) = (u1, 0)               if x ∈ σj with n < j ≤ m,

where {σj}_{j=1}^m is a subdivision of the canonical triangulation of M with m ≥ n, satisfies µ = q̄∗νq.
Before we are able to prove this theorem, we will again show several technical lemmas. For this we consider specific measures associated with functions q ∈ L²(M, C(N)). First, we define νq ∈ M(M) such that for any open U ⊆ M

νq(U) = ∫_U q̂(x)² dm.

Note that νq ≪ m and dνq/dm = q̂². Further, we can define a pushforward of νq via q̄. In particular, for every q ∈ L²(M, C(N)), we can define a Borel measure on N given by µq := q̄∗νq. In other words, for all open U ⊆ N,

µq(U) = ∫_{q̄⁻¹(U)} q̂(x)² dm.

Now we will show that the orbit of any q ∈ L²(M, C(N)) under the action of Γ(M) is mapped to the same measure on N.

Lemma 4.2. Let q ∈ L²(M, C(N)). Then for all γ ∈ Γ(M), µq = µ_{q∗γ}.

Proof. Let U ⊆ N be open. Then

µ_{q∗γ}(U) = ∫_{γ⁻¹(q̄⁻¹(U))} (q̂(γ(x)) · √|Dγ|)² dm = ∫_{γ⁻¹(q̄⁻¹(U))} q̂(γ(x))² · |Dγ| dm = ∫_{q̄⁻¹(U)} q̂(x)² dm = µq(U).
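The change-of-variables step in this proof can be illustrated with a one-dimensional stand-in for M (purely illustrative; the interval [0, 1], the two-point target, and all names below are assumptions made for the sketch): reparametrizing by γ and rescaling the half-density component by √|Dγ| leaves the pushforward mass of each target point unchanged.

```python
import math

# Discrete analogue of Lemma 4.2 on M = [0, 1]: the half-density action
# (q̄, q̂) ∗ γ = (q̄∘γ, (q̂∘γ)·sqrt(|Dγ|)) preserves the pushforward measure.
qbar = lambda y: 'u' if y < 0.5 else 'v'   # first component, valued in a two-point "N"
qhat = lambda y: 1.0 + y                   # half-density component
gamma = lambda x: x * x                    # reparametrization of [0, 1]
Dgamma = lambda x: 2.0 * x                 # |Dγ|

def mass(label, qb, qh, n=20000):
    # midpoint-rule approximation of the mass ∫_{qb⁻¹(label)} qh(x)² dx
    h = 1.0 / n
    return sum(qh(x)**2 for x in (h * (k + 0.5) for k in range(n)) if qb(x) == label) * h

m_orig = mass('u', qbar, qhat)
m_reparam = mass('u', lambda x: qbar(gamma(x)),
                 lambda x: qhat(gamma(x)) * math.sqrt(Dgamma(x)))
# m_orig ≈ m_reparam ≈ ∫_0^{1/2} (1+y)² dy = 19/24
```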
Therefore, we can map each orbit of q ∈ L²(M, C(N)) under the half-density action of Γ(M) to a measure on N. As in the previous section, we will first show the result for piecewise constant functions and then extend it by continuity. We prove the piecewise constant case in the following lemma.

Lemma 4.3. Let d ≥ 2 and let M be a smooth, connected, compact, oriented Riemannian d-dimensional manifold with or without boundary. Given two piecewise constant functions q1, q2 : M → C(N),

inf_{γ∈Γ(M)} dL²(q1, q2 ∗ γ) = WFRδ(µq1, µq2).

Proof. Let {σi}_{i=1}^m and {τj}_{j=1}^n be triangulations of M such that q1 is constant on each σi and q2 is constant on each τj. Let q̂1 : M → R, q̄1 : M → N be the decomposition of q1 and q̂2 : M → R, q̄2 : M → N be the decomposition of q2. Define a function ⟨·, ·⟩ : N × N → R via ⟨u, v⟩ = cos(d(u, v)/2δ). A brief computation shows

inf_{γ∈Γ(M)} dL²(q1, q2 ∗ γ)² = Σ_{i=1}^m ai + Σ_{j=1}^n bj − 2 sup_{γ∈Γ(M)} ∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm.

Let A be the set of all discrete semi-couplings from µq1 to µq2. Recall

WFRδ(µq1, µq2)² = Σ_{i=1}^m ai + Σ_{j=1}^n bj − 2 sup_{(A,B)∈A} Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩.

Therefore, the lemma is equivalent to showing

sup_{(A,B)∈A} Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩ = sup_{γ∈Γ(M)} ∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm.

Claim 2. Assume that (A, B) is a discrete semi-coupling from µq1 to µq2. Then for all ǫ > 0 there is a PL homeomorphism γ : M → M such that

| ∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm − Σ_{i,j} √(Aij Bij) ⟨ui, vj⟩ | < ǫ.

Proof of Claim 2. Let (A, B) be a discrete semi-coupling from µq1 to µq2 such that Aij, Bij > 0 for each 1 ≤ i ≤ m and 1 ≤ j ≤ n. We will first prove the claim in this restricted case and then extend it to all semi-couplings by continuity. First we choose a real number r ∈ (0, 1). For each 1 ≤ i ≤ m, subdivide σi into n smaller d-simplices σij such that q̂1² = Aij/m(σij) on σij. Similarly, for each 1 ≤ j ≤ n, subdivide τj into m smaller d-simplices τij such that q̂2² = Bij/m(τij) on τij. For each 1 ≤ i ≤ m and 1 ≤ j ≤ n, choose a smaller d-simplex σ̃ij, whose closure is contained in the interior of σij, such that m(σ̃ij) = r·m(σij). Similarly, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, choose a smaller d-simplex τ̃ij, whose closure is contained in the interior of τij, such that m(τ̃ij) = r·m(τij). We now construct an orientation preserving PL homeomorphism γr : M → M. First, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, define γr : σ̃ij → τ̃ij to be a PL orientation preserving homeomorphism with constant area multiplication factor |Dγr| = m(τij)/m(σij). Note that

M − ⋃_{i=1}^m ⋃_{j=1}^n σ̃ij°  is homeomorphic to  M − ⋃_{i=1}^m ⋃_{j=1}^n τ̃ij°.

Hence, we can extend the homeomorphism γr defined on the σ̃ij's to a homeomorphism from M to M. Note that on each σ̃ij, q̂2(γr(x))² |Dγr| = Bij/m(σij). Write M = M1 ∪ M2, where M1 = ⋃_{i=1}^m ⋃_{j=1}^n σ̃ij and M2 = M − M1. A simple computation shows

∫_{M1} q̂1(x) q̂2(γr(x)) √|Dγr| ⟨q̄1(x), q̄2(γr(x))⟩ dm = Σ_{i=1}^m Σ_{j=1}^n ( √(Aij Bij)/m(σij) ) m(σ̃ij) ⟨ui, vj⟩ = Σ_{i=1}^m Σ_{j=1}^n √(r Aij) √(r Bij) ⟨ui, vj⟩.

Meanwhile, by the Schwarz inequality,

| ∫_{M2} q̂1(x) q̂2(γr(x)) √|Dγr| ⟨q̄1(x), q̄2(γr(x))⟩ dm | ≤ ∫_{M2} q̂1(x) q̂2(γr(x)) √|Dγr| dm
≤ √( ∫_{M2} q̂1² dm ) √( ∫_{M2} q̂2(γr(x))² |Dγr| dm ) = √( (1 − r) ∫_M q̂1² dm ) √( (1 − r) ∫_M q̂2² dm ).

So as we let r → 1,

∫_{M1} q̂1(x) q̂2(γr(x)) √|Dγr| ⟨q̄1(x), q̄2(γr(x))⟩ dm → Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩

and

∫_{M2} q̂1(x) q̂2(γr(x)) √|Dγr| ⟨q̄1(x), q̄2(γr(x))⟩ dm → 0.

Hence,

∫_M q̂1(x) q̂2(γr(x)) √|Dγr| ⟨q̄1(x), q̄2(γr(x))⟩ dm → Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩.

Thus Claim 2 follows for the case in which Aij > 0 and Bij > 0 for each 1 ≤ i ≤ m and 1 ≤ j ≤ n. The general case then follows immediately from the continuity of Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩ as a function of (A, B). This completes the proof of Claim 2. It follows that

sup_{γ∈Γ(M)} ∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm ≥ sup_{(A,B)∈A} Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩.

We are left to show the opposite inequality.

Claim 3. Assume γ is a PL homeomorphism from M to M. Then there exists a discrete semi-coupling (A, B) such that

∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm ≤ Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩.

Proof of Claim 3. Let γ : M → M be an orientation preserving PL homeomorphism. For 1 ≤ i ≤ m and 1 ≤ j ≤ n, define σij = γ⁻¹(τj) ∩ σi and define τij = γ(σij). Now define two (m + 1) × (n + 1) matrices A and B via:
• For 1 ≤ i ≤ m and 1 ≤ j ≤ n, Aij = ∫_{σij} q̂1² dm and Bij = ∫_{τij} q̂2² dm.
• For 0 ≤ i ≤ m, Bi0 = 0 and Ai0 = ai − Σ_{j=1}^n ∫_{σij} q̂1² dm.
• For 0 ≤ j ≤ n, A0j = 0 and B0j = bj − Σ_{i=1}^m ∫_{τij} q̂2² dm.

The pair of matrices (A, B) is a discrete semi-coupling from µq1 to µq2 by construction. We say that (A, B) is the semi-coupling corresponding to the homeomorphism γ. Denote the area multiplication factor of γ on σij by mij. Then by the Schwarz inequality,

∫_{σij} q̂1(x) q̂2(γ(x)) √|Dγ| ⟨ui, vj⟩ dm ≤ √( ∫_{σij} q̂1(x)² dm ) √( ∫_{σij} q̂2(γ(x))² |Dγ| dm ) ⟨ui, vj⟩
= √( ∫_{σij} q̂1² dm ) √( ∫_{τij} q̂2² dm ) ⟨ui, vj⟩ = √Aij √Bij ⟨ui, vj⟩.

Summing over all i and j we obtain

∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm = Σ_{i,j} ∫_{σij} q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm ≤ Σ_{i,j} √Aij √Bij ⟨ui, vj⟩.

This completes the proof of Claim 3. It follows that

sup_{γ∈Γ(M)} ∫_M q̂1(x) q̂2(γ(x)) √|Dγ| ⟨q̄1(x), q̄2(γ(x))⟩ dm ≤ sup_{(A,B)∈A} Σ_{i=1}^m Σ_{j=1}^n √(Aij Bij) ⟨ui, vj⟩,

and thus the lemma is proved.
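The discrete objective appearing throughout the proof above, Σ ai + Σ bj − 2 Σ_{i,j} √(Aij Bij)⟨ui, vj⟩, is straightforward to evaluate for a given candidate semi-coupling. The sketch below (toy data; `wfr_cost` and the 1×1 measures are illustrative assumptions, and no optimization over (A, B) is performed) evaluates this cost.

```python
import math

# Evaluate the discrete WFR cost for given finitely supported measures and a
# candidate semi-coupling (A, B), with inner[i][j] = ⟨u_i, v_j⟩ = cos(d(u_i, v_j)/2δ).
def wfr_cost(a, b, A, B, inner):
    # Σ a_i + Σ b_j − 2 Σ_{i,j} sqrt(A_ij B_ij) ⟨u_i, v_j⟩
    corr = sum(math.sqrt(A[i][j] * B[i][j]) * inner[i][j]
               for i in range(len(a)) for j in range(len(b)))
    return sum(a) + sum(b) - 2.0 * corr

a = [1.0]                 # single atom a1 δ_{u1}
b = [1.0]                 # single atom b1 δ_{v1}
inner = [[1.0]]           # ⟨u1, v1⟩ = 1, i.e. u1 = v1
A = [[1.0]]; B = [[1.0]]  # transport all mass; feasible since the marginals match
cost = wfr_cost(a, b, A, B, inner)  # identical atoms, full transport: cost 0
```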
To extend the results to all of L²(M, C(N)) we will need the following continuity result:

Lemma 4.4. The map (L²(M, C(N)), dL²) → (M(N), WFRδ) defined via q ↦ q̄∗νq is Lipschitz continuous with Lipschitz constant K = 1.

Proof. Let q1, q2 ∈ L²(M, C(N)), µq1 = q̄1∗νq1, and µq2 = q̄2∗νq2. For any semi-coupling (γ1, γ2) ∈ Γ(µq1, µq2),

WFRδ(µq1, µq2) ≤ √(Jδ(γ1, γ2)).

Thus, to prove the lemma it suffices to construct (γ1, γ2) ∈ Γ(µq1, µq2) such that Jδ(γ1, γ2) = dL²(q1, q2)². To construct such a semi-coupling we first define ρ : M → N × N as the pair of the first component maps of q1 and q2 on the first and second factor respectively, i.e., ρ(x) = (q̄1(x), q̄2(x)). Since q̄1 and q̄2 are individually measurable, so is ρ. We can then define γ1, γ2 ∈ M(N × N) via γ1 = ρ∗νq1 and γ2 = ρ∗νq2.

Claim 4. The pair of measures (γ1, γ2) is a semi-coupling from µq1 to µq2.

Proof of claim. Let U ⊆ N be open. Thus,

γ1(U × N) = νq1( ρ⁻¹(U × N) ) = νq1( q̄1⁻¹(U) ∩ q̄2⁻¹(N) ) = νq1( q̄1⁻¹(U) ) = µq1(U)

and

γ2(N × U) = νq2( ρ⁻¹(N × U) ) = νq2( q̄1⁻¹(N) ∩ q̄2⁻¹(U) ) = νq2( q̄2⁻¹(U) ) = µq2(U).

So (γ1, γ2) is a semi-coupling from µq1 to µq2.

Recall from the definition of the functional Jδ that we need to construct γ ∈ M(N × N) such that γ1, γ2 ≪ γ. Define γ = ρ∗m. We know νq1, νq2 ≪ m. Thus, by Lemma 3.2, γ1, γ2 ≪ γ. Furthermore,

q̂1² = dνq1/dm = (dγ1/dγ) ∘ ρ a.e.  and  q̂2² = dνq2/dm = (dγ2/dγ) ∘ ρ a.e.

So,

Jδ(γ1, γ2) = µq1(N) + µq2(N) − 2 ∫_{N×N} √((dγ1/dγ)(dγ2/dγ))(u, v) cos(d(u, v)/2δ) dγ(u, v)
= ∫_{N×N} (dγ1/dγ) dγ + ∫_{N×N} (dγ2/dγ) dγ − 2 ∫_{N×N} √((dγ1/dγ)(u, v)(dγ2/dγ)(u, v)) cos(d(u, v)/2δ) dγ(u, v)
= ∫_{ρ⁻¹(N×N)} (dγ1/dγ) ∘ ρ dm + ∫_{ρ⁻¹(N×N)} (dγ2/dγ) ∘ ρ dm − 2 ∫_{ρ⁻¹(N×N)} √((dγ1/dγ) ∘ ρ(x) · (dγ2/dγ) ∘ ρ(x)) cos(d(ρ(x))/2δ) dm
= ∫_M q̂1(x)² dm + ∫_M q̂2(x)² dm − 2 ∫_M q̂1(x) q̂2(x) cos(d(q̄1(x), q̄2(x))/2δ) dm = dL²(q1, q2)².

Thus,

WFRδ(µq1, µq2) ≤ √(Jδ(γ1, γ2)) = 1 · dL²(q1, q2).
Finally, we can leverage this continuity result to complete the proof of Theorem 4.1.

Proof of Theorem 4.1. Let µ1, µ2 ∈ M(N) and q1, q2 ∈ L²(M, C(N)) be such that µ1 = q̄1∗νq1 and µ2 = q̄2∗νq2. By an argument analogous to the proof of Theorem 3.1 we can conclude

inf_{γ∈Γ(M)} dL²(q1, q2 ∗ γ) = WFRδ(µ1, µ2).

This concludes the proof of part a.). Let µ = Σ_{i=1}^n ai δ_{ui} be a finitely supported measure on N. By [48], M admits a canonical PL structure. Let m ≥ n and subdivide the triangulation of M into m simplices σj, 1 ≤ j ≤ m. For every x ∈ M there exists 1 ≤ j ≤ m such that x ∈ σj, and we define

q(x) = (uj, √(aj/area(σj)))  if 1 ≤ j ≤ n,
q(x) = (u1, 0)               if n < j ≤ m.

Let U ⊆ N; then µ(U) = Σ_{i | ui ∈ U} ai. Meanwhile, q̄⁻¹(U) = ⋃_{i | ui ∈ U} σi. Thus,

∫_{q̄⁻¹(U)} q̂(x)² dm = Σ_{i | ui ∈ U} ∫_{σi} ai/area(σi) dm = Σ_{i | ui ∈ U} ai.

To complete the proof of part b.) we will extend the result to the whole space by continuity. For any µ ∈ M(N), let {µn} ⊆ M(N) be a sequence of finitely supported measures that converges to µ with respect to the Wasserstein–Fisher–Rao distance. In particular, {µn} is Cauchy with respect to WFRδ. Note that for all n ∈ N there exists a piecewise constant qn ∈ L²(M, C(N)) satisfying

µn(U) = ∫_{q̄n⁻¹(U)} q̂n(x)² dm.

Thus, we can construct a sequence of functions given by q*₀ = q₀ and, for all n ∈ N, q*_{n+1} = q_{n+1} ∗ γn, where γn is a PL homeomorphism from M to M such that

dL²(q*_n, q_{n+1} ∗ γn) ≤ WFRδ(µn, µ_{n+1}) + 1/2ⁿ.

Note that the existence of such a γn is guaranteed by Lemma 4.3. Since {µn} is Cauchy with respect to WFRδ, it follows that {q*_n} is Cauchy with respect to dL². By completeness of (L²(M, C(N)), dL²), there exists a limit q ∈ L²(M, C(N)). Let U ⊆ N be open. Thus,

µ(U) = lim_{n→∞} µn(U) = lim_{n→∞} ∫_{q̄n⁻¹(U)} q̂n(x)² dm = lim_{n→∞} ∫_M q̂n(x)² χ_{q̄n⁻¹(U)} dm
= ∫_M lim_{n→∞} q̂n(x)² χ_{q̄n⁻¹(U)} dm = ∫_M q̂(x)² χ_{q̄⁻¹(U)} dm = ∫_{q̄⁻¹(U)} q̂(x)² dm.

Thus, µ = q̄∗νq. This completes the proof of part b.) of the theorem.
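The bookkeeping behind the piecewise constant construction in part b.) can be checked directly: a cell of area area(σj) carrying the value √(aj/area(σj)) contributes exactly the atom mass aj to the pushforward. The sketch below uses toy cell areas and dictionary names chosen purely for illustration.

```python
# Check of the part b.) construction: constant q̂² = a_j / area(σ_j) on each
# cell recovers the atom masses a_j under the pushforward of q̂² dm.
atoms = {'u1': 0.4, 'u2': 0.6}    # target measure Σ a_i δ_{u_i}
areas = {'u1': 0.25, 'u2': 0.75}  # areas of the corresponding cells σ_1, σ_2

qhat_sq = {u: atoms[u] / areas[u] for u in atoms}        # constant q̂² per cell
recovered = {u: qhat_sq[u] * areas[u] for u in atoms}    # ∫_{σ_i} q̂² dm
# recovered matches atoms
```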
REFERENCES

[1] A. Alexandrov, Zur Theorie der gemischten Volumina von konvexen Körpern I, Mat. Sbornik NS, 1 (1938), pp. 227–251.
[2] M. Bauer, M. Bruveris, P. Harms, and J. Møller-Andersen, A numerical framework for Sobolev metrics on the space of curves, SIAM Journal on Imaging Sciences, 10 (2017), pp. 47–73.
[3] M. Bauer, M. Bruveris, and P. W. Michor, Overview of the geometries of shape spaces and diffeomorphism groups, Journal of Mathematical Imaging and Vision, 50 (2014), pp. 60–97.
[4] M. Bauer, N. Charon, P. Harms, and H.-W. Hsieh, A numerical framework for elastic surface matching, comparison, and interpolation, International Journal of Computer Vision, 129 (2021), pp. 2425–2444.
[5] M. Bauer, N. Charon, E. Klassen, S. Kurtek, T. Needham, and T. Pierron, Elastic metrics on spaces of Euclidean curves: Theory and algorithms, arXiv preprint arXiv:2209.09862, (2022).
[6] M. Bauer, E. Hartman, and E. Klassen, The square root normal field distance and unbalanced optimal transport, Applied Mathematics & Optimization, 85 (2022), https://doi.org/10.1007/s00245-022-09867-y.
[7] M. Bruveris, Optimal reparametrizations in the square root velocity framework, SIAM Journal on Mathematical Analysis, 48 (2016), pp. 4335–4354.
[8] N. Charon and L. Younes, Shape spaces: From geometry to biological plausibility, arXiv preprint arXiv:2205.01237, (2022).
[9] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, An interpolating distance between optimal transport and Fisher–Rao metrics, Foundations of Computational Mathematics, 18 (2018), pp. 1–44.
[10] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Scaling algorithms for unbalanced optimal transport problems, Mathematics of Computation, 87 (2018), pp. 2563–2609.
[11] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Unbalanced optimal transport: Dynamic and Kantorovich formulations, Journal of Functional Analysis, 274 (2018), pp. 3090–3123.
[12] G. Dogan, J. Bernal, and C. R. Hagwood, A fast algorithm for elastic shape distances between closed planar curves, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4222–4230.
[13] I. L. Dryden and K. V. Mardia, Statistical shape analysis: with applications in R, vol. 995, John Wiley & Sons, 2016.
[14] E. Hartman, Y. Sukurdeep, E. Klassen, N. Charon, and M. Bauer, Elastic shape analysis of surfaces with second-order Sobolev metrics: a comprehensive numerical framework, To appear in IJCV, (2022).
[15] I. H. Jermyn, S. Kurtek, E. Klassen, and A. Srivastava, Elastic shape matching of parameterized surfaces using square root normal fields, in European Conference on Computer Vision, Springer, 2012, pp. 804–817.
[16] I. H. Jermyn, S. Kurtek, H. Laga, and A. Srivastava, Elastic shape analysis of three-dimensional objects, Synthesis Lectures on Computer Vision, 12 (2017), pp. 1–185.
[17] S. H. Joshi, Q. Xie, S. Kurtek, A. Srivastava, and H. Laga, Surface shape morphometry for hippocampal modeling in Alzheimer's disease, in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2016, pp. 1–8.
[18] E. Klassen and P. W. Michor, Closed surfaces with different shapes that are indistinguishable by the SRNF, Archivum Mathematicum, 56 (2020), pp. 107–114.
[19] S. Kurtek, E. Klassen, Z. Ding, and A. Srivastava, A novel Riemannian framework for shape analysis of 3D objects, in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 2010, pp. 1625–1632.
[20] S. Kurtek, E. Klassen, J. C. Gore, Z. Ding, and A. Srivastava, Elastic geodesic paths in shape space of parameterized surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34 (2011), pp. 1717–1730.
[21] S. Kurtek, C. Samir, and L. Ouchchane, Statistical shape model for simulation of realistic endometrial tissue, in ICPRAM, 2014, pp. 421–428.
[22] H. Laga, M. Padilla, I. H. Jermyn, S. Kurtek, M. Bennamoun, and A. Srivastava, 4D atlas: Statistical analysis of the spatiotemporal variability in longitudinal 3D shape data, arXiv preprint arXiv:2101.09403, (2021).
[23] H. Laga, Q. Xie, I. H. Jermyn, and A. Srivastava, Numerical inversion of SRNF maps for elastic shape analysis of genus-zero surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39 (2017), pp. 2451–2464.
[24] S. Lahiri, D. Robinson, and E. Klassen, Precise matching of PL curves in R^N in the square root velocity framework, Geometry, Imaging and Computing, 2 (2015), pp. 133–186.
[25] M. Liero, A. Mielke, and G. Savaré, Optimal transport in competition with reaction: The Hellinger–Kantorovich distance and geodesic curves, SIAM Journal on Mathematical Analysis, 48 (2016), pp. 2869–2911.
[26] M. Liero, A. Mielke, and G. Savaré, Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures, Inventiones Mathematicae, 211 (2018), pp. 969–1117.
[27] D. Lombardi and E. Maitre, Eulerian models and algorithms for unbalanced optimal transport, ESAIM: M2AN, 49 (2015), pp. 1717–1744, https://doi.org/10.1051/m2an/2015025.
[28] J. S. Marron and A. M. Alonso, Overview of object oriented data analysis, Biometrical Journal, 56 (2014), pp. 732–753.
[29] J. Matuk, S. Mohammed, S. Kurtek, and K. Bharath, Biomedical applications of geometric functional data analysis, in Handbook of Variational Methods for Nonlinear Geometric Data, Springer, 2020, pp. 675–701.
[30] P. W. Michor and D. Mumford, An overview of the Riemannian metrics on spaces of curves using the Hamiltonian approach, Applied and Computational Harmonic Analysis, 23 (2007), pp. 74–113.
[31] M. I. Miller, A. Trouvé, and L. Younes, On the metrics and Euler–Lagrange equations of computational anatomy, Annual Review of Biomedical Engineering, 4 (2002), pp. 375–405.
[32] H. Minkowski, Allgemeine Lehrsätze über die convexen Polyeder, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1897 (1897), pp. 198–220, http://eudml.org/doc/58391.
[33] W. Mio, A. Srivastava, and S. Joshi, On shape of plane elastic curves, International Journal of Computer Vision, 73 (2007), pp. 307–324.
[34] T. Needham and S. Kurtek, Simplifying transforms for general elastic metrics on the space of plane curves, SIAM Journal on Imaging Sciences, 13 (2020), pp. 445–473.
[35] X. Pennec, Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements, Journal of Mathematical Imaging and Vision, 25 (2006), pp. 127–154.
[36] X. Pennec, S. Sommer, and T. Fletcher, Riemannian Geometric Statistics in Medical Image Analysis, Academic Press, 2019.
[37] B. Piccoli and F. Rossi, Generalized Wasserstein distance and its application to transport equations with source, Archive for Rational Mechanics and Analysis, 211 (2014), pp. 335–358.
[38] R. Schneider, Convex surfaces, curvature and surface area measures, in Handbook of Convex Geometry, Elsevier, 1993, pp. 273–299.
[39] A. Srivastava, E. Klassen, S. H. Joshi, and I. H. Jermyn, Shape analysis of elastic curves in Euclidean spaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33 (2010), pp. 1415–1428.
[40] A. Srivastava and E. P. Klassen, Functional and Shape Data Analysis, vol. 1, Springer, 2016.
[41] Z. Su, M. Bauer, E. Klassen, and K. Gallivan, Simplifying transformations for a family of elastic metrics on the
+
space of surfaces, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
|
1402 |
+
Workshops, 2020, pp. 848–849.
|
1403 |
+
[42] Z. Su, M. Bauer, S. C. Preston, H. Laga, and E. Klassen, Shape analysis of surfaces using general elastic
|
1404 |
+
metrics, Journal of Mathematical Imaging and Vision, 62 (2020), pp. 1087–1106.
|
1405 |
+
[43] K. Tapp, Differential Geometry of Curves and Surfaces, Undergraduate Texts in Mathematics, Springer Interna-
|
1406 |
+
tional Publishing, 2016, https://books.google.com/books?id=kfIqDQAAQBAJ.
|
1407 |
+
[44] A. Trouv´e and L. Younes, On a class of diffeomorphic matching problems in one dimension, SIAM Journal on
|
1408 |
+
Control and Optimization, 39 (2000), pp. 1112–1135.
|
1409 |
+
[45] C. Villani, Topics in optimal transportation, no. 58, American Mathematical Soc., 2003.
|
1410 |
+
[46] C. Villani, Optimal transport: old and new, vol. 338, Springer Science & Business Media, 2008.
|
1411 |
+
[47] Z. Wang, D. P. Zhou, M. Yang, Y. Zhang, C.-Y. Rao, and H. Wu, Robust document distance with wasserstein-
|
1412 |
+
fisher-rao metric, in ACML, 2020.
|
1413 |
+
[48] J. H. C. Whitehead, On C1-complexes, Annals of Mathematics, (1940), pp. 809–824.
|
1414 |
+
[49] E. N. Wøien and M. Grasmair, A pde-based method for shape registration, SIAM Journal on Imaging Sciences,
|
1415 |
+
15 (2022), pp. 762–796.
|
1416 |
+
[50] L. Younes, Computable elastic distances between shapes, SIAM Journal on Applied Mathematics, 58 (1998),
|
1417 |
+
pp. 565–586.
|
1418 |
+
[51] L. Younes, Shapes and diffeomorphisms, vol. 171, Springer, 2010.
|
1419 |
+
[52] D. Zhou, J. Chen, H. Wu, D. H. Yang, and L. Qiu, The wasserstein-fisher-rao metric for waveform based
|
1420 |
+
earthquake location, arXiv: Numerical Analysis, (2018).
|
1421 |
+
|
5tAyT4oBgHgl3EQfcfeN/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff

6NAyT4oBgHgl3EQf2fnD/content/2301.00753v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04c173f808ac221d3c5fcfa0d2b7382e29dad1c56036ee9beb62cf59d0b79e4e
+size 242968

6NAyT4oBgHgl3EQf2fnD/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7d241eee692e756af33a30dce863d871fb5bea5080780930fdd655732eec9b7
+size 132639

6tAzT4oBgHgl3EQfgPwz/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:960a3929969401acd2bf0bca7f87c82f365cb8a0d978de182e154fe66dbdbb4e
+size 140219

7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdbb2eb02922992d5b0bc396a4db8834f6629d0b2e08c795ffd0e6b70720fa59
+size 1967134

7NAzT4oBgHgl3EQf-f4D/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82232c7746b925471d2d265c6a235165355b3959d29830402cc4ac16271cbb8c
+size 160444

AtFQT4oBgHgl3EQfMjZB/content/tmp_files/2301.13268v1.pdf.txt
ADDED
@@ -0,0 +1,1264 @@
Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems

Sandesh Swamy (AWS AI Labs)    Narges Tabari (AWS AI Labs)    Chacha Chen* (University of Chicago)    Rashmi Gangadharaiah (AWS AI Labs)

Abstract

Response generation is one of the critical components in task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm for adapting such extremely large language models is fine-tuning on the downstream task, which is not only time-consuming but also requires significant resources and access to fine-tuning data. Prompting (Schick and Schütze, 2020) has been an alternative to fine-tuning in many NLP tasks. In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs contextual dynamic prompting, where the prompts are learnt from dialog contexts. We aim to distill useful prompting signals from the dialog context. In experiments on the MultiWOZ 2.2 dataset (Zang et al., 2020), we show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019a) by 3 absolute points, and by a massive 20 points when dialog states are incorporated. Furthermore, human annotation on these conversations found that agents which incorporate context were preferred over agents with vanilla prefix-tuning.
1 Introduction

With the advent of large language models (LLMs), a vast majority of NLP tasks, including dialog systems, further fine-tune these LMs for their downstream tasks. Although these approaches provide substantial improvements over traditional task-specific models (Ham et al., 2020; Hosseini-Asl et al., 2020; He et al., 2022), fine-tuning is a time-consuming process that also involves significant use of energy and resources in the form of compute. These approaches also require tuning and storing parameters for each downstream task.

* Work done during an internship at AWS AI Labs.

A more recent line of work explores "prompting" LLMs to elicit the necessary knowledge required for the downstream tasks (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019; Lee et al., 2021; Zhu et al., 2022). Prompts are composed of tokens or short pieces of text (discrete prompts) inserted at the end of the input examples. These prompts are typically manually defined based on the specific downstream task. The main motivation behind these approaches stems from the idea that the large corpora these language models are trained on contain information pertinent to the task at hand.

Adapter-tuning was proposed as an alternative approach to fine-tuning. These methods train only task-specific layers that are inserted within pre-trained LMs. Such a lightweight approach, which adds about 4% task-specific parameters, has been shown to obtain performance comparable to its fine-tuning counterparts (Rebuffi et al., 2017; Houlsby et al., 2019; Lin et al., 2020a).

Drawing inspiration from prompting, prefix-tuning approaches (Li and Liang, 2021) were proposed as another alternative to fine-tuning. These approaches pre-pend a sequence of task-specific continuous vectors (the prefix) to the input. In contrast to prompting, the prefix consists of free parameters that do not correspond to actual real tokens. Such an approach is more prevalent since it only optimizes the prefix and does not tune the parameters of the entire LM.

Most of the existing approaches use static prompts, i.e., the same set of tokens is used as "prompt tokens" regardless of the input. However, we believe that taking context into consideration is critical, especially in response generation, since the current response has to fit not only the domain but also the information requested in previous turns. For example, in the MultiWOZ dataset, if a customer asks about train bookings, the agent response has to restrict itself to that particular domain. To address this problem, we explore the idea of generating input-dependent or contextual prompts. We want the prompts to capture and encode different signals for different turns of dialogs depending on the context; hence, we call our approach contextual dynamic prompting. This way, we hope to distill useful signals into the prompts and provide the model with adequate signals to generate a desired system response. In this work, we explore the potential of using dialog context within a prefix-tuning approach for the task of response generation in task-oriented dialog (TOD) systems. The contributions of this paper are summarized as:

• We propose a context-dependent prefix-tuning method for dialog response generation in TOD systems.
• To illustrate the benefits of such an approach, we conduct experiments on the MultiWOZ dataset. We show that our model significantly outperforms the original task-dependent design of the prefix-tuning method.

arXiv:2301.13268v1 [cs.CL] 30 Jan 2023
2 Related Work

2.1 Dialog Generation

With the prevalence of LLMs, the question of how to effectively adapt such models for dialog generation has been at the forefront of researchers' minds in the dialog community. For task-oriented dialogs, fine-tuning large pre-trained models such as GPT-2 or T5 has recently made great progress on benchmarks (Ham et al., 2020; Hosseini-Asl et al., 2020). Built upon these advances, a more recent line of work investigates the effectiveness of multi-task learning (Su et al., 2021; Lin et al., 2020b; Yang et al., 2021) or of pre-training the model on external dialog corpora (Peng et al., 2021; Liu et al., 2021). More recently, prompting has been used to address the sub-task of dialog state tracking (Lee et al., 2021; Zhu et al., 2022). Different from those works, we focus on the task of dialog response generation.

2.2 Prompt-based Learning

As an alternative to the fine-tuning paradigm, prompting involves a sequence of tokens appended to the input text, which can then induce the model to engage in a certain behavior suited to the task. Since the release of GPT-2 (Radford et al., 2018, 2019; Brown et al., 2020), many prompt-related papers have emerged. Most of the leading approaches in prompting use task-specific prompts, ranging from discrete prompts (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019) to continuous "soft prompts" (Li and Liang, 2021; Lester et al., 2021). These methods have a fixed prompt for each task. However, in dialog systems specifically, the context varies at every turn. In our work, we aim to design prompts which are context-dependent.
3 Problem Statement

Response generation is one of the tasks carried out in dialog systems, usually in addition to dialog state tracking (DST). Given a dialog context (previous turns between the system and the user) C = [u1, s1, ..., un−1, sn−1] and the current user utterance un, the goal of response generation is to generate the system response sn. Note that in the actual task, we generate delexicalized system responses, given all the ground-truth previous turns as input, following previous works (Hosseini-Asl et al., 2020; Wen et al., 2015).

Techniques mentioned in (Ham et al., 2020; Hosseini-Asl et al., 2020) rely on fully fine-tuning LLMs to carry out this task. In contrast, our approach builds on the prefix-tuning framework, but incorporates the dialog context C as an additional signal for the prefix tokens. As a supplement to context C, we add dialog state information D (up to the current turn) to further help response generation.
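The input layout above can be made concrete with a small sketch. This is illustrative only: the function name and the whitespace separator are assumptions, since the paper feeds tokenized turns to a seq2seq model rather than a joined string.

```python
def build_input(context, current_utterance, sep=" "):
    """Flatten a dialog context C = [u1, s1, ..., un-1, sn-1] plus the
    current user utterance un into a single sequence for a seq2seq model."""
    return sep.join(list(context) + [current_utterance])

# Example: a one-exchange context followed by the current user utterance u2.
context = ["i need a train to cambridge .",        # u1
           "what day would you like to travel ?"]  # s1
model_input = build_input(context, "i want to leave on tuesday .")
```

The target for the model given `model_input` would then be the (delexicalized) system response s2.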
4 Contextual Dynamic Prompting Framework

4.1 Prefix-tuning for Response Generation

Our work is built on top of prefix-tuning for generation tasks (Li and Liang, 2021), which adds a fixed set of tunable prefix tokens/prompts to the original input x to obtain a new input, [PREFIX; x]. Following the notation in (Li and Liang, 2021), we use Pθ[i, :] to denote the i-th prefix vector. Pθ is generated by:

    Pθ[:, :] = MLPθ(P′),    (1)

where P′ is a fixed smaller matrix given as input to a feedforward neural network (MLPθ). The training objective of prefix-tuning is the same as that of fine-tuning, i.e., the log-likelihood objective:

    maxθ log pφ(y | x),

where y is the decoder output and x is the input. θ represents the trainable parameters of the prefix-tuning feedforward neural network, and φ denotes all other parameters, which include the frozen parameters of the large language model.

Figure 1: The figures above indicate the differences between the vanilla prefix-tuning approach and our approach. In both variants, only the prefix tokens are tuned.

For our task of response generation, we concatenate the prefix with the dialog context and the current user utterance as input [PREFIX; u1, s1, ..., un−1, sn−1, un]. The target output is the system response sn, as seen in Figure 1 (a).

We adopt T5 (Raffel et al., 2020) as the pre-trained language model. T5 employs an encoder-decoder framework which is prevalent in seq2seq tasks (Sutskever et al., 2014; Cho et al., 2014).
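As a toy illustration of Eq. (1), the sketch below maps each row of a fixed matrix P′ through a two-layer feedforward network, yielding one prefix vector per row. This is a pure-Python stand-in, not the paper's implementation: the actual MLPθ operates on tensors and is trained jointly with the frozen LM.

```python
import math

def affine(x, weights, bias):
    # One linear layer: `weights` is a list of per-output weight vectors.
    return [sum(xi * wi for xi, wi in zip(x, w)) + b
            for w, b in zip(weights, bias)]

def mlp_prefix(p_prime, w1, b1, w2, b2):
    """Eq. (1): P_theta = MLP_theta(P'), applied row by row with a tanh
    hidden layer. Returns one prefix vector per row of P'."""
    prefix = []
    for row in p_prime:
        hidden = [math.tanh(v) for v in affine(row, w1, b1)]
        prefix.append(affine(hidden, w2, b2))
    return prefix
```

Note that nothing in `mlp_prefix` depends on the input x: the same prefix is produced for every example, which is exactly the limitation Section 4.2 addresses.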
4.2 Contextual Prefix-tuning

In vanilla prefix-tuning, the parameters of the prefix are fixed after training so that they can be reused for a particular task. However, a dialog system involves multiple turns of conversation between the system and the user. It is imperative in such systems to dynamically incorporate contextual information to carry out a meaningful conversation with the user. We explore how we can distill the dialog context information into the prefix with a prompt encoder.

Different from the original design, we want to encode additional signals into the prefix that differ for each input instance. In other words, we want to generate a contextual prefix, or contextual dynamic prompts. Formally, we modify equation (1) as follows:

    Pθ[:, :] = MLPθ(encoder(C)),    (2)

where C = [u1, s1, ..., un−1, sn−1] represents the dialog context. We first obtain the representation of the dialog context by feeding C into a T5 encoder which is kept frozen, as shown in Figure 1 (b). Subsequently, we use the prompt encoder, i.e., the feedforward neural network, to get the prefix. The generated prefix Pθ is then concatenated with only the current user utterance. Instead of concatenating the whole context as the input to the T5 decoder, we first distill its signal into the prefix tokens. As a consequence of freezing the T5 encoder which generates the context representation, we still have the same number of tunable parameters as in the original prefix-tuning framework.
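Eq. (2) changes only where the prefix comes from: it is now a function of the context. A minimal sketch follows; `toy_encoder` and `toy_mlp` are invented stand-ins for the frozen T5 encoder and the trainable MLPθ, respectively.

```python
def contextual_prefix(context, encoder, mlp):
    """Eq. (2): P_theta = MLP_theta(encoder(C)). The encoder is frozen;
    only the mlp (prompt encoder) would be trained."""
    return mlp(encoder(context))

def toy_encoder(context):
    # Stand-in for a frozen T5 encoder: one 2-d feature vector per turn.
    return [[float(len(turn)), float(turn.count(" ") + 1)] for turn in context]

def toy_mlp(hidden):
    # Stand-in for MLP_theta: mean-pool turn features into one prefix vector.
    n = len(hidden)
    return [[sum(col) / n for col in zip(*hidden)]]

# Different contexts now yield different prefixes, unlike the static P' in Eq. (1).
prefix = contextual_prefix(["hi there", "hello ."], toy_encoder, toy_mlp)
```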
4.3 Input-dependent Prefix-tuning with Dialog State

In most task-oriented dialog systems, we also have access to the dialog state at every turn, in addition to the dialog context. The dialog state holds information such as the requested slots and filled slots at every turn. We provide the dialog state D in addition to the context C to obtain contextual dynamic prompts. As a result, we modify equation (2) as:

    Pθ[:, :] = MLPθ(encoder(C; Dn−1)),    (3)

where we only provide the most recent dialog state Dn−1, which is an amalgamation of all previous dialog states D<n−1.
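Eq. (3) only extends the encoder input with the latest dialog state Dn−1. One way to sketch the concatenation [C; Dn−1] is below; the slot=value linearization is an assumption, as the paper does not specify a serialization format.

```python
def encoder_input_with_state(context, dialog_state, sep=" "):
    """Eq. (3): encode [C; D_{n-1}], i.e., the context followed by a
    linearized form of the most recent dialog state."""
    state = " ".join(f"{slot}={value}"
                     for slot, value in sorted(dialog_state.items()))
    return sep.join(list(context) + [state])

# D_{n-1} already accumulates all earlier states, so only it is appended.
x = encoder_input_with_state(
    ["i need a train to cambridge .", "what day would you like to travel ?"],
    {"train-destination": "cambridge"})
```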
5 Experimental Settings

5.1 Dataset and Metrics

We evaluate our proposed framework and model on the MultiWOZ 2.2 dataset (Zang et al., 2020; Budzianowski et al., 2018), a large-scale, multi-domain, human-human task-oriented dialog dataset collected via the Wizard-of-Oz framework, where one participant plays the role of the system. It consists of seven domains (hotel, restaurant, attraction, train, taxi, hospital, and police) and an additional domain, general, for acts such as greeting or goodbye. Due to its multi-domain setting, complex ontology, and flexible human expressions, developing dialog systems on MultiWOZ is extremely challenging. The training data contain 8437 dialogs; the dev and test sets contain 1000 dialogs each.

We use four evaluation metrics: BLEU (Papineni et al., 2002), the Inform rate, the Success rate, and the combined score. Inform measures whether the system provides an appropriate entity, and Success measures whether the system answers all the requested attributes. Specifically, the Inform rate relates to attributes that allow the user to constrain database searches, e.g., restaurant location or price range (the informational slots), and the Success rate focuses on requestable slots that can be asked for by the user, e.g., a phone number. Both are calculated at the level of dialogs. The combined score is calculated following (Mehri et al., 2019b) as BLEU + 0.5 * (Inform + Success). We followed a standard script[1] to report the different measures.

[Figure 1, panels: (a) The original prefix-tuning framework, where a set of prefix tokens is added to the input, which also consists of the dialog context C in addition to the current user input un. (b) Our changes, which incorporate the dialog context C into the prefix by passing it through a frozen T5 encoder.]

MultiWOZ 2.2                          BLEU   Inform  Success  Combined Score  Av. len.  #uniq. words  #uniq. 3-grams
Prefix-Tuning                         19.19  54.7    48.0     70.54           13.83     245           1671
Prefix-Tuning (with DS)               19.36  51.8    47.0     68.76           13.08     231           1626
Contextual Dynamic Prompt             19.16  58.1    50.5     73.46           14.16     231           1532
Contextual Dynamic Prompt (with DS)   17.94  77.2    68.8     90.94           14.02     282           2390

Table 1: Performance comparison. All model performances are based on features from all modalities. Contextual Dynamic Prompt (with DS) has the best combined score.
|
372 |
+
Human Evaluation
|
373 |
+
We chose a 10% subset of the evaluation set (ran-
|
374 |
+
domly shuffled) conversations with a total of 728
|
375 |
+
turns across them and provided annotators with the
|
376 |
+
responses generated by each of the methods de-
|
377 |
+
scribed in section 4. Annotators were asked to rate
|
378 |
+
each agent on a turn-level and to also pick the agent
|
379 |
+
which carried out the best conversation. If annota-
|
380 |
+
tors felt more than one agent did well, they could
|
381 |
+
choose multiple agents. The agent numbers, when
|
382 |
+
provided to annotators, were shuffled to avoid bias.
|
383 |
+
Each agent is described as:
|
384 |
+
• Agent 1: Incorporates only prefix-tuning
|
385 |
+
• Agent 2: Incorporates prefix-tuning with Dia-
|
386 |
+
log State
|
387 |
+
• Agent 3: Incorporates contextual dynamic
|
388 |
+
prompts
|
389 |
+
• Agent 4: Incorporates contextual dynamic
|
390 |
+
prompts with Dialog State
|
391 |
+
When annotating on turn level, from these 728
|
392 |
+
turns, we saw that the agents tied on 596 occasions,
|
393 |
+
agent 1 had outright win on 12 occasions, agent
|
394 |
+
2 on 22, agent 3 on 33 occasions, and agent 4 on
|
395 |
+
65 occasions. This shows that our technique of
|
396 |
+
using contextual dynamic prompts for generating
|
397 |
+
responses is effective (Examples in Appendix B).
|
398 |
+
1https://github.com/Tomiinek/MultiWOZ_
|
399 |
+
Evaluation
|
400 |
+
Additionally, on the conversation level, we no-
|
401 |
+
ticed that across 100 conversations, 37 were tied,
|
402 |
+
and agents 3 and 4 were preferred in a total of
|
403 |
+
53 conversations confirming our hypothesis that
|
404 |
+
incorporating context into prompts leads to better
|
405 |
+
responses. We request readers to refer to Appendix
|
406 |
+
A and B for more details about the annotation task.
|
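As a quick arithmetic check on the turn-level counts reported above (a trivial tally written here, not code from the paper):

```python
# Turn-level outcomes over the annotated subset: ties plus outright wins per agent.
turn_outcomes = {"tie": 596, "agent_1": 12, "agent_2": 22, "agent_3": 33, "agent_4": 65}
total_turns = sum(turn_outcomes.values())  # 596 + 12 + 22 + 33 + 65 = 728 turns
```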
6 Results

As shown in Table 1, contextual dynamic prompting with dialog states obtains a combined score of 90.94, a 20-point jump from our baseline (prefix-tuning). In addition, even though we cannot explicitly explain the drop in BLEU, the massive jumps in both Success and Inform suggest more transparency and coherence in the responses generated by the input-dependent prefix-tuning, as these metrics focus on the quality of the informational and requestable slots in each turn. When comparing our results with the human annotations, we also see that Agent 4, which uses contextual dynamic prompting, wins 38 conversations (out of 100). This is the majority of wins, compared to Agent 1 winning only 3 conversations and Agent 2 winning 7. This further emphasizes that adding contextual dynamic prompts leads to better-quality responses.
7 Conclusion

In our work, we proposed an approach that performs contextual dynamic prompting, where prompts are learnt from dialog contexts with the goal of distilling useful prompting signals. In our experiments, we showed that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019a) by 3 points, and by 20 points when dialog states are incorporated, compared to the baseline. Our technique does not expose the models to additional knowledge sources. Human annotation on these conversations found that agents which incorporate context into prompts were preferred over agents with vanilla prefix-tuning.

Limitations

While our work explores a new technique of contextual dynamic prompts for response generation, we carried out our experiments on a dataset in the English language. A potential limitation of this work is the transfer of our findings on an English dataset to a multi-lingual dataset, or to a mono-lingual dataset in a language other than English. We plan to address this in our future work, and we also request the help of the research community in doing so.
References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723.

Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583–592.

Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, Jian Sun, and Yongbin Li. 2022. GALAXY: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10749–10757.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu,
|
495 |
+
Semih Yavuz, and Richard Socher. 2020.
|
496 |
+
A sim-
|
497 |
+
ple language model for task-oriented dialogue. Ad-
|
498 |
+
vances in Neural Information Processing Systems,
|
499 |
+
33:20179–20191.
|
500 |
+
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski,
|
501 |
+
Bruna Morrone, Quentin de Laroussilhe, Andrea
|
502 |
+
Gesmundo, Mona Attariyan, and Sylvain Gelly.
|
503 |
+
2019. Parameter-efficient transfer learning for NLP.
|
504 |
+
CoRR, abs/1902.00751.
|
505 |
+
Chia-Hsuan Lee, Hao Cheng, and Mari Ostendorf.
|
506 |
+
2021.
|
507 |
+
Dialogue state tracking with a language
|
508 |
+
model using schema-driven prompting. In Proceed-
|
509 |
+
ings of the 2021 Conference on Empirical Methods
|
510 |
+
in Natural Language Processing, pages 4937–4949,
|
511 |
+
Online and Punta Cana, Dominican Republic. Asso-
|
512 |
+
ciation for Computational Linguistics.
|
513 |
+
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
|
514 |
+
The power of scale for parameter-efficient prompt
|
515 |
+
tuning. arXiv preprint arXiv:2104.08691.
|
516 |
+
Xiang Lisa Li and Percy Liang. 2021.
|
517 |
+
Prefix-
|
518 |
+
tuning: Optimizing continuous prompts for genera-
|
519 |
+
tion. arXiv preprint arXiv:2101.00190.
|
520 |
+
Zhaojiang Lin, Andrea Madotto, and Pascale Fung.
|
521 |
+
2020a.
|
522 |
+
Exploring versatile generative language
|
523 |
+
model via parameter-efficient transfer learning. In
|
524 |
+
Findings of the Association for Computational Lin-
|
525 |
+
guistics: EMNLP 2020, pages 441–459, Online. As-
|
526 |
+
sociation for Computational Linguistics.
|
527 |
+
Zhaojiang Lin, Andrea Madotto, Genta Indra Winata,
|
528 |
+
and Pascale Fung. 2020b.
|
529 |
+
Mintl:
|
530 |
+
Minimalist
|
531 |
+
transfer learning for task-oriented dialogue systems.
|
532 |
+
arXiv preprint arXiv:2009.12005.
|
533 |
+
Qi Liu, Lei Yu, Laura Rimell, and Phil Blunsom.
|
534 |
+
2021. Pretraining the noisy channel model for task-
|
535 |
+
oriented dialogue. Transactions of the Association
|
536 |
+
for Computational Linguistics, 9:657–674.
|
537 |
+
Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi.
|
538 |
+
2019a. Structured fusion networks for dialog. In
|
539 |
+
Proceedings of the 20th Annual SIGdial Meeting
|
540 |
+
on Discourse and Dialogue, pages 165–177, Stock-
|
541 |
+
holm, Sweden. Association for Computational Lin-
|
542 |
+
guistics.
|
543 |
+
Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi.
|
544 |
+
2019b. Structured fusion networks for dialog. arXiv
|
545 |
+
preprint arXiv:1907.10016.
|
546 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
|
547 |
+
Jing Zhu. 2002. Bleu: a method for automatic eval-
|
548 |
+
uation of machine translation. In Proceedings of the
|
549 |
+
40th annual meeting of the Association for Compu-
|
550 |
+
tational Linguistics, pages 311–318.
|
551 |
+
Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayan-
|
552 |
+
deh, Lars Liden, and Jianfeng Gao. 2021. Soloist:
|
553 |
+
Buildingtask bots at scale with transfer learning and
|
554 |
+
machine teaching. Transactions of the Association
|
555 |
+
for Computational Linguistics, 9:807–824.
|
556 |
+
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton
|
557 |
+
Bakhtin, Yuxiang Wu, Alexander H Miller, and Se-
|
558 |
+
bastian Riedel. 2019. Language models as knowl-
|
559 |
+
edge bases? arXiv preprint arXiv:1909.01066.
|
560 |
+
|
561 |
+
Alec Radford, Karthik Narasimhan, Tim Salimans,
|
562 |
+
Ilya Sutskever,
|
563 |
+
et al. 2018.
|
564 |
+
Improving lan-
|
565 |
+
guage understanding by generative pre-training.
|
566 |
+
https://openai.com/blog/language-unsupervised/.
|
567 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
|
568 |
+
Dario Amodei, Ilya Sutskever, et al. 2019.
|
569 |
+
Lan-
|
570 |
+
guage models are unsupervised multitask learners.
|
571 |
+
OpenAI blog, 1(8):9.
|
572 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
|
573 |
+
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
|
574 |
+
Wei Li, Peter J Liu, et al. 2020. Exploring the limits
|
575 |
+
of transfer learning with a unified text-to-text trans-
|
576 |
+
former. J. Mach. Learn. Res., 21(140):1–67.
|
577 |
+
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea
|
578 |
+
Vedaldi. 2017.
|
579 |
+
Learning multiple visual domains
|
580 |
+
with residual adapters. CoRR, abs/1705.08045.
|
581 |
+
Timo Schick and Hinrich Schütze. 2020.
|
582 |
+
Exploit-
|
583 |
+
ing cloze questions for few shot text classification
|
584 |
+
and natural language inference.
|
585 |
+
arXiv preprint
|
586 |
+
arXiv:2001.07676.
|
587 |
+
Taylor Shin, Yasaman Razeghi, Robert L Logan IV,
|
588 |
+
Eric Wallace, and Sameer Singh. 2020. Autoprompt:
|
589 |
+
Eliciting knowledge from language models with
|
590 |
+
automatically generated prompts.
|
591 |
+
arXiv preprint
|
592 |
+
arXiv:2010.15980.
|
593 |
+
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta,
|
594 |
+
Deng Cai, Yi-An Lai, and Yi Zhang. 2021. Multi-
|
595 |
+
task pre-training for plug-and-play task-oriented di-
|
596 |
+
alogue system. arXiv preprint arXiv:2109.14739.
|
597 |
+
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014.
|
598 |
+
Sequence to sequence learning with neural networks.
|
599 |
+
In Proceedings of the 27th International Conference
|
600 |
+
on Neural Information Processing Systems - Vol-
|
601 |
+
ume 2, NIPS’14, page 3104–3112, Cambridge, MA,
|
602 |
+
USA. MIT Press.
|
603 |
+
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-
|
604 |
+
Hao Su, David Vandyke, and Steve Young. 2015. Se-
|
605 |
+
mantically conditioned lstm-based natural language
|
606 |
+
generation for spoken dialogue systems.
|
607 |
+
arXiv
|
608 |
+
preprint arXiv:1508.01745.
|
609 |
+
Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021.
|
610 |
+
Ubar: Towards fully end-to-end task-oriented dialog
|
611 |
+
system with gpt-2. In Proceedings of the AAAI Con-
|
612 |
+
ference on Artificial Intelligence, volume 35, pages
|
613 |
+
14230–14238.
|
614 |
+
Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara,
|
615 |
+
Raghav Gupta, Jianguo Zhang, and Jindong Chen.
|
616 |
+
2020. Multiwoz 2.2: A dialogue dataset with addi-
|
617 |
+
tional annotation corrections and state tracking base-
|
618 |
+
lines. In Proceedings of the 2nd Workshop on Nat-
|
619 |
+
ural Language Processing for Conversational AI,
|
620 |
+
ACL 2020, pages 109–117.
|
621 |
+
Qi Zhu, Bing Li, Fei Mi, Xiaoyan Zhu, and Minlie
|
622 |
+
Huang. 2022. Continual prompt tuning for dialog
|
623 |
+
state tracking. arXiv preprint arXiv:2203.06654.
A Human Evaluation Task

We explored contextual dynamic prompting strategies for the response generation task using the MultiWOZ 2.2 (Budzianowski et al., 2018; Zang et al., 2020) dataset and noticed that the combined score we obtained was significantly better than that of the baseline prefix-tuning method of response generation. To understand whether the agents which incorporated contextual dynamic prompts did indeed provide a better conversational experience, we designed a small human evaluation task to test our hypothesis.

We picked a random subset of 10% of the conversations from the original MultiWOZ test data to perform this analysis. Once we obtained this random set, we ran our four model variants as described in Section 4 on the conversations to obtain system responses for each of them. We then presented the different agents' responses to the annotator as shown in Table 2 below. In order to avoid potential biases, we shuffled the order of the agents between our annotators, i.e., Agent 1 for annotator a would not be Agent 1 for annotator b. We kept track of which agents corresponded to which of our four methods prior to distribution of data amongst the annotators.
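The shuffling-and-bookkeeping step described above can be sketched as follows. The method names and the per-annotator seeding are illustrative assumptions, not details from the paper.

```python
import random

# Hypothetical names for the four model variants of Section 4.
METHODS = ["prefix_tuning", "prefix_tuning_with_states",
           "contextual_prompt", "contextual_prompt_with_states"]

def assign_agents(annotator_id: str) -> dict:
    """Return the Agent-label -> method mapping shown to one annotator.

    Shuffling per annotator hides which method is behind each label;
    the mapping is kept so votes can be re-mapped afterwards.
    """
    rng = random.Random(f"shuffle-{annotator_id}")  # deterministic per annotator
    order = METHODS[:]
    rng.shuffle(order)
    return {f"Agent {i + 1}": method for i, method in enumerate(order)}

key_a = assign_agents("annotator_a")
key_b = assign_agents("annotator_b")
# Every annotator still sees all four methods, just relabeled.
assert sorted(key_a.values()) == sorted(METHODS)
assert sorted(key_b.values()) == sorted(METHODS)
```

Seeding on the annotator id keeps the mapping reproducible, so a vote for "Agent 2" from annotator a can always be traced back to the underlying method.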
The annotators were given instructions to read every turn of the conversation and provide a number between 1 and 4 for the agent which they thought performed the best for that turn. If the annotators found that there was a tie, they could pick more than one agent, e.g., [agent_a, agent_b]. In addition to this instruction, annotators were asked to read the entire conversation and pick the agent which performed the best, once again with an option to pick multiple agents. Table 3 below shows an example annotation style for a single conversation spanning 6 turns. There is an annotation at every turn and a single annotation at the end of the conversation. We tallied the results, re-mapped all agents back to their methods, and found that agents 3 and 4, as mentioned in Section 5.2, were preferred at the conversation level in a total of 53 of the 100 conversations, while agents 1 and 2 were preferred in only 10 conversations in the entire set of 100.
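The tallying and re-mapping can be sketched as below. Since ties allow multiple agents per vote, each pick is a list; crediting every agent in a tied pick is our assumption, as the paper does not spell out its crediting scheme.

```python
from collections import Counter

def tally(conversation_picks):
    """Count conversation-level wins.

    Each element is the list of winning agent ids for one conversation;
    a tie like [3, 4] credits every agent listed (assumed scheme).
    """
    wins = Counter()
    for picks in conversation_picks:
        for agent in picks:
            wins[agent] += 1
    return wins

# Toy votes (not the paper's data): one tie between agents 3 and 4.
wins = tally([[4], [3, 4], [2], [4]])
print(wins)  # Counter({4: 3, 3: 1, 2: 1})
```

After tallying, the per-annotator mapping is inverted so that each agent label's count is attributed to the method it stood for.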
Turn num | User turn | Agent 1 response | Agent 2 response | Agent 3 response | Agent 4 response
1 | | | | |
2 | | | | |
3 | | | | |
4 | | | | |
5 | | | | |
6 | | | | |
7 | | | | |

Table 2: The format which is presented to annotators while performing turn-level and conversation-level annotation. The agents are shuffled between the annotators to avoid biasing them.

Turn num | Turn level | Conversation level
1 | 2 |
2 | [3,4] |
3 | 2 |
4 | 3 |
5 | 4 |
6 | [3,4] | 3

Table 3: We asked annotators to provide two levels of annotation for each conversation - turn level and conversation level. Annotators were allowed to pick more than one agent in case of a tie.
B Examples

In the tables below, we provide some examples from the human annotation task where annotators picked Agent 4 as the best performing agent for the conversation (we performed the re-mapping to fit the agent numbers in Section 5.2). Table 4 shows an example where Agent 4 was rightly able to ask the customer about their area of preference for dining, and also considers a booking time change from the user. Table 5 shows an example where Agent 4 is able to take into consideration all the different constraints mentioned by the user, in terms of slots, in the response that it generates. Table 6 shows an example where Agent 4 is able to correctly respond to the user request for the number of people to reserve the table for, which none of the other agents captured. All these examples showed that Agent 4, which uses contextual dynamic prompts with dialog states, was indeed able to provide smoother responses to the customer input compared to the other agents.
C Compute Infrastructure and Experimental Settings Used

All our experiments were carried out on public AWS p3 GPU instances, which are backed by Tesla V100 GPUs. All four variants of our models required 18–24 hours to reach convergence on these machines. Learning rates were varied based on the performance of the models on the validation split of our dataset.
Turn 1
User: i am looking for a place to dine. the restaurant should be expensive and should serve chinese food.
Agent 1: there are [choice] [pricerange] [food] restaurants in town. do you have a preference for area or price range?
Agent 2: i have [choice] [pricerange] [food] restaurants. do you have a preference for area?
Agent 3: there are [choice] [pricerange] [food] restaurants. do you have a price range in mind?
Agent 4: there are [choice] [food] restaurants in the [pricerange] price range. is there a particular area you are interested in?

Turn 2
User: not in any specific area please. i need to book a table for 3 at 14:00 on saturday.
Agent 1: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref]. is there anything else i can help you with?
Agent 2: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref].
Agent 3: i was able to book you at [name] for [bookday] at [booktime]. your reference number is [ref]. is there anything else i can help you with?
Agent 4: i have made a reservation for you at [name]. the table will be reserved for 15 minutes. the reference number is [ref]. is there anything else i can help you with?

Turn 3
User: how about 13:00 then?
Agent 1: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref].
Agent 2: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref].
Agent 3: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref]. is there anything else i can help you with?
Agent 4: ok, i have you booked at [name]. your reference number is [ref]. is there anything else i can help you with?

Turn 4
User: what is the reference number for my booking?
Agent 1: your reference number is [ref]. is there anything else i can help you with?
Agent 2: the reference number is [ref]. is there anything else i can help you with?
Agent 3: your reference number is [ref]. is there anything else i can help you with?
Agent 4: the reference number is [ref].

Turn 5
User: no thank you, that is all i needed!
Agent 1: you're welcome. have a great day!
Agent 2: you're welcome. have a great day!
Agent 3: you're welcome. have a great day!
Agent 4: you're welcome, have a great day!

Table 4: An example where Agent 4 was picked as the best performing agent in a conversation.
Turn 1
User: i am looking for a train that departs from peterborough and goes to cambridge.
Agent 1: there are [choice] trains leaving from [departure] to [destination]. what day would you like to travel?
Agent 2: what day would you like to travel?
Agent 3: there are [choice] trains leaving [departure] from [departure] to [destination]. what day would you like to travel?
Agent 4: i have [choice] trains departing from [departure] to [destination]. what day would you like to travel?

Turn 2
User: we will be leaving on sunday afternoon.
Agent 1: [trainid] leaves [departure] at [leaveat] and arrives in [destination] at [arriveby]. would you like me to book that for you?
Agent 2: i have [choice] trains available. what time would you like to leave?
Agent 3: [trainid] leaves at [leaveat] and arrives in [destination] at [arriveby]. would you like me to book it for you?
Agent 4: [trainid] leaves [departure] at [leaveat] and arrives at [destination] at [arriveby]. would that work for you?

Turn 3
User: yes, it should leave after 20:15 and leave on monday, not sunday. please give me a price, travel time, and arrival time for any train.
Agent 1: [trainid] leaves at [leaveat] and arrives in [destination] at [arriveby]. the price is [price].
Agent 2: [trainid] leaves at [leaveat] and arrives at [arriveby]. would you like me to book that for you?
Agent 3: [trainid] leaves at [leaveat] and arrives at [arriveby]. the price is [price]. would you like me to book it for you?
Agent 4: [trainid] leaves [departure] at [leaveat] and arrives in [destination] at [arriveby]. the cost is [price] and the travel time is [duration].

Turn 4
User: yes please. what is the travel time? i'll also need a hotel in the expensive price range. can you help me with that as well?
Agent 1: the travel time is [duration]. is there anything else i can help you with?
Agent 2: the travel time is [duration].
Agent 3: the travel time is [duration]. is there anything else i can help you with?
Agent 4: the travel time is [duration]. is there anything else i can help you with?

Turn 5
User: yes, i'd like a 2 star rating and i'll need it to be in the east side of town.
Agent 1: i have [choice] [type] in the [area] with [stars] stars. do you have a preference?
Agent 2: there are [choice] [type] in the [area] that are [stars] stars. do you have a preference?
Agent 3: there are [choice] [type] in the [area] that are [pricerange] and [pricerange]. do you have a preference?
Agent 4: [name] is a [stars] star hotel in the [area]. would you like me to book it for you?

Turn 6
User: yes, i would like to book it for 3 people starting that same day for 4 nights.
Agent 1: booking was successful. reference number is : [ref]. is there anything else i can help you with?
Agent 2: booking was successful. reference number is : [ref].
Agent 3: booking was successful, the total fee is [price] payable at the station. reference number is : [ref].
Agent 4: booking was successful. the total fee is [price] payable at the station. reference number is : [ref].

Turn 7
User: that is all thank you goodbye.
Agent 1: you're welcome. have a great day!
Agent 2: you're welcome. have a great day.
Agent 3: you're welcome. have a great day.
Agent 4: you're welcome. have a great day.

Table 5: Another example where Agent 4 was picked as the best performing agent in a conversation.
Turn 1
User: can you help me find a restaurant that serves african food with a moderate price range please
Agent 1: there are no [food] restaurants in the [pricerange] price range. would you like to try a different type of food?
Agent 2: there are no [food] restaurants in the [pricerange] price range. would you like to try a different type of food?
Agent 3: there are no [food] restaurants in the [pricerange] price range. would you like to try a different type of food?
Agent 4: [name] is a [food] restaurant in the [area]. would you like to try it?

Turn 2
User: yes how about asian food in the same price range?
Agent 1: there are [choice] [food] restaurants in the [area] and [choice] in the [area]. do you have a preference?
Agent 2: there are [choice] [food] restaurants in the [pricerange] price range. do you have a preference for area?
Agent 3: there are [choice] [pricerange] [food] restaurants in the [area]. do you have a price range in mind?
Agent 4: [name] is a [food] restaurant in the [pricerange] price range.

Turn 3
User: will you please book a table for 6 at 13:45 on friday?
Agent 1: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref]. is there anything else i can help you with?
Agent 2: booking was successful. the table will be reserved for 15 minutes. reference number is : [ref].
Agent 3: i was able to book you at [name] for [bookday] at [booktime]. your reference number is [ref]. is there anything else i can help you with?
Agent 4: i have made a reservation for [bookpeople] at [booktime] on [bookday] at [booktime]. the table will be reserved for 15 minutes. the reference number is [ref]. is there anything else i can help you with?

Turn 4
User: that's all i needed. thanks a lot for the help!
Agent 1: you're welcome. have a great day!
Agent 2: you're welcome. have a great day!
Agent 3: you're welcome. have a great day!
Agent 4: you're welcome. have a great day!

Table 6: An example where Agent 4 was picked as the best performing agent in a conversation.
AtFQT4oBgHgl3EQfMjZB/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff

B9E1T4oBgHgl3EQfVwR_/content/2301.03106v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6ebe162d490b486384600db2c6c90ff26771958cd56b76294a51398e4606dcb
+size 273895

B9E1T4oBgHgl3EQfVwR_/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b62d135c8c43e6745bdf4f33677c3451a1c74bd279910262932a8da7ae2533fd
+size 589869

B9E1T4oBgHgl3EQfVwR_/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9dc42340d775d48481006ef64d7027ae33a7856710a836d9f62b22521d7a043
+size 27538

BdE0T4oBgHgl3EQfPwDB/content/tmp_files/2301.02184v1.pdf.txt
ADDED
@@ -0,0 +1,2231 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations

Sagnik Majumder1,2*  Hao Jiang2  Pierre Moulon2  Ethan Henderson2  Paul Calamia2  Kristen Grauman1,3  Vamsi Krishna Ithapu2

1UT Austin  2Reality Labs Research, Meta  3FAIR

Abstract

Can conversational videos captured from multiple egocentric viewpoints reveal the map of a scene in a cost-efficient way? We seek to answer this question by proposing a new problem: efficiently building the map of a previously unseen 3D environment by exploiting shared information in the egocentric audio-visual observations of participants in a natural conversation. Our hypothesis is that as multiple people ("egos") move in a scene and talk among themselves, they receive rich audio-visual cues that can help uncover the unseen areas of the scene. Given the high cost of continuously processing egocentric visual streams, we further explore how to actively coordinate the sampling of visual information, so as to minimize redundancy and reduce power use. To that end, we present an audio-visual deep reinforcement learning approach that works with our shared scene mapper to selectively turn on the camera to efficiently chart out the space. We evaluate the approach using a state-of-the-art audio-visual simulator for 3D scenes as well as real-world video. Our model outperforms previous state-of-the-art mapping methods, and achieves an excellent cost-accuracy tradeoff. Project: http://vision.cs.utexas.edu/projects/chat2map.
1. Introduction

The spatial layout of the environment around us is fundamental to understanding our physical context. By representing the walls, furniture, and other major structures in a space, scene maps ground activity and objects in a persistent frame of reference, facilitating high-level reasoning for many downstream applications in augmented reality (AR) and robotics. For example, episodic memory [18, 30] aims to relocalize lost objects observed in first-person video (where are my keys?); floorplan estimation [10, 45, 53] aims to chart out the area and shapes of complex buildings; navigating agents try to discover routes in unfamiliar spaces [4, 11, 60].

*Work done during an internship at Reality Labs Research, Meta

Figure 1. Given egocentric audio-visual observations from multiple people wearing AR glasses and moving and conversing (left), we aim to accurately map the scene (right). To mitigate cost, our model receives audio continuously but learns to selectively employ the ego cameras only when the visual input is expected to be informative.

While traditional computer vision approaches for mapping (e.g., visual SLAM) are highly effective when extensive exposure to the environment is possible, in many real-world scenarios only a fraction of the space is observed by the camera. Recent work shows the promise of sensing 3D spaces with both sight and sound [8, 14, 26, 28, 59]: listening to echoes bounce around the room can reveal the depth and shape of surrounding surfaces, and even help extrapolate a floorplan beyond the camera's field of view or behind occluded objects [59].

While we are inspired by these advances, they also have certain limitations. Often systems will emit sounds (e.g., a frequency sweep) into the environment to ping for spatial information [1, 14, 15, 24, 28, 44, 59, 69], which is intrusive if done around people. Furthermore, existing audio-visual models assume that the camera is always on grabbing new frames, which is wasteful if not intractable, particularly on lightweight, low-power computing devices in AR settings.

arXiv:2301.02184v1 [cs.CV] 4 Jan 2023

We introduce Chat2Map, a new scene mapping task aimed at eliminating these challenges. In the proposed setting, multiple people converse as they move casually through the scene while wearing AR glasses equipped with an egocentric camera, microphones, and potentially other sensors (e.g., for odometry).¹ Given their egocentric audio-visual data streams, the goal is to infer the ground-plane occupancy map for the larger environment around them. See Figure 1.

We observe that audio-visual data from the egos' interactions will naturally reflect scene structure. First, as they walk and talk, their movements reveal spaces like corridors, doorways, and large rooms, in both modalities. Second, the speech captured by the device-wearer's cameras and microphones can be localized to different speakers, which, compared to active sound emission, is non-intrusive.

To realize this vision, we develop a novel approach to efficient scene mapping from multi-ego conversations. Our approach has two key elements: a shared scene mapper and a visual sampling policy. For the former, we devise a transformer-based mapper that incorporates the multiple data streams to infer a map beyond the directly observed areas, and, most importantly, that enables communication among the egos about their observations and states in the 3D space to improve mapping accuracy. For the latter, our idea is to relax the common assumption of an "always-on" camera, and instead actively select when to sample visual frames from any one of the ego cameras. Intuitively, certain regions where the egos move will be more or less important for mapping (e.g., corners of the room, doors). We train a sampling policy with deep reinforcement learning that activates the visual feed only when it is anticipated to complement the continuous audio feed. This is a cost-conscious approach, mindful that switching on a camera is much more power-consuming than sensing audio with microphones [2].

We demonstrate our approach using a state-of-the-art audio-visual simulator for 3D scenes as well as some real-world video input. We can successfully map an unfamiliar environment given only partial visibility via multiple conversing people moving about the scene. Compared to sampling all visual frames, our model reduces the visual processing by 87.5% while the mapping accuracy declines marginally (∼9%).
+
2. Related Work
|
151 |
+
Visual scene mapping.
|
152 |
+
Past works tackle scene mapping
|
153 |
+
using 3D Manhattan layouts [20, 73, 80, 85, 86], detailed
|
154 |
+
floorplans [10, 45, 53, 71, 78], occupancy [23, 39, 52, 61,
|
155 |
+
67, 68], and semantic maps [51]. Manhattan layouts in-
|
156 |
+
clude structured outputs like scene boundaries [73, 85, 86],
|
157 |
+
corners [85, 86], and floor/ceilings [80, 86], but do not gen-
|
158 |
+
eralize to unseen environment regions. Floorplan estimation
|
159 |
+
1Throughout, we call each person participating in the conversation an
|
160 |
+
“ego" for short.
|
161 |
+
methods use dense scans of 3D scenes to predict geometric
|
162 |
+
(walls, exterior/ interior) and semantic layouts (room type,
|
163 |
+
object type, etc.), rely on extensive human walkthroughs
|
164 |
+
with RGB-D [10, 45] or 3D point cloud [53, 71] scans, and
|
165 |
+
are usually limited to polygonal layouts [10, 45, 53, 71, 78].
|
166 |
+
Occupancy maps traditionally rely on wide field-of-view
|
167 |
+
(FoV) LiDAR scanners [62] or evaluate on simple 2D envi-
|
168 |
+
ronments wihtout non-wall obstacles [23, 39, 68, 68]. More
|
169 |
+
recent methods [4, 5, 11, 60] train an embodied agent to
|
170 |
+
explore and build topdown maps of more complex scenes
|
171 |
+
using RGB-D. On the contrary, our method uses both vision
|
172 |
+
and audio from the observations of a group of conversing
|
173 |
+
people for mapping. Rather than steer the camera of a robot
|
174 |
+
to map the scene, our task requires processing passive video
|
175 |
+
from human camera wearers.
|
176 |
+
Audio-visual scene mapping.
|
177 |
+
To our knowledge, the only
|
178 |
+
prior work to translate audio-visual inputs into a general (ar-
|
179 |
+
bitrarily shaped) floorplan maps is AV-Floorplan [59]. Un-
|
180 |
+
like AV-Floorplan, our method maps from speech in natural
|
181 |
+
human conversations, which avoids emitting intrusive fre-
|
182 |
+
quency sweep signals to generate echoes. In addition, a
|
183 |
+
key goal of our work is to reduce mapping cost by skipping
|
184 |
+
redundant visual frames. Our experiments demonstrate the
|
185 |
+
benefits of our model design over AV-Floorplan [59].
|
186 |
+
Audio(-visual) spatial understanding.
|
187 |
+
More broadly, be-
|
188 |
+
yond the mapping task, various methods leverage audio for
|
189 |
+
geometric and material information about the 3D scene and
|
190 |
+
its constituent objects. Prior work relies on acoustic reflec-
|
191 |
+
tions to estimate the shape of an objects [44]. Echolocation
|
192 |
+
is used in robotics to estimate proximity to surrounding sur-
|
193 |
+
faces [1, 15, 24, 69]. Together, vision and audio can better
|
194 |
+
reveal the shape and materials of objects [54, 65, 84], self-
|
195 |
+
supervise imagery [28], and improve depth sensing [40, 81].
|
196 |
+
Recent work exploits correlations between spatial audio and
|
197 |
+
imagery to reason about scene acoustics [7, 49] or aid active
|
198 |
+
embodied navigation [6, 9, 19, 27, 83] and source separa-
|
199 |
+
tion [47, 48]. No prior work intelligently captures images
|
200 |
+
during conversations to efficiently map a scene.
|
201 |
+
Multi-agent spatial understanding.
|
202 |
+
There is existing
|
203 |
+
work [17, 33, 35, 36, 57] in the visual multi-agent reinforce-
|
204 |
+
ment learning (MARL) community that learns collaborative
|
205 |
+
agents for performing tasks like relocating furniture [35, 36],
|
206 |
+
playing 3D multi-player games [34], coordinated scene ex-
|
207 |
+
ploration [33], or multi-object navigation [57]. In such set-
|
208 |
+
tings, the collaborative agents actively interact with the envi-
|
209 |
+
ronment to learn a shared scene representation for success-
|
210 |
+
fully completing their task. In contrast, we aim to learn a
|
211 |
+
shared geometric map of a 3D scene given passive observa-
|
212 |
+
tions that come from the trajectories chosen by a group of
|
213 |
+
people involved in a natural conversation.
|
214 |
+
Efficient visual sampling in video.
|
215 |
+
Efficient visual sam-
|
216 |
+
pling has been studied in the context of video recogni-
|
217 |
+
2
|
218 |
+
|
219 |
+
tion [29, 42, 43, 79, 82] and summarization [12, 72] with
|
220 |
+
the goal of selectively and smartly processing informative
|
221 |
+
frames, which can both reduce computational cost and im-
|
222 |
+
prove recognition performance. More closely related to
|
223 |
+
our approach are methods that use audio for the decision-
|
224 |
+
making [29, 42, 56]. Different from the above, we use ef-
|
225 |
+
ficient visual sampling in the context of mapping scenes.
|
226 |
+
Furthermore, in our case an online sampling decision needs
|
227 |
+
to be made at every step before looking at the current visual
|
228 |
+
frame (or frames from future steps).
|
3. Chat2Map Task Formulation

We propose a novel task: efficient and shared mapping of scenes from multi-ego conversations.

Without loss of generality, we consider two egos, E_1 and E_2, each wearing AR glasses equipped with an RGB-D camera and a multi-channel microphone array. The egos have a conversation and move around in an unmapped 3D environment. Each conversation is T steps long. At each step t, the ego E_i's glasses receive an observation O_i,t = (V_i,t, S_i,t, P_i,t, S′_i,t, P′_i,t). V_i,t is the 90° FoV RGB-D image and S_i,t is the speech waveform uttered by E_i, as observed from its pose P_i,t = (x_i,t, y_i,t, θ_i,t), where (x_i,t, y_i,t) denotes its location and θ_i,t denotes its orientation in the 3D scene. S′_i,t is the speech of the other ego E′_i (the other person involved in the conversation), as perceived by E_i (note, the voice sounds different depending on the listener position), and P′_i,t is E′_i's pose relative to E_i. Modern AR glasses, like Bose Frames or Facebook Aria, already support capturing such multi-sensory observations, making it possible to have a real-world instantiation of our task.

Given the real-time observation stream O for the egos, where O = {O_i,t : i = 1, 2; t = 1 . . . T}, and a total budget of visual frames B, we aim to learn a model that can accurately estimate the top-down occupancy map M of the scene without exceeding the visual budget. We assume the first visual frames (at t = 0) for both egos to be observed by the model. Thus we aim to learn a policy that samples B frames from 2(T − 1) choices (which are not considered a batch, but rather unfold in sequence) and a mapper that predicts the scene map given the sampled frames. Recall that our goal is to build a model that samples the expensive visual frames only when absolutely needed for scene mapping. This is captured by the constraint 1 ≤ B ≪ 2(T − 1).

There are three important aspects to our task. First, it requires learning from both vision and audio. While the visual signal carries rich information about the local scene geometry, there can be a high amount of redundancy in the visual feed captured during a conversation (e.g., the egos may visit the same location more than once or change their viewpoint only marginally). Second, not only does the long-range nature of audio help uncover global scene properties [21, 59] like shape and size, beyond what's visible in images; we can also exploit audio to undersample the visual frames, thereby reducing the cost of capturing and processing sensory inputs for mapping. Third, shared mapping of a scene implies jointly leveraging the complementary information in the audio (speech) from self and other egos, and the synergy of the audio-visual cues from multiple egos. These insights form the basis of our key hypothesis in this task: selectively sampling visual frames during a conversation involving egos that share information with each other can facilitate efficient mapping of a scene.
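To make the task interface concrete, the sketch below models the per-step observation tuple O_i,t and the visual-budget constraint 1 ≤ B ≪ 2(T − 1) described above. The container fields, names, and the helper function are illustrative assumptions for exposition, not the paper's actual code.

```python
from dataclasses import dataclass
from typing import Optional, Sequence
import numpy as np

# Hypothetical container mirroring O_i,t = (V_i,t, S_i,t, P_i,t, S'_i,t, P'_i,t);
# field names and types are illustrative, not from the paper.
@dataclass
class EgoObservation:
    rgbd: Optional[np.ndarray]   # 90-degree-FoV RGB-D frame; None if skipped
    speech_self: np.ndarray      # waveform uttered by this ego
    pose: tuple                  # (x, y, theta) in the 3D scene
    speech_other: np.ndarray     # other ego's speech as heard by this ego
    pose_other: tuple            # other ego's pose relative to this ego

def within_visual_budget(decisions: Sequence[int], B: int) -> bool:
    """decisions holds the 0/1 capture actions for the 2*(T-1) optional
    frames (both egos, every step after the first, unfolding in sequence).
    The policy must keep the number of captured frames within B, B >= 1."""
    return B >= 1 and sum(decisions) <= B
```

For example, with T = 5 and B = 2, the policy may turn on a camera at most twice across the 8 optional frame slots.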
4. Approach

We solve the task by learning a model that estimates the scene map given the egos' audio-visual observations and also sequentially decides when to sample visual frames for mapping given the audio stream, ego poses, and previously sampled frames, if any. Here, "sampling" refers to individually deciding for each ego whether to use its camera or not to capture the visuals at every step of its trajectory in the scene. The sampling is preemptive in nature, i.e., the policy selects or skips a frame without capturing it first.

Our model has two main components (see Fig. 2): (1) a shared scene mapper, and (2) a visual sampling policy. At every step t, the shared mapper has two functions. First, it estimates the map of a previously unseen environment by exploiting the shared spatial cues in the audio-visual observations of the two egos. Second, it informs the policy about the utility of sampling a certain visual frame. Guided by the mapper, the policy samples only the most informative visual frames that can boost mapping significantly over using just audio. Note that, unlike the visuals, we observe audio continuously as it is less resource-intensive vis-a-vis storage and power requirements for processing [2].

We learn our task through the synergy of the mapper and the policy, such that under the constraint of a limited visual budget B, our model implicitly understands which visual frames are critical for mapping.

First, we describe the steps involved to prepare our model inputs (Sec. 4.1). Next, we introduce our visual sampling policy (Sec. 4.2) and shared scene mapper (Sec. 4.3). Finally, we present model training details (Sec. 4.4). Through the rest of the text, we use separate notations to distinguish the egos' observations O (i.e., what the egos receive from the environment) from our model inputs O (i.e., what we capture and feed to our model for efficient mapping).

Figure 2. Our model has two main components: a) a visual sampling policy (left), and b) a shared scene mapper (right). At each step, our policy receives the current audio along with the previous audio(-visual) observations for the egos and decides for each ego individually whether to capture its visual frame at the current step. As per the policy predictions, the shared mapper conditionally uses the current visual frame(s) and audio along with the past audio(-visual) observations to predict the occupancy map of the scene, a ground-plane map showing where obstacles and freespace are (shown in green and white).

4.1. Model input preparation

We prepare our model inputs by separately preprocessing the visual and audio modalities. If our policy decides to sample an image V, we transform it into V = (V^R, V^M). V^R denotes the normalized RGB image with pixel values ∈ [0, 1]. V^M denotes the 90° FoV topdown occupancy map created by projecting the depth image. To do the depth projection, we first backproject it into the world coordinates using the camera's intrinsic parameters to compute the local visible scene's 3D point cloud. Next, we project these points to obtain a two-channel binary topdown map of size h × w × 2, where the first channel of the map reveals occupied/free areas, and the second channel reveals seen/unseen areas. If our policy skips V, we set V^R and V^M to all-zero matrices of the appropriate size.
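The backprojection-and-projection step above can be sketched as follows. This assumes a pinhole camera model; the grid size, cell size, and the obstacle-height threshold `h_max` are illustrative choices, not the paper's settings.

```python
import numpy as np

def depth_to_topdown(depth, fx, fy, cx, cy, grid=(64, 64), cell=0.1, h_max=1.5):
    """Backproject a depth image with pinhole intrinsics (fx, fy, cx, cy)
    into a point cloud, then project it to an h x w x 2 binary topdown map:
    channel 0 marks occupied cells, channel 1 marks seen cells."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx                      # camera-frame coordinates
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                   # keep valid depth readings
    h, w = grid
    top = np.zeros((h, w, 2), dtype=np.uint8)
    col = (pts[:, 0] / cell + w / 2).astype(int)   # lateral offset -> column
    row = (pts[:, 2] / cell).astype(int)           # distance -> row
    ok = (row >= 0) & (row < h) & (col >= 0) & (col < w)
    row, col, height = row[ok], col[ok], pts[ok, 1]
    top[row, col, 1] = 1                           # every projected point is "seen"
    obst = height < h_max                          # crude obstacle test on the vertical axis
    top[row[obst], col[obst], 0] = 1               # mark those cells occupied
    return top
```

By construction every occupied cell is also a seen cell, mirroring the two-channel map described above.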
For a speech waveform S, we calculate the short-time Fourier transform (STFT) magnitude spectrogram, denoted by S, of size F × T × C, where F, T, and C are the number of frequency bins, time windows, and ambisonic microphone channels, respectively. Lastly, we normalize each pose P_i,t to be relative to P_1,1. See Sec. 5 and Supp. Sec. 7.6 for more details.
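The spectrogram computation can be sketched per microphone channel as below; the Hann window, FFT size, and hop length are assumed values (the paper defers its exact STFT parameters to Sec. 5 and the supplementary).

```python
import numpy as np

def stft_magnitude(wave, n_fft=512, hop=160):
    """Magnitude spectrogram (F x T) of a single-channel waveform; stacking
    one such slice per ambisonic channel gives the F x T x C input S.
    Window and hop values here are illustrative."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1)).T   # F = n_fft // 2 + 1
```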
4.2. Visual sampling policy

At every step t, our visual sampling policy π_V (Fig. 2, left) receives O^π(t) as input and makes the decision to either capture or skip the visual frame V_i,t for each ego E_i. O^π(t) comprises the visual cue from the last step along with the speech cues and the poses from the current step and the last step for both egos. Formally, O^π(t) = {O^π_i(t) : i = 1, 2}, where O^π_i(t) = {V_i,t−1, S_i,j, P_i,j, S′_i,j, P′_i,j : j = t − 1 . . . t}. The policy first uses an encoder network to generate a multi-modal embedding of O^π(t), and then passes the embedding to a policy network that makes a sampling decision per ego. At t = 1, as per our problem definition (Sec. 3), the policy always chooses to sample the visual frames for both egos, i.e., the cameras are initially on.

Multi-modal policy embedding. To process ego E_i's visual input V_i,t−1 from the last step, we encode the RGB image V^R_i,t−1 and map V^M_i,t−1 with separate CNNs. We then concatenate the two features to generate the visual embedding v_i,t−1. To encode the pose inputs {P_i,t−1, P′_i,t−1, P_i,t, P′_i,t}, we use a linear layer and generate pose embeddings {p_i,t−1, p′_i,t−1, p_i,t, p′_i,t}. We process the speech inputs {S_i,t−1, S′_i,t−1, S_i,t, S′_i,t} using another CNN and create speech embeddings {s_i,t−1, s′_i,t−1, s_i,t, s′_i,t}. Next, we fuse the visual, speech, and pose embeddings using linear layers (see Fig. 2, left, for details) to obtain the multi-modal policy embedding e_i,t for E_i. Finally, we fuse the policy embeddings for the two egos, e_1,t and e_2,t, with a linear layer to produce the multi-modal policy embedding e_t.

The visual, audio, and pose inputs carry complementary cues required for efficient visual sampling. Whereas the pose inputs from the last and current steps explicitly reveal the viewpoint change between the steps, the previous and current speech inputs provide information about the changes in the local and global scene structures as a function of the previously sampled visual inputs, which together suggest the value of sampling a visual frame at the current step. Furthermore, guided by our training reward (below in Sec. 4.4), the previously observed visual frames and audio together enable our policy to anticipate the current frames and skip them if they are deemed redundant, thereby improving mapping accuracy for a low visual budget.

Policy network. The policy network consists of a GRU that estimates an updated history h_t along with the current state representation g_t, using the fused embedding e_t and the history of states h_t−1. An actor-critic module takes g_t and h_t−1 as inputs and predicts a policy distribution π_θ(a_i,t | g_t, h_t−1) per ego along with the value of the state H_θ(g_t, h_t−1) (θ are the policy parameters). The policy samples an action a_i,t ∈ {0, 1} for every E_i; a_i,t = 1 corresponds to selecting V_i,t, and a_i,t = 0 otherwise.
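A minimal numerical sketch of this recurrent actor-critic step is below. The GRU equations are the standard ones; the random weights, head shapes, and the simplification of taking g_t to be the new hidden state are assumptions layered on the description above, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_step(e, h_prev, W, U, b):
    """One GRU update: fused embedding e_t and history h_{t-1} -> h_t.
    We also return h_t as the state representation g_t for simplicity."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    z = sig(W[0] @ e + U[0] @ h_prev + b[0])              # update gate
    r = sig(W[1] @ e + U[1] @ h_prev + b[1])              # reset gate
    n = np.tanh(W[2] @ e + U[2] @ (r * h_prev) + b[2])    # candidate state
    h = (1.0 - z) * n + z * h_prev
    return h, h                                           # (g_t, h_t)

def actor_critic(g, h_prev, actor_w, critic_w):
    """Per-ego Bernoulli capture/skip action a_{i,t} in {0, 1} plus a
    scalar state value, computed from (g_t, h_{t-1})."""
    x = np.concatenate([g, h_prev])
    probs = 1.0 / (1.0 + np.exp(-(actor_w @ x)))          # one probability per ego
    actions = (rng.random(probs.shape) < probs).astype(int)
    value = float(critic_w @ x)
    return actions, probs, value
```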
626 |
+
4.3. Shared scene mapper
Whereas Oπ(t) denotes our policy input (Sec. 4.2), OM(t) denotes the input to our shared scene mapper fM at step t, such that OM(t) = {(Vi,j, Si,j, S′i,j, Pi,j, P′i,j) : i = 1 . . . 2, j = 1 . . . t}. fM starts by embedding each component of OM(t) using a separate network. This is followed by a multi-modal memory that stores the embeddings since the start of the episode. Finally, a transformer [76] predicts an estimate ˜M(t) of the scene map conditioned on the multi-modal memory and the egos' poses in the episode.
Multi-modal mapper embedding. For the visual input Vi,j, we encode V^R i,j and V^M i,j using separate CNNs and do a channel-wise concatenation to get visual features ˆvi,j. Similarly, speech is encoded using separate CNNs to get ˆsi,j and ˆs′i,j. Each of ˆv, ˆs and ˆs′ is of size 4 × 4 × 1024.
For both vision and speech, we compute two positional embeddings, pI and pII. They encode, respectively, the pose of the egos in the 3D space, and the index of each 1024-dimensional feature in the visual or speech features in raster order. Whereas pI helps discover spatial cues as a function of the egos' location in the 3D scene, pII enables our model to attend to different modalities in a more fine-grained manner. For both, we compute an 8-dimensional sinusoidal positional encoding [76] and then pass it through a linear layer to obtain a 1024-dimensional embedding. For pII, we additionally repeat this process for every feature index in the raster order. Lastly, we reshape pI and add it to pII to produce 4 × 4 × 1024-dimensional positional embeddings: ˆpi,j for ˆvi,j and ˆsi,j, and ˆp′i,j for ˆs′i,j.
Following [49], we also learn an embedding ˆmi,j ∈ {ˆmV, ˆmS, ˆmS′} to capture different modality types, where ˆmV represents vision, and ˆmS and ˆmS′ represent the speech from self and that of the other ego, respectively. The modality-based embeddings help our model differentiate between different modalities and better map the scene by learning complementary spatial cues from them.
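For illustration, here is a sketch of how pI and pII can be built; the 8-dimensional sinusoidal encoding follows [76], while the random projection matrix and the scalar pose input are simplified stand-ins for the learned linear layer and the full ego pose.

```python
import numpy as np

def sinusoidal_encoding(pos, dim=8):
    """dim-dimensional sinusoidal positional encoding as in [76]."""
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))
    angles = pos * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 8)) * 0.02      # stand-in for the learned linear layer

# p_I: one encoding for the ego's pose (here a scalar pose value for brevity)
p_I = W @ sinusoidal_encoding(3.0)                               # (1024,)
# p_II: one encoding per feature index in raster order (4 x 4 grid)
p_II = np.stack([W @ sinusoidal_encoding(k)
                 for k in range(16)]).reshape(4, 4, 1024)
# reshape p_I and add it to p_II -> final 4 x 4 x 1024 positional embedding
p_hat = p_I.reshape(1, 1, 1024) + p_II
```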
Multi-modal memory. For the visual input Vi,j, we add its embedding ˆvi,j to its positional embedding ˆpi,j and modality embedding ˆmV i,j, and flatten the sum to get a 16 × 1024-dimensional embedding. Similarly, we fuse the speech embeddings by taking their sum and flattening it. This generates a multi-modal memory of fused embeddings o, such that o = {oV 1,1, . . . , oV 2,t, oS 1,1, . . . , oS 2,t, oS′ 1,1, . . . , oS′ 2,t}.
Occupancy prediction. To predict the underlying scene occupancy, we first use a transformer encoder [76] to attend to the embeddings in o and capture short- and long-range correlations within and across modalities using a stack of self-attention layers. This generates an audio-visual representation that models the spatial layout of the 3D scene. Next, we use a transformer decoder [76] to perform cross-attention on the audio-visual representation of the scene, conditioned on the embedding ˆpi,j for every pose Pi,j in OM(t), and generate an embedding di,j for the pose. Finally, we upsample di,j using a multi-layer network U comprising transpose convolutions with a sigmoid layer at the end to predict an estimate ˜Mi,j of the ground-truth local 360° FoV map for the pose, Mi,j. Both Mi,j and its estimate ˜Mi,j are two-channel binary occupancy maps of size H × W. To obtain the estimated map ˜M(t) for the scene, we register each prediction ˜Mi,j onto a larger shared map using the pose Pi,j and threshold the final shared map at 0.5 (see Supp. Sec. 7.6 for map registration details). Importantly, the shared map allows communication between both egos' data streams for more informed mapping and sampling, as we show in the results.
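The registration step can be sketched as follows, under simplifying assumptions (a single map channel, axis-aligned integer poses, and averaging of overlapping predictions; see the paper's Supp. Sec. 7.6 for the actual procedure):

```python
import numpy as np

def register_local_maps(local_maps, poses, shared_hw=(64, 64)):
    """Average overlapping local occupancy predictions on a shared top-down
    grid, then threshold at 0.5 (simplified: axis-aligned integer poses)."""
    acc = np.zeros(shared_hw)
    cnt = np.zeros(shared_hw)
    for m, (r, c) in zip(local_maps, poses):   # (r, c): top-left cell of the crop
        h, w = m.shape
        acc[r:r + h, c:c + w] += m
        cnt[r:r + h, c:c + w] += 1
    prob = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return (prob >= 0.5).astype(np.uint8)
```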
4.4. Model training
Policy training. We propose a novel dense RL reward to train the policy πV:

    r(t) = ∆Q(t) − η ∗ ρ(t).

∆Q(t) measures the improvement in mapping from taking actions {ai,t : i = 1 . . . 2} over not sampling any visual frame at step t. ρ(t) is a penalty term to discourage sampling a frame from the same pose more than once, which we weight by η. We define ∆Q(t) as

    ∆Q(t) = Q(˜M(t) | OM(t)) − Q(˜M(t) | (OM(t) \ Vt)),

where Q is a map quality measure, Q(X | Y) represents the quality of map estimate X given inputs Y, and (OM(t) \ Vt) denotes the mapper inputs devoid of any visual frame for the current step. We define ρ(t) as

    ρ(t) = Σ_{i=1...2} ai,t ∗ 1(Vi,t ∈ OM(t − 1)),

where the indicator function checks if Vi,t was used in mapping before. While ∆Q(t) incentivizes sampling frames that provide a big boost to the mapping accuracy over skipping them, ρ(t) penalizes wasting the visual budget on redundant sampling, thereby maximizing mapping performance within the constraints of a limited budget. We set η = 0.03 in all our experiments and define Q as the average F1 score over the occupied and free classes in a predicted occupancy map.
We train πV with Decentralized Distributed PPO (DD-PPO) [77]. The DD-PPO loss consists of a value loss, a policy loss, and an entropy loss to promote exploration (see Supp. Sec. 7.8.4 for details).
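Under the stated definition of Q (mean F1 over the occupied and free classes), the reward can be sketched as follows; the map arrays and the per-ego flags are placeholders for the mapper's outputs and the pose-revisit check.

```python
import numpy as np

def f1_per_class(pred, gt, cls):
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    return 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

def map_quality(pred, gt):
    """Q: mean F1 over the free (0) and occupied (1) classes."""
    return 0.5 * (f1_per_class(pred, gt, 0) + f1_per_class(pred, gt, 1))

def reward(pred_with_v, pred_without_v, gt, actions, revisited, eta=0.03):
    """r(t) = dQ(t) - eta * rho(t); `revisited[i]` flags that ego i's
    current frame comes from a pose already used in mapping."""
    delta_q = map_quality(pred_with_v, gt) - map_quality(pred_without_v, gt)
    rho = sum(a * int(rv) for a, rv in zip(actions, revisited))
    return delta_q - eta * rho
```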
Mapper training. At each step t, we train the shared mapper fM with a loss LM(t), such that

    LM(t) = (1 / (2 × t)) Σ_{i=1...2} Σ_{j=1...t} BCE(˜Mi,j, Mi,j),

where BCE(˜Mi,j, Mi,j) is the average binary cross-entropy loss between ˜Mi,j and Mi,j.
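A direct sketch of LM(t), assuming predictions and targets are supplied as per-ego, per-step arrays:

```python
import numpy as np

def bce(pred, gt, eps=1e-7):
    """Average binary cross-entropy between a predicted and target map."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(gt * np.log(p) + (1 - gt) * np.log(1 - p))))

def mapper_loss(preds, gts):
    """L_M(t): mean BCE over both egos and all steps j = 1..t.
    `preds[i][j]`, `gts[i][j]`: maps for ego i at step j."""
    t = len(preds[0])
    total = sum(bce(preds[i][j], gts[i][j]) for i in range(2) for j in range(t))
    return total / (2 * t)
```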
Training curriculum. To train our model, we first pretrain the mapper fM in two phases, and then train the policy πV while keeping fM frozen. In phase 1, we train fM without visual sampling, i.e., all visual frames are provided at each step. In phase 2, we finetune the pretrained weights of fM from phase 1 on episodes where we randomly drop views to satisfy the budget B. While phase 1 improves convergence when training with visual sampling, phase 2 helps with reward stationarity when training our RL policy.
5. Experiments
Experimental setup. For our main experiments, we use SoundSpaces [8] acoustic simulations with AI-Habitat [63] and Matterport3D [3] visual scenes. While Matterport3D provides dense 3D meshes and image scans of real-world houses and other indoor scenes, SoundSpaces provides room impulse responses (RIRs) at a spatial resolution of 1 m for Matterport3D that model all real-world acoustic phenomena [8]. This setup allows us to evaluate with as many as 83 scenes, split 56/10/17 for train/val/test, compare against relevant prior work [59, 60], and report reproducible results.
We also collect real-world data in a mock-up apartment due to the absence of a publicly available alternative suited for our task. We capture a dense set of RGB images using a Samsung S22 camera and generate the corresponding depth images using monocular depth estimation [22, 38]. To compute the RIRs, following [25], we generate a sinusoidal sweep sound from 20 Hz–20 kHz with a loudspeaker at the source location, capture it with an Eigenmike at a receiver location, and convolve the spatial sound with the inverse of the sweep sound to retrieve the RIR. All capturing devices are placed at a height of 1.5 m. We generate occupancy maps by back-projecting the depth images (cf. Sec. 4.1) and register them onto a shared topdown map before taking egocentric crops to generate the local occupancy inputs and targets.
Note that both datasets have real-world visuals as they are captured in real environments; SoundSpaces has simulated audio while the apartment data has real-world collected audio RIRs.
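For illustration, the sweep-based RIR recovery can be sketched as a frequency-domain deconvolution on synthetic signals; the actual pipeline of [25] uses the amplitude-compensated inverse sweep and real Eigenmike recordings, and all signals below are toy stand-ins.

```python
import numpy as np

fs = 8000                                   # stand-in sample rate
t = np.arange(0, 0.5, 1 / fs)
sweep = np.sin(2 * np.pi * (100 + 900 * t) * t)   # toy linear sweep, ~100-1000 Hz

# Synthetic "room": a direct path plus one weaker echo 50 samples later
true_ir = np.zeros(200)
true_ir[10] = 1.0
true_ir[60] = 0.4
recorded = np.convolve(sweep, true_ir)      # what the microphone would capture

# Deconvolve the sweep in the frequency domain to recover the RIR
n = len(recorded)
H = np.fft.rfft(recorded, n) / (np.fft.rfft(sweep, n) + 1e-12)
ir_est = np.fft.irfft(H, n)[:200]
```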
Conversation episode. For each episode (both simulation and real), we randomly place the two egos in a scene. The episode length is T = 16 for simulation and T = 8 for the real world. At each step, the egos execute a movement from A = {MoveForward, TurnLeft, TurnRight}, where MoveForward moves an ego forward by 1 m and the Turn actions rotate the ego by 90°. Further, at every step either one of the egos speaks or both speak, each case with equal probability of 1/3, i.e., there are no moments of silence. The egos stay between 1–3 m from each other so that they don't collide and so that each ego is audible to the other at all times. This results in train/val splits of 1,955,334/100 episodes in simulation, and a simulated/real-world test split of 1000/27 episodes. The visual budget is B = 2 for our main experiments (see Supp. Sec. 7.3 for B = 4, 6 evaluations). Note that these episodes are simply to generate video data; our task requires processing passive video, not controlling embodied agents.
Observations and model output. For the occupancy maps, we generate 31 × 31 × 2-dimensional input maps that cover 3.1 × 3.1 m² in area [4, 11, 60] at a resolution of 0.1 m, and set the local target map size to H × W = 6.4 × 6.4 m² (∼41 m²). For speech, we use 100 distinct speakers from LibriSpeech [55], split 80/11 for heard/unheard, where unheard speech is only used in testing. We assume access to correct camera poses, since modern AR devices are equipped with motion sensors that can robustly estimate relative poses [46]. We test our robustness to ambient sounds that get mixed with the egos' speech, and incorporate odometry noise models [59, 60] (see Supp. Sec. 7.4).
Evaluation settings. We evaluate our model in two settings: 1) passive mapping, where the mapper has access to all visual frames in an episode (i.e., the camera is always on), and 2) active mapping, where the mapping agent has to actively sample frames to meet the visual budget B. This helps disentangle our modeling contributions: whereas passive mapping lets us show improvements in the mapper fM over existing methods [59, 60], active mapping helps demonstrate the benefits of smart visual sampling.
We use standard evaluation metrics [60]: F1 score and IoU (intersection over union) between the predicted and target scene maps. For both metrics, we report the mean over the free and occupied classes. For active mapping, we average the metrics over 3 random seeds. We use the following baselines to compare our model's efficacy.
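The two metrics, averaged over the free and occupied classes, can be sketched as:

```python
import numpy as np

def class_stats(pred, gt, cls):
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    return tp, fp, fn

def mean_f1_iou(pred, gt):
    """Mean F1 and IoU over the free (0) and occupied (1) map classes."""
    f1s, ious = [], []
    for cls in (0, 1):
        tp, fp, fn = class_stats(pred, gt, cls)
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0)
        ious.append(tp / (tp + fp + fn) if (tp + fp + fn) else 1.0)
    return float(np.mean(f1s)), float(np.mean(ious))
```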
Passive mapping:
• All-occupied: a naive baseline that predicts all locations in its map estimate as occupied
• Register-inputs: a naive baseline that registers the input maps onto a shared map and uses it as its prediction

                           Simulation          Real world
Model                      F1 ↑    IoU ↑      F1 ↑    IoU ↑
All-occupied               63.4    48.8       36.2    23.8
Register-inputs            72.6    60.1       50.8    35.0
OccAnt [60]                74.5    62.7       53.9    38.3
AV-Floorplan [59]          79.3    67.9       54.5    38.7
Ours                       81.8    71.4       55.5    39.2
Ours w/o vision            72.8    60.3       50.8    35.0
Ours w/o audio             78.1    66.7       54.1    38.0
Ours w/o E′i's speech      81.5    70.9       55.4    39.1
Ours w/o shared mapping    80.7    70.0       54.9    38.6

Table 1. Passive mapping performance (%).

Model                      F1 ↑    IoU ↑
All-occupied               63.4    48.8
Register-inputs            72.6    60.1
OccAnt [60]                74.5    62.7
AV-Floorplan [59]          78.7    67.5
Ours                       81.9    71.5
Ours w/o vision            73.5    61.2
Ours w/o audio             78.1    66.7
Ours w/o E′i's speech      81.5    70.9
Ours w/o shared mapping    80.0    69.1

Table 2. Passive mapping performance (%) with ambient sounds.
• OccAnt [60]: a vision-only SOTA model that uses the RGB-D images at each step to anticipate the occupancy of the area around an ego that's outside its visible range.
• AV-Floorplan [59]: an audio-visual SOTA model that passively predicts the floorplan of a scene using a walk-through in it, where the audio is either self-generated or comes from semantic sources in the scene. We adapt the model for our occupancy prediction task and give it the exact same audio-visual observations as our model.
Active mapping:
• Random: an agent that selects visual frames randomly for each ego as long as the budget allows
• Greedy: an agent that greedily uses up the visual budget by sampling frames as early as possible
• Unique-pose: an agent that samples a frame for every new ego pose in the episode
In active mapping, we use the model from the second pretraining phase (Sec. 4.4) as the mapper for all models for fair comparison. Thus, any difference in performance is due to the quality of each method's sampling decisions.
See Supp. for all other details like network architectures and training hyperparameters (Sec. 7.8), and baseline implementation (Sec. 7.7).
5.1. Map prediction results

Figure 3. Active mapping performance vs. episode step: (a) Simulation, (b) Real world. [Plots of mean F1 score (%) vs. episode step for Random, Unique-pose, Greedy, Ours w/o audio for πV, and Ours.]

Figure 4. (a) Effect of ambient environment sounds on active mapping. (b) Impact of the other ego E′i's speech on passive mapping vs. distance between the egos. [Plots of mean F1 score (%) vs. episode step, and of mean F1/IoU (%) for inter-ego distance thresholds of 1–3, 3–5, 5–7, and 7–9 m.]

Passive mapping. Table 1 (top) reports the prediction quality of all models in the passive mapping setting. Naive baselines (All-occupied, Register-inputs) perform worse than the learned models, showing the complexity of our map prediction task. AV-Floorplan [59] fares the best among all baselines. Its improvement over OccAnt [60] demonstrates the benefits of exploiting the spatial cues in audio for mapping and of using an attention-based model to leverage the long- and short-range correlations in the audio-visual inputs.
Our method outperforms all baselines. Its improvement over AV-Floorplan [59] underlines the efficacy of performing attention at different granularities (across modalities, within a single modality, and within a single input), guided by our positional and modality type embeddings. It also generalizes to the real-world setting and retains its benefits over the baselines, even without retraining on the real-world data. However, we do observe a drop in performance gains, probably due to the large sim-to-real gap.
Active mapping. Fig. 3 shows the active mapping performance as a function of episode progress. Employing naive heuristics for sampling, like Random or Greedy, is not enough for high-quality mapping, which emphasizes the high levels of redundancy in the visual frames. Unique-pose improves over both Random and Greedy, showing that sampling diverse viewpoints provides more information about the underlying scene geometry.
Even though the baselines make progress initially, they flatten quickly, and our model eventually outperforms them all, on both real-world and simulated data. This highlights the benefits of learning a smart policy that, given the audio streams and its visual samples from the past, understands the value of sampling a visual frame for mapping by taking cues from our novel reward. Moreover, on the real-world data, we see improved performance margins over the baselines towards the end of episodes, showing that our policy can adaptively postpone visual sampling to improve mapping. Owing to our smart sampling, the per-episode reduction in processing for B = 2 is 7.2 GFLOPS in simulation and 3.6 GFLOPS for the real-world data.

Figure 5. Sample episodes for our active mapping model. While our policy samples only the salient visual frames, our mapper can both complete partially seen objects as well as anticipate objects never seen before in the sampled visuals (red boxes on the maps). [Two example episodes, each showing four sampled views and the predicted map, with a legend for correct/incorrect predictions of occupied and free space, seen/unseen regions, and sampled/skipped views.]
5.2. Model analysis
Ablations. In Table 1 (bottom), we ablate the components of our model for passive mapping. Upon removing audio, our model experiences a large drop in mapping performance, which indicates that our model leverages complementary spatial cues in audio and vision. We also see a drop in map quality when our model doesn't have access to the speech from the other ego (E′i). This shows that E′i's speech can better reveal the more global scene geometry than Ei's own speech. Fig. 4b further shows that the impact of the other ego's speech becomes more prominent for larger inter-ego distances (3–5 m vs. 1–3 m), in which case the two types of speech are dissimilar enough to carry complementary geometric cues, but reduces for even larger distances (5 m or more), in which case E′i is too far for its speech to carry useful cues about Ei's local scene geometry. Moreover, unlike the ablation that doesn't perform shared mapping, our model benefits significantly from jointly attending to the observations of the egos and exploiting the complementary information in them, even though both models use the exact same audio-visual observations, including both the speech from self and the other ego.
For active mapping, Fig. 3 shows a drop in mapping performance upon removing audio from the policy inputs. This implies that our policy exploits audio to reason about the level of redundancy in a new visual frame and improve the mapping quality vs. visual budget tradeoff. In the more challenging real-world setting, audio plays an even bigger role, as shown by the larger performance drop in Fig. 3b.
See Supp. for similar results with 1) unheard speech (Sec. 7.2), 2) higher values of the budget B (Sec. 7.3), 3) sensor noise (Sec. 7.4), and 4) larger target map sizes (Sec. 7.5).
Ambient and background sounds. We also test our model's robustness to ambient and background sounds by inserting a non-speech sound (e.g., a running AC, a barking dog, etc.) at a random location outside the egos' trajectories. Although quite challenging, our model performs better than the baselines for both passive (Table 2) and active mapping (Fig. 4a). Hence, even without explicit audio separation, our model is able to implicitly ground its audio representations in the corresponding pose features for accurate mapping.
Qualitative results. Fig. 5 shows two successful active mapping episodes of our method. Note how our model samples views that tend to have little visual overlap but are informative of the surrounding geometry (both occupied and free spaces). Besides, it is able to complete structures only partially visible in the sampled views and, more interestingly, leverage the synergy of audio and vision to anticipate unseen areas (red boxes on the occupancy maps in Fig. 5).
Failure cases. We notice two common failure cases with active mapping: episodes where the people stay at the same location, leading to very few informative visual frames to sample from; and episodes with highly unique visual samples at every trajectory step, in which case each sample is useful and our model behaves similarly to Unique-pose or Greedy. For passive mapping, our model fails with very complex scenes that commonly have objects in spaces where both vision and audio can't reach (e.g., narrow corners).
6. Conclusion
We introduce Chat2Map, a new task aimed at scene mapping using audio-visual feeds from egocentric conversations. We develop a novel approach for Chat2Map, composed of a shared scene mapper and a visual sampling policy based on a novel reinforcement learner that smartly samples the visuals only when necessary. We show promising performance on both simulated and real-world data from over 80 environments.
References
[1] Ego-noise predictions for echolocation in wheeled robots. In ALIFE 2019: The 2019 Conference on Artificial Life, 2019.
[2] Aaron Carroll and Gernot Heiser. An analysis of power consumption in a smartphone. In 2010 USENIX Annual Technical Conference (USENIX ATC 10), 2010.
[3] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158, 2017.
[4] Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, and Ruslan Salakhutdinov. Learning to explore using active neural SLAM. In International Conference on Learning Representations, 2020.
[5] Devendra Singh Chaplot, Dhiraj Prakashchand Gandhi, Abhinav Gupta, and Russ R. Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. Advances in Neural Information Processing Systems, 33:4247–4258, 2020.
[6] Changan Chen, Ziad Al-Halah, and Kristen Grauman. Semantic audio-visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15516–15525, 2021.
[7] Changan Chen, Ruohan Gao, Paul T. Calamia, and Kristen Grauman. Visual acoustic matching. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18836–18846, 2022.
[8] Changan Chen, Unnat Jain, Carl Schissler, Sebastia Vicenc Amengual Gari, Ziad Al-Halah, Vamsi Krishna Ithapu, Philip Robinson, and Kristen Grauman. SoundSpaces: Audio-visual navigation in 3D environments. In ECCV, 2020.
[9] Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh Kumar Ramakrishnan, and Kristen Grauman. Learning to set waypoints for audio-visual navigation. In International Conference on Learning Representations, 2021.
[10] Jiacheng Chen, Chen Liu, Jiaye Wu, and Yasutaka Furukawa. Floor-SP: Inverse CAD for floorplans by sequential room-wise shortest path. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2661–2670, 2019.
[11] Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning exploration policies for navigation. arXiv preprint arXiv:1903.01959, 2019.
[12] Yangyu Chen, Shuhui Wang, Weigang Zhang, and Qingming Huang. Less is more: Picking informative frames for video captioning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 358–373, 2018.
[13] Sungjoon Choi, Qian-Yi Zhou, and Vladlen Koltun. Robust reconstruction of indoor scenes. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5556–5565, 2015.
[14] Jesper Christensen, Sascha Hornauer, and Stella Yu. BatVision - learning to see 3D spatial layout with two ears. In ICRA, 2020.
[15] Jesper Haahr Christensen, Sascha Hornauer, and Stella X. Yu. BatVision: Learning to see 3D spatial layout with two ears. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 1581–1587, 2020.
[16] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[17] Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael G. Rabbat, and Joelle Pineau. TarMAC: Targeted multi-agent communication. ArXiv, abs/1810.11187, 2019.
[18] Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh. Episodic memory question answering. 2022.
[19] Victoria Dean, Shubham Tulsiani, and Abhinav Gupta. See, hear, explore: Curiosity via audio-visual association. Advances in Neural Information Processing Systems, 33:14961–14972, 2020.
[20] Helisa Dhamo, Nassir Navab, and Federico Tombari. Object-driven multi-layer scene decomposition from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5369–5378, 2019.
[21] Ivan Dokmanić, Reza Parhizkar, Andreas Walther, Yue M. Lu, and Martin Vetterli. Acoustic echoes reveal room shape. Proceedings of the National Academy of Sciences, 110(30):12186–12191, 2013.
[22] Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir. Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3D scans. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10786–10796, 2021.
[23] Amine Elhafsi, Boris Ivanovic, Lucas Janson, and Marco Pavone. Map-predictive motion planning in unknown environments. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 8552–8558. IEEE, 2020.
[24] Itamar Eliakim, Zahi Cohen, Gábor Kósa, and Yossi Yovel. A fully autonomous terrestrial bat-like acoustic robot. PLoS Computational Biology, 14, 2018.
[25] Angelo Farina. Simultaneous measurement of impulse response and distortion with a swept-sine technique. Journal of the Audio Engineering Society, 2000.
[26] Chuang Gan, Jeremy Schwartz, Seth Alter, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Damian Mrowca, Michael Lingelbach, Aidan Curtis, Kevin T. Feigelis, Daniel M. Bear, Dan Gutfreund, David D. Cox, James J. DiCarlo, Josh H. McDermott, Joshua B. Tenenbaum, and Daniel L. K. Yamins. ThreeDWorld: A platform for interactive multi-modal physical simulation. In NeurIPS Track on Datasets and Benchmarks, 2021.
[27] Chuang Gan, Yiwei Zhang, Jiajun Wu, Boqing Gong, and Joshua B. Tenenbaum. Look, listen, and act: Towards audio-visual embodied navigation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9701–9707, 2020.
[28] Ruohan Gao, Changan Chen, Ziad Al-Halah, Carl Schissler, and Kristen Grauman. VisualEchoes: Spatial image representation learning through echolocation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 658–676, Cham, 2020. Springer International Publishing.
[29] Ruohan Gao, Tae-Hyun Oh, Kristen Grauman, and Lorenzo Torresani. Listen to look: Action recognition by previewing audio. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10454–10464, 2020.
[30] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei
|
1383 |
+
Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kot-
|
1384 |
+
tur, Anurag Kumar, Federico Landini, Chao Li, Yanghao
|
1385 |
+
Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu,
|
1386 |
+
Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will
|
1387 |
+
Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari,
|
1388 |
+
Kiran Somasundaram, Audrey Southerland, Yusuke Sugano,
|
1389 |
+
Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi,
|
1390 |
+
Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen,
|
1391 |
+
Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna
|
1392 |
+
Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li,
|
1393 |
+
Richard Newcombe, Aude Oliva, Hyun Soo Park, James M.
|
1394 |
+
Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio
|
1395 |
+
Torralba, Lorenzo Torresani, Mingfei Yan, and Jitendra Ma-
|
1396 |
+
lik. Ego4d: Around the World in 3,000 Hours of Egocentric
|
1397 |
+
Video. In IEEE/CVF Computer Vision and Pattern Recogni-
|
1398 |
+
tion (CVPR), 2022. 1
|
1399 |
+
[31] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
|
1400 |
+
Delving deep into rectifiers: Surpassing human-level per-
|
1401 |
+
formance on imagenet classification. In Proceedings of the
|
1402 |
+
IEEE international conference on computer vision, pages
|
1403 |
+
1026–1034, 2015. 16
|
1404 |
+
[32] Sergey Ioffe and Christian Szegedy. Batch normalization:
|
1405 |
+
Accelerating deep network training by reducing internal co-
|
1406 |
+
variate shift. In Francis Bach and David Blei, editors, Pro-
|
1407 |
+
ceedings of the 32nd International Conference on Machine
|
1408 |
+
Learning, volume 37 of Proceedings of Machine Learning Re-
|
1409 |
+
search, pages 448–456, Lille, France, 07–09 Jul 2015. PMLR.
|
1410 |
+
15, 16
|
1411 |
+
[33] Shariq Iqbal and Fei Sha. Coordinated exploration via in-
|
1412 |
+
trinsic rewards for multi-agent reinforcement learning. arXiv
|
1413 |
+
preprint arXiv:1905.12127, 2019. 2
|
1414 |
+
[34] Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke
|
1415 |
+
Marris, Guy Lever, Antonio García Castañeda, Charlie Beat-
|
1416 |
+
tie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruder-
|
1417 |
+
man, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z.
|
1418 |
+
Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu,
|
1419 |
+
and Thore Graepel. Human-level performance in 3d multi-
|
1420 |
+
player games with population-based reinforcement learning.
|
1421 |
+
Science, 364:859 – 865, 2019. 2
|
1422 |
+
[35] Unnat Jain, Luca Weihs, Eric Kolve, Ali Farhadi, Svetlana
|
1423 |
+
Lazebnik, Aniruddha Kembhavi, and Alexander G. Schwing.
|
1424 |
+
A cordial sync: Going beyond marginal policies for multi-
|
1425 |
+
agent embodied tasks. ArXiv, abs/2007.04979, 2020. 2
|
1426 |
+
[36] Unnat Jain, Luca Weihs, Eric Kolve, Mohammad Rastegari,
|
1427 |
+
Svetlana Lazebnik, Ali Farhadi, Alexander G. Schwing, and
|
1428 |
+
Aniruddha Kembhavi. Two body problem: Collaborative
|
1429 |
+
visual task completion. 2019 IEEE/CVF Conference on Com-
|
1430 |
+
puter Vision and Pattern Recognition (CVPR), pages 6682–
|
1431 |
+
6692, 2019. 2
|
1432 |
+
[37] Hao Jiang, Calvin Murdock, and Vamsi Krishna Ithapu. Ego-
|
1433 |
+
centric deep multi-channel audio-visual active speaker local-
|
1434 |
+
ization. 2022 IEEE/CVF Conference on Computer Vision and
|
1435 |
+
Pattern Recognition (CVPR), pages 10534–10542, 2022. 15
|
1436 |
+
[38] O˘guzhan Fatih Kar, Teresa Yeo, Andrei Atanov, and Amir
|
1437 |
+
Zamir. 3d common corruptions and data augmentation. In
|
1438 |
+
Proceedings of the IEEE/CVF Conference on Computer Vi-
|
1439 |
+
sion and Pattern Recognition, pages 18963–18974, 2022. 6
|
1440 |
+
[39] Kapil Katyal, Katie Popek, Chris Paxton, Phil Burlina, and
|
1441 |
+
Gregory D. Hager. Uncertainty-aware occupancy map pre-
|
1442 |
+
diction using generative networks for robot navigation. In
|
1443 |
+
2019 International Conference on Robotics and Automation
|
1444 |
+
(ICRA), pages 5453–5459, 2019. 2
|
1445 |
+
[40] Hansung Kim, Luca Remaggi, Philip JB Jackson, Fil-
|
1446 |
+
ippo Maria Fazi, and Adrian Hilton. 3d room geometry recon-
|
1447 |
+
struction using audio-visual sensors. In 2017 International
|
1448 |
+
Conference on 3D Vision (3DV), pages 621–629, 2017. 2
|
1449 |
+
[41] Diederik P Kingma and Jimmy Ba. Adam: A method for
|
1450 |
+
stochastic optimization.
|
1451 |
+
arXiv preprint arXiv:1412.6980,
|
1452 |
+
2014. 16
|
1453 |
+
[42] Bruno Korbar, Du Tran, and Lorenzo Torresani. Scsampler:
|
1454 |
+
Sampling salient clips from video for efficient action recogni-
|
1455 |
+
tion. 2019 IEEE/CVF International Conference on Computer
|
1456 |
+
Vision (ICCV), pages 6231–6241, 2019. 3
|
1457 |
+
[43] Jintao Lin, Haodong Duan, Kai Chen, Dahua Lin, and Limin
|
1458 |
+
Wang. Ocsampler: Compressing videos to one clip with
|
1459 |
+
single-step sampling. In Proceedings of the IEEE/CVF Con-
|
1460 |
+
ference on Computer Vision and Pattern Recognition, pages
|
1461 |
+
13894–13903, 2022. 3
|
1462 |
+
[44] David B. Lindell, Gordon Wetzstein, and Vladlen Koltun.
|
1463 |
+
Acoustic non-line-of-sight imaging. In 2019 IEEE/CVF Con-
|
1464 |
+
ference on Computer Vision and Pattern Recognition (CVPR),
|
1465 |
+
pages 6773–6782, 2019. 1, 2
|
1466 |
+
[45] Chen Liu, Jiaye Wu, and Yasutaka Furukawa. Floornet: A uni-
|
1467 |
+
fied framework for floorplan reconstruction from 3d scans. In
|
1468 |
+
Proceedings of the European conference on computer vision
|
1469 |
+
(ECCV), pages 201–217, 2018. 1, 2
|
1470 |
+
[46] Wenxin Liu, David Caruso, Eddy Ilg, Jing Dong, Anastasios I
|
1471 |
+
Mourikis, Kostas Daniilidis, Vijay Kumar, and Jakob Engel.
|
1472 |
+
10
|
1473 |
+
|
1474 |
+
Tlio: Tight learned inertial odometry. IEEE Robotics and
|
1475 |
+
Automation Letters, 5(4):5653–5660, 2020. 6
|
1476 |
+
[47] Sagnik Majumder, Ziad Al-Halah, and Kristen Grauman.
|
1477 |
+
Move2hear: Active audio-visual source separation. In Pro-
|
1478 |
+
ceedings of the IEEE/CVF International Conference on Com-
|
1479 |
+
puter Vision, pages 275–285, 2021. 2
|
1480 |
+
[48] Sagnik Majumder, Ziad Al-Halah, and Kristen Grauman. Ac-
|
1481 |
+
tive audio-visual separation of dynamic sound sources. In
|
1482 |
+
European Conference on Computer Vision. Springer, 2022. 2
|
1483 |
+
[49] Sagnik Majumder, Changan Chen, Ziad Al-Halah, and Kris-
|
1484 |
+
ten Grauman. Few-shot audio-visual learning of environment
|
1485 |
+
acoustics. In Thirty-Sixth Conference on Neural Information
|
1486 |
+
Processing Systems, 2022. 2, 5
|
1487 |
+
[50] Vinod Nair and Geoffrey E. Hinton. Rectified Linear Units
|
1488 |
+
Improve Restricted Boltzmann Machines. In Proceedings
|
1489 |
+
of the 27th International Conference on Machine Learning,
|
1490 |
+
pages 807–814. Omnipress, 2010. 15, 16
|
1491 |
+
[51] Medhini Narasimhan, Erik Wijmans, Xinlei Chen, Trevor Dar-
|
1492 |
+
rell, Dhruv Batra, Devi Parikh, and Amanpreet Singh. Seeing
|
1493 |
+
the un-scene: Learning amodal semantic maps for room navi-
|
1494 |
+
gation. In European Conference on Computer Vision, pages
|
1495 |
+
513–529. Springer, 2020. 2
|
1496 |
+
[52] Simon T O’Callaghan and Fabio T Ramos. Gaussian process
|
1497 |
+
occupancy maps*. Int. J. Rob. Res., 31(1):42–62, jan 2012. 2
|
1498 |
+
[53] Brian Okorn, Xuehan Xiong, and Burcu Akinci. Toward
|
1499 |
+
automated modeling of floor plans. In 3D PVT, 2009. 1, 2
|
1500 |
+
[54] Andrew Owens, Phillip Isola, Josh McDermott, Antonio Tor-
|
1501 |
+
ralba, Edward H. Adelson, and William T. Freeman. Visually
|
1502 |
+
indicated sounds. In 2016 IEEE Conference on Computer
|
1503 |
+
Vision and Pattern Recognition (CVPR), pages 2405–2413,
|
1504 |
+
2016. 2
|
1505 |
+
[55] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev
|
1506 |
+
Khudanpur. Librispeech: An asr corpus based on public
|
1507 |
+
domain audio books. In 2015 IEEE International Conference
|
1508 |
+
on Acoustics, Speech and Signal Processing (ICASSP), pages
|
1509 |
+
5206–5210, 2015. 6
|
1510 |
+
[56] Rameswar Panda, Chun-Fu Richard Chen, Quanfu Fan, Xi-
|
1511 |
+
meng Sun, Kate Saenko, Aude Oliva, and Rogerio Feris.
|
1512 |
+
Adamml: Adaptive multi-modal learning for efficient video
|
1513 |
+
recognition. In Proceedings of the IEEE/CVF International
|
1514 |
+
Conference on Computer Vision, pages 7576–7585, 2021. 3
|
1515 |
+
[57] Shivansh Patel, Saim Wani, Unnat Jain, Alexander G.
|
1516 |
+
Schwing, Svetlana Lazebnik, Manolis Savva, and Angel X.
|
1517 |
+
Chang. Interpretation of emergent communication in het-
|
1518 |
+
erogeneous collaborative embodied agents. 2021 IEEE/CVF
|
1519 |
+
International Conference on Computer Vision (ICCV), pages
|
1520 |
+
15993–15943, 2021. 2
|
1521 |
+
[58] Katharine Patterson, Kevin W. Wilson, Scott Wisdom, and
|
1522 |
+
John R. Hershey. Distance-based sound separation. In IN-
|
1523 |
+
TERSPEECH, 2022. 15
|
1524 |
+
[59] Senthil Purushwalkam, Sebastia Vicenc Amengual Gari,
|
1525 |
+
Vamsi Krishna Ithapu, Carl Schissler, Philip Robinson, Ab-
|
1526 |
+
hinav Gupta, and Kristen Grauman. Audio-visual floorplan
|
1527 |
+
reconstruction. In Proceedings of the IEEE/CVF Interna-
|
1528 |
+
tional Conference on Computer Vision, pages 1183–1192,
|
1529 |
+
2021. 1, 2, 3, 6, 7, 13, 14
|
1530 |
+
[60] Santhosh K Ramakrishnan, Ziad Al-Halah, and Kristen Grau-
|
1531 |
+
man. Occupancy anticipation for efficient exploration and
|
1532 |
+
navigation. In European Conference on Computer Vision,
|
1533 |
+
pages 400–418. Springer, 2020. 1, 2, 6, 7, 13, 14
|
1534 |
+
[61] Fabio Ramos and Lionel Ott. Hilbert maps: Scalable con-
|
1535 |
+
tinuous occupancy mapping with stochastic gradient descent.
|
1536 |
+
The International Journal of Robotics Research, 35(14):1717–
|
1537 |
+
1730, 2016. 2
|
1538 |
+
[62] João Machado Santos, David Portugal, and Rui P. Rocha. An
|
1539 |
+
evaluation of 2d slam techniques available in robot operating
|
1540 |
+
system. In 2013 IEEE International Symposium on Safety,
|
1541 |
+
Security, and Rescue Robotics (SSRR), pages 1–6, 2013. 2
|
1542 |
+
[63] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets,
|
1543 |
+
Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu,
|
1544 |
+
Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform
|
1545 |
+
for embodied ai research. In Proceedings of the IEEE/CVF
|
1546 |
+
International Conference on Computer Vision, pages 9339–
|
1547 |
+
9347, 2019. 6, 14
|
1548 |
+
[64] Andrew M. Saxe, James L. McClelland, and Surya Ganguli.
|
1549 |
+
Exact solutions to the nonlinear dynamics of learning in deep
|
1550 |
+
linear neural networks. CoRR, abs/1312.6120, 2014. 16
|
1551 |
+
[65] Carl Schissler, Christian Loftin, and Dinesh Manocha. Acous-
|
1552 |
+
tic classification and optimization for multi-modal rendering
|
1553 |
+
of real-world scenes. IEEE Transactions on Visualization and
|
1554 |
+
Computer Graphics, 24:1246–1259, 2018. 2
|
1555 |
+
[66] John Schulman, Philipp Moritz, Sergey Levine, Michael I.
|
1556 |
+
Jordan, and P. Abbeel. High-dimensional continuous control
|
1557 |
+
using generalized advantage estimation. CoRR, 2016. 16
|
1558 |
+
[67] Ransalu Senanayake, Thushan Ganegedara, and Fabio Ramos.
|
1559 |
+
Deep occupancy maps: a continuous mapping technique for
|
1560 |
+
dynamic environments. In NIPS 2017 Workshop MLITS, 2017.
|
1561 |
+
2
|
1562 |
+
[68] Rakesh Shrestha, Fei-Peng Tian, Wei Feng, Ping Tan, and
|
1563 |
+
Richard Vaughan.
|
1564 |
+
Learned map prediction for enhanced
|
1565 |
+
mobile robot exploration. In 2019 International Conference
|
1566 |
+
on Robotics and Automation (ICRA), pages 1197–1204, 2019.
|
1567 |
+
2
|
1568 |
+
[69] Jascha Sohl-Dickstein, Santani Teng, Benjamin M. Gaub,
|
1569 |
+
Chris C. Rodgers, Crystal Li, Michael R. DeWeese, and
|
1570 |
+
Nicol S. Harper. A device for human ultrasonic echolocation.
|
1571 |
+
IEEE Transactions on Biomedical Engineering, 62(6):1526–
|
1572 |
+
1534, 2015. 1, 2
|
1573 |
+
[70] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya
|
1574 |
+
Sutskever, and Ruslan Salakhutdinov. Dropout: A simple
|
1575 |
+
way to prevent neural networks from overfitting. Journal of
|
1576 |
+
Machine Learning Research, 15(56):1929–1958, 2014. 16
|
1577 |
+
[71] Wei Sui, Lingfeng Wang, Bin Fan, Hongfei Xiao, Huaiyu
|
1578 |
+
Wu, and Chunhong Pan. Layer-wise floorplan extraction for
|
1579 |
+
automatic urban building reconstruction. IEEE Transactions
|
1580 |
+
on Visualization and Computer Graphics, 22(3):1261–1277,
|
1581 |
+
2016. 2
|
1582 |
+
[72] Maitreya Suin and A. N. Rajagopalan. An efficient frame-
|
1583 |
+
work for dense video captioning. Proceedings of the AAAI
|
1584 |
+
Conference on Artificial Intelligence, 34(07):12039–12046,
|
1585 |
+
Apr. 2020. 3
|
1586 |
+
[73] Cheng Sun, Chi-Wei Hsiao, Min Sun, and Hwann-Tzong
|
1587 |
+
Chen. Horizonnet: Learning room layout with 1d representa-
|
1588 |
+
tion and pano stretch data augmentation. In Proceedings of
|
1589 |
+
the IEEE/CVF Conference on Computer Vision and Pattern
|
1590 |
+
Recognition, pages 1047–1056, 2019. 2
|
1591 |
+
[74] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deeply learned
|
1592 |
+
face representations are sparse, selective, and robust. 2015
|
1593 |
+
11
|
1594 |
+
|
1595 |
+
IEEE Conference on Computer Vision and Pattern Recogni-
|
1596 |
+
tion (CVPR), pages 2892–2900, 2015. 15, 16
|
1597 |
+
[75] Ryu Takeda, Yoshiki Kudo, Kazuki Takashima, Yoshifumi
|
1598 |
+
Kitamura, and Kazunori Komatani. Unsupervised adaptation
|
1599 |
+
of neural networks for discriminative sound source local-
|
1600 |
+
ization with eliminative constraint. In 2018 IEEE Interna-
|
1601 |
+
tional Conference on Acoustics, Speech and Signal Process-
|
1602 |
+
ing (ICASSP), pages 3514–3518, 2018. 14
|
1603 |
+
[76] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-
|
1604 |
+
reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia
|
1605 |
+
Polosukhin. Attention is all you need. Advances in neural
|
1606 |
+
information processing systems, 30, 2017. 5, 16
|
1607 |
+
[77] Erik Wijmans, Abhishek Kadian, Ari S. Morcos, Stefan Lee,
|
1608 |
+
Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra.
|
1609 |
+
Dd-ppo: Learning near-perfect pointgoal navigators from 2.5
|
1610 |
+
billion frames. In ICLR, 2020. 6, 16
|
1611 |
+
[78] Wenming Wu, Xiao-Ming Fu, Rui Tang, Yuhan Wang, Yu-
|
1612 |
+
Hao Qi, and Ligang Liu. Data-driven interior plan generation
|
1613 |
+
for residential buildings. ACM Trans. Graph., 38(6), nov
|
1614 |
+
2019. 2
|
1615 |
+
[79] Zuxuan Wu, Caiming Xiong, Chih-Yao Ma, Richard Socher,
|
1616 |
+
and Larry S Davis. Adaframe: Adaptive frame selection
|
1617 |
+
for fast video recognition. In Proceedings of the IEEE/CVF
|
1618 |
+
Conference on Computer Vision and Pattern Recognition,
|
1619 |
+
pages 1278–1287, 2019. 3
|
1620 |
+
[80] Shang-Ta Yang, Fu-En Wang, Chi-Han Peng, Peter Wonka,
|
1621 |
+
Min Sun, and Hung-Kuo Chu. Dula-net: A dual-projection
|
1622 |
+
network for estimating room layouts from a single rgb
|
1623 |
+
panorama. In Proceedings of the IEEE/CVF Conference on
|
1624 |
+
Computer Vision and Pattern Recognition, pages 3363–3372,
|
1625 |
+
2019. 2
|
1626 |
+
[81] Mao Ye, Yu Zhang, Ruigang Yang, and Dinesh Manocha.
|
1627 |
+
3d reconstruction in the presence of glasses by acoustic and
|
1628 |
+
stereo fusion. In 2015 IEEE Conference on Computer Vision
|
1629 |
+
and Pattern Recognition (CVPR), pages 4885–4893, 2015. 2
|
1630 |
+
[82] Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei.
|
1631 |
+
End-to-end learning of action detection from frame glimpses
|
1632 |
+
in videos. In Proceedings of the IEEE conference on computer
|
1633 |
+
vision and pattern recognition, pages 2678–2687, 2016. 3
|
1634 |
+
[83] Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen,
|
1635 |
+
Yikai Wang, and Xiaohong Liu. Sound adversarial audio-
|
1636 |
+
visual navigation. In International Conference on Learning
|
1637 |
+
Representations, 2022. 2
|
1638 |
+
[84] Zhoutong Zhang, Jiajun Wu, Qiujia Li, Zhengjia Huang,
|
1639 |
+
James Traer, Josh H. McDermott, Joshua B. Tenenbaum, and
|
1640 |
+
William T. Freeman. Generative modeling of audible shapes
|
1641 |
+
for object perception. 2017 IEEE International Conference
|
1642 |
+
on Computer Vision (ICCV), pages 1260–1269, 2017. 2
|
1643 |
+
[85] Chuhang Zou, Alex Colburn, Qi Shan, and Derek Hoiem.
|
1644 |
+
Layoutnet: Reconstructing the 3d room layout from a single
|
1645 |
+
rgb image. In Proceedings of the IEEE conference on com-
|
1646 |
+
puter vision and pattern recognition, pages 2051–2059, 2018.
|
1647 |
+
2
|
1648 |
+
[86] Chuhang Zou, Jheng-Wei Su, Chi-Han Peng, Alex Colburn,
|
1649 |
+
Qi Shan, Peter Wonka, Hung-Kuo Chu, and Derek Hoiem.
|
1650 |
+
Manhattan room layout reconstruction from a single 360◦
|
1651 |
+
image: A comparative study of state-of-the-art methods. In-
|
1652 |
+
ternational Journal of Computer Vision, 129(5):1410–1431,
|
1653 |
+
2021. 2
|
1654 |
+
12
|
1655 |
+
|
1656 |
+
7. Supplementary Material

In this supplementary material we provide additional details about:

• Video (with audio) for qualitative illustration of our task and qualitative assessment of our map predictions (Sec. 7.1)
• Experiment to show the effect of unheard sounds (Sec. 5 in main) on map predictions (Sec. 7.2), as noted in Sec. 5.2 in main
• Experiment to show the impact of the visual budget B (Sec. 3 in main) on mapping quality (Sec. 7.3), as referenced in Sec. 5 and 5.2 in main
• Experiment to show the effect of sensor noise on mapping accuracy (Sec. 7.4), as mentioned in Sec. 5 and 5.2 in main
• Experiment to show mapping performance as a function of the target map size (Sec. 7.5), as noted in Sec. 5.2 in main
• Dataset details (Sec. 7.6), in addition to what's provided in Sec. 5 in main
• Additional baseline details for reproducibility (Sec. 7.7), as referenced in Sec. 5 in main
• Architecture and training details (Sec. 7.8), as noted in Sec. 5 in main
7.1. Supplementary video

The supplementary video qualitatively depicts our task, Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations. Moreover, we qualitatively show our model's mapping quality by comparing the predictions against the ground truths and the visual samples chosen by our sampling policy for efficient mapping. Please use headphones to hear the spatial audio correctly. We also demonstrate the acoustically realistic SoundSpaces [8] audio simulation platform that we use for our core experiments. The video is available at http://vision.cs.utexas.edu/projects/chat2map.
7.2. Unheard sounds

In Sec. 5.1 in main, we showed results with heard sounds (Sec. 5 in main), i.e., the anechoic speech sounds uttered by the egos are shared between the train and test splits. However, since we use unseen environments at test time (Sec. 5 in main), the spatial speech sounds input to our model during testing are not heard in training. To make the evaluation even more challenging, we conduct a parallel experiment here in which even the anechoic speech is distinct from that used in training, which we call the unheard sound setting (Sec. 5 in main).

Model                      F1 score ↑   IoU ↑
All-occupied               63.4         48.8
Register-inputs            72.6         60.1
OccAnt [60]                74.5         62.7
AV-Floorplan [59]          79.0         67.7
Ours                       81.6         71.1
Ours w/o vision            72.6         60.1
Ours w/o audio             78.1         66.7
Ours w/o E′i's speech      81.3         70.7
Ours w/o shared mapping    80.7         70.0

Table 3. Passive mapping performance (%) on unheard sounds.

[Figure 6: plot of mean F1 score (%) vs. episode step (1–16) for Random, Unique pose, Greedy, Ours w/o audio for π^V, and Ours.] Figure 6. Active mapping performance vs. episode step on unheard sounds.

Table 3 shows our passive mapping results in the unheard sound setting. Our model retains its performance margins over all baselines even in this more challenging scenario.

We notice a similar trend upon evaluating our model for active mapping on unheard sounds. Fig. 6 shows that our model generalizes to novel sounds better than all baselines.

This indicates that both our mapper f^M and visual sampling policy π^V are able to learn useful spatial cues from audio that are agnostic to the speech content and semantics.
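The F1 and IoU numbers above compare predicted occupancy maps against the ground truth. As a sketch of how such metrics can be computed for a single binary map (the paper does not give its exact metric implementation, so treat the per-class convention here as an assumption):

```python
import numpy as np

def occupancy_f1_iou(pred, gt):
    """F1 score and IoU for binary occupancy maps.

    pred, gt: arrays of the same shape; nonzero = occupied.
    Returns (f1, iou) as fractions in [0, 1] for the occupied class.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # occupied in both
    fp = np.logical_and(pred, ~gt).sum()      # predicted but not real
    fn = np.logical_and(~pred, gt).sum()      # real but missed
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 > 0 else 1.0
    iou = tp / denom_iou if denom_iou > 0 else 1.0
    return f1, iou
```

A paper might instead average this over both the occupied and free classes; the structure of the computation is the same.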
7.3. Visual budget value

So far, we have shown active mapping results with the visual budget set to B = 2 (Sec. 5.1 and Fig. 3 in main). To analyze the effect of larger values of B, we show our active mapping performance for B ∈ {4, 6} in Fig. 7. Our model outperforms all baselines even for these larger B values. We also observe that the lower the visual budget, the higher our model's performance margins. This shows that our model is particularly robust to the lack of visuals in extremely low-resource settings.

[Figure 7: plots of mean F1 score (%) vs. episode step (1–16) for Random, Unique pose, Greedy, Ours w/o audio for π^V, and Ours; panel (a) B = 4, panel (b) B = 6.] Figure 7. Active mapping performance vs. episode step with B ∈ {4, 6}.
Model                      F1 score ↑   IoU ↑
All-occupied               63.0         48.3
Register-inputs            72.3         59.7
OccAnt [60]                74.7         63.0
AV-Floorplan [59]          77.6         65.8
Ours                       79.1         68.0
Ours w/o vision            72.6         60.0
Ours w/o audio             76.7         65.1
Ours w/o E′i's speech      78.8         67.7
Ours w/o shared mapping    78.5         67.2

Table 4. Passive mapping performance (%) with sensor noise.

[Figure 8: plot of mean F1 score (%) vs. episode step (1–16) for Random, Unique pose, Greedy, Ours w/o audio for π^V, and Ours.] Figure 8. Active mapping performance vs. episode step with sensor noise.
7.4. Sensor noise

Here, we test our model's robustness to sensor noise by adding noise of the appropriate type individually to each sensor. For RGB images, we sample the noise from a Gaussian distribution with a mean of 0 and a standard deviation of 0.2 [60, 63]. For depth, we use the Redwood depth noise model [13, 60, 63], in which the amount of noise grows with the depth value. Following [60], we sample pose noise from a truncated Gaussian with a mean of 0.025 m and a standard deviation of 0.001 m for the spatial location component (x, y) of an ego pose (Sec. 3 in main). For the orientation θ (Sec. 3 in main), we use another truncated Gaussian with a mean of 0.9° and a standard deviation of 0.057°. Both distributions are truncated at 2 standard deviations. For our multi-channel microphones (Sec. 3 in main), we add a high amount of noise (SNR of 40 dB) [8] using a standard noise model [13, 75].

Table 4 and Fig. 8 report our passive and active mapping performance, respectively, in the face of sensor noise. In both settings, although our model's performance declines in comparison to the noise-free setting (cf. Table 1 and Fig. 3 in main), it generalizes better than all baselines, thereby underlining the effectiveness of our method.

                           H = W = 8 m          H = W = 9.6 m
Model                      F1 score ↑   IoU ↑   F1 score ↑   IoU ↑
All-occupied               53.5         37.9    46.4         31.2
Register-inputs            65.9         53.4    61.6         49.6
OccAnt [60]                67.8         55.7    63.0         51.3
AV-Floorplan [59]          71.4         59.1    68.7         53.1
Ours                       73.4         60.7    72.0         54.4
Ours w/o vision            66.1         53.5    62.6         50.3
Ours w/o audio             71.1         58.1    63.8         51.3
Ours w/o E′i's speech      73.3         60.5    67.6         54.0
Ours w/o shared mapping    72.9         60.3    68.0         54.5

Table 5. Passive mapping performance (%) for larger target map sizes.

[Figure 9: plots of mean F1 score (%) vs. episode step (1–16) for Random, Unique pose, Greedy, Ours w/o audio for π^V, and Ours; panel (a) H = W = 8 m, panel (b) H = W = 9.6 m.] Figure 9. Active mapping performance vs. episode step for larger target map sizes.
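Sampling from a Gaussian truncated at 2 standard deviations, as used for the pose noise above, can be done with simple rejection sampling. A minimal sketch (the helper name is ours, not from the paper):

```python
import random

def truncated_gauss(mean, std, max_sigma=2.0):
    """Rejection-sample a Gaussian truncated at +/- max_sigma std devs."""
    while True:
        x = random.gauss(mean, std)
        if abs(x - mean) <= max_sigma * std:
            return x

# Pose-noise parameters quoted in the text:
# translation noise in metres, orientation noise in degrees.
dxy = truncated_gauss(0.025, 0.001)
dtheta = truncated_gauss(0.9, 0.057)
```

With truncation at only 2 sigma, roughly 95% of draws are accepted, so the rejection loop is cheap.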
7.5. Target map size

In main (Sec. 5.1), we showed mapping results with H × W = 6.4 × 6.4 m² (∼41 m²), where H and W denote the height and width of the ground-truth local 360° FoV maps (Sec. 4.3 in main). To analyze the impact of larger target map sizes on mapping quality, we also test our model with H × W ∈ {8 × 8 m² (64 m²), 9.6 × 9.6 m² (∼92 m²)}. Table 5 and Fig. 9 show the corresponding results for passive and active mapping, respectively. In both cases, our model outperforms all baselines by a substantial margin, showing that our method is also robust to larger target map sizes.
7.6. Dataset details

Here, we provide additional dataset details. We will release our datasets.
Visual data. All RGB-D images in our experiments have a resolution of 128 × 128. To generate the top-down occupancy maps, we threshold the local pointcloud computed from the 90° FoV depth images (Sec. 4.1 in main) using lower and upper height limits of 0.2 and 1.5 m, respectively, such that a map cell is considered occupied if it contains a 3D point in the 0.2–1.5 m height range, and free otherwise.
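The height-band thresholding above can be sketched as follows. The grid resolution, map size, and camera-centred indexing convention are our own illustrative assumptions; only the 0.2–1.5 m band comes from the text:

```python
import numpy as np

def topdown_occupancy(points, cell_size=0.1, map_size=64,
                      h_low=0.2, h_high=1.5):
    """Rasterize a local point cloud of (x, y, z) rows into a binary
    top-down occupancy map: a cell is occupied if any 3D point whose
    height z lies in [h_low, h_high] metres falls into that cell."""
    grid = np.zeros((map_size, map_size), dtype=bool)
    # Keep only points inside the height band.
    pts = points[(points[:, 2] >= h_low) & (points[:, 2] <= h_high)]
    # Map metric x/y to grid indices, centred on the camera.
    ij = np.floor(pts[:, :2] / cell_size).astype(int) + map_size // 2
    valid = (ij >= 0).all(axis=1) & (ij < map_size).all(axis=1)
    grid[ij[valid, 1], ij[valid, 0]] = True
    return grid
```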
To generate an estimate of the scene map, we register the estimates of the ground-truth local 360° FoV maps, M̃i,j, onto a shared scene map M̃ (Sec. 4.3 in main) and maintain a count of the number of updates undergone by every cell in the shared map. To register a local estimate M̃i,j, we first translate and rotate M̃i,j within M̃ on the basis of its normalized pose Pi,j. Next, we add M̃i,j to the corresponding part of M̃ and update the counter for every map cell changed by the registration. We repeat this process for every M̃i,j in the episode. Finally, we normalize the updated M̃ by dividing each cell by its number of updates from the counter and thresholding at 0.5. In our experiments, M̃ covers a maximum area of 128.4 × 128.4 m².
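The accumulate-count-average-threshold registration described above can be sketched as follows. For brevity this sketch handles only translation (the paper also rotates each local map by its pose), and the offset convention is our own assumption:

```python
import numpy as np

def register_local_maps(local_maps, offsets, scene_hw, thresh=0.5):
    """Fuse local occupancy estimates into one shared scene map.

    local_maps: list of 2D float arrays with values in [0, 1].
    offsets: per-map (row, col) of its top-left cell in the scene map,
             standing in for the translation given by the pose.
    Accumulates values and per-cell update counts, then averages
    each cell over its updates and thresholds at `thresh`."""
    acc = np.zeros(scene_hw, dtype=float)
    count = np.zeros(scene_hw, dtype=int)
    for m, (r, c) in zip(local_maps, offsets):
        h, w = m.shape
        acc[r:r + h, c:c + w] += m
        count[r:r + h, c:c + w] += 1
    # Average only cells that received at least one update.
    avg = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    return avg >= thresh
```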
Audio data. For each conversation episode, we randomly choose 2 speakers from the same split – heard or unheard (Sec. 5 in main). Starting at a random time in the audio clip for each speaker, we choose contiguous 3 s slices from each clip for T steps to use as the anechoic audio for the two egos in the episode, where T denotes the episode length (Sec. 3 in main). Further, we normalize each slice to have the same RMS value of 400 across the whole dataset, where all audio is sampled at 16 kHz and stored using the standard 16-bit integer format.

To generate the spectrograms, we convolve a speech slice with the appropriate 9-channel RIR sampled at 16 kHz and compute its STFT with a Hann window of 31.93 ms, a hop length of 8.31 ms, and an FFT size of 511 to generate 9-channel magnitude spectrograms, where each channel has 256 frequency bins and 257 overlapping temporal windows. We assume access to detected and separated speech from the egos at all times, since on-device microphones of AR glasses can tackle nearby and distant speaker detection [37] and separation [58].
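The STFT settings can be sketched for a single channel as follows. This is an illustration only: the RIR convolution and the 9-channel stacking are omitted, and the number of temporal windows depends on the slice length and padding convention, so it will not necessarily match the 257 quoted above. Note that 31.93 ms at 16 kHz is 511 samples, and an FFT size of 511 yields ⌊511/2⌋ + 1 = 256 frequency bins.

```python
import numpy as np

def magnitude_spectrogram(wave, sr=16000, win_ms=31.93, hop_ms=8.31,
                          n_fft=511):
    """Magnitude spectrogram via a Hann-windowed STFT using the
    stated settings: 16 kHz audio, ~31.93 ms window (511 samples),
    ~8.31 ms hop (~133 samples), FFT size 511 -> 256 bins."""
    win_len = int(round(sr * win_ms / 1000))   # 511 samples
    hop = int(round(sr * hop_ms / 1000))       # 133 samples
    window = np.hanning(win_len)
    n_frames = 1 + (len(wave) - win_len) // hop
    spec = np.empty((n_fft // 2 + 1, n_frames))
    for t in range(n_frames):
        frame = wave[t * hop:t * hop + win_len] * window
        spec[:, t] = np.abs(np.fft.rfft(frame, n=n_fft))
    return spec
```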
7.7. Baselines

Here, we provide additional implementation details for our active mapping baselines for reproducibility (Sec. 5 in main).

• Random. At each step t, we generate a random number between 0 and 1 from a uniform distribution. Depending on which quartile of the 0-1 range the random number lies in, we skip visual frames for both egos, sample for just one ego, or sample for both egos.

• Greedy. Starting at t = 2, we sample visual frames for both egos at every step until we run out of the visual budget B. If the value of B is such that it allows sampling only one visual frame at a certain step (i.e., B is odd), we randomly choose the ego for which we sample the frame at that step.

• Unique-pose. To implement this baseline, we keep track of the egos' poses during an episode. At any step t, we sample the frame for an ego if its current pose has never been assumed before by either of the egos in that episode.
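The Random baseline's quartile rule can be sketched as a single decision step. The exact assignment of quartiles to actions is our assumption; the text only states that the quartiles decide between skipping, sampling for one ego, or sampling for both.

```python
import random

def random_baseline_action(rng=random):
    """One step of the Random baseline: draw u ~ U(0, 1) and map the
    quartile it falls in to a frame-sampling action. The mapping of
    quartiles to concrete actions below is a hypothetical choice."""
    u = rng.random()
    if u < 0.25:
        return "skip"          # no visual frame for either ego
    elif u < 0.5:
        return "ego_1"         # sample a frame for ego 1 only
    elif u < 0.75:
        return "ego_2"         # sample a frame for ego 2 only
    else:
        return "both"          # sample frames for both egos
```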
7.8. Architecture and training

Here, we provide our architecture and additional training details for reproducibility. We will release our code.

7.8.1 Policy architecture

Visual encoder. To encode local occupancy map inputs, our policy πV (Sec. 4.2 in main) uses a 6-layer CNN consisting of 5 convolutional (conv.) layers followed by an adaptive average pooling layer. The first three conv. layers use a kernel size of 4 and a stride of 2, while the last two conv. layers use a kernel size of 3 and a stride of 1. All conv. layers use a zero padding of 1, except for the third conv. layer, which uses a zero padding of 2. The numbers of output channels of the conv. layers are [64, 64, 128, 256, 512], respectively. Each convolution is followed by a leaky ReLU [50, 74] activation with a negative slope of 0.2 and a Batch Normalization [32] with ε = 1e−5. The adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512.
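As a sanity check on the stated hyperparameters, the spatial size of a 128 × 128 occupancy map can be traced through the five conv. layers (a small calculation, not the model code):

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a convolution (floor convention)."""
    return (size + 2 * padding - kernel) // stride + 1

# (kernel, stride, padding) for the five conv. layers as described:
# padding is 1 everywhere except the third layer, which uses 2.
layers = [(4, 2, 1), (4, 2, 1), (4, 2, 2), (3, 1, 1), (3, 1, 1)]

size = 128  # local occupancy maps are 128 x 128
for k, s, p in layers:
    size = conv_out(size, k, s, p)
# size is now 17; whatever spatial size remains, the adaptive
# average pooling reduces it to 1 x 1, giving 1 x 1 x 512.
```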
To encode RGB images (Sec. 4.2 in main), πV uses a separate CNN with 5 conv. layers and an adaptive average pooling layer. Each conv. layer has a kernel size of 4, a stride of 2, and a zero padding of 1. The numbers of output channels are [64, 64, 128, 256, 512], respectively. Similar to the occupancy map encoder, each convolution is followed by a leaky ReLU [50, 74] activation with a negative slope of 0.2 and a Batch Normalization [32] with ε = 1e−5, and the adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512.

We fuse the occupancy and RGB features by concatenating them and passing them through a single linear layer that produces a 512-dimensional visual embedding v (Sec. 4.2 in main).
Speech encoder. The speech encoder (Sec. 4.2 in main) in πV is a CNN with 5 conv. layers and an adaptive average pooling layer. Each conv. layer has a kernel size of 4, a stride of 2, and a padding of 1, except for the second conv. layer, which has a kernel size of 8, a stride of 4, and a padding of 3. The numbers of channels in the CNN are [64, 64, 128, 256, 512], respectively. Similar to the visual encoder, each conv. layer is followed by a leaky ReLU [50, 74] with a negative slope of 0.2 and a Batch Normalization [32] with ε = 1e−5. The adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512.
Pose encoder. The pose encoder (Sec. 4.2 in main) in πV is a single linear layer that takes a normalized pose P (Sec. 4.1 in main) as input and produces a 32-dimensional pose embedding.

Fusion layers. We perform linear fusion of the visual, speech, and pose embeddings (Sec. 4.2 and Fig. 2 in main) at two levels. The first level has 4 linear layers and the second level has 1 linear layer. Each linear layer produces a 512-dimensional fused feature as its output.

Policy network. The policy network (Sec. 4.2 in main) comprises a one-layer bidirectional GRU [16] with 512 hidden units. The actor and critic networks each consist of one linear layer.
7.8.2 Mapper architecture

Visual encoder. To encode local occupancy map inputs, our shared mapper fM (Sec. 4.3 in main) uses a CNN similar to the one used for encoding occupancy maps in πV (Sec. 7.8.1), except that it doesn't have a pooling layer at the end. The RGB encoder (Sec. 4.3 in main) in fM is also similar to the one for πV, except that it likewise doesn't have a pooling layer at the end. We fuse the map and RGB features by concatenating them along the channel dimension, and obtain a 4 × 4 × 1024 dimensional feature.

Speech encoder. The speech encoders (Sec. 4.3 in main) in fM are CNNs with 5 layers that share the architecture with the first 5 conv. layers of the speech encoder in πV (Sec. 7.8.1), except that the last conv. layer in both encoders has 1024 output channels.

Modality encoder. For our modality embedding m̂ (Sec. 4.3 in main), we maintain a sparse lookup table of 1024-dimensional learnable embeddings, which we index with 0 to retrieve the visual modality embedding (m̂V), 1 to retrieve the modality embedding (m̂S) for the speech from self, and 2 to retrieve the modality embedding (m̂S′) for the speech from the other ego.
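The modality lookup table can be sketched as a small array of learnable vectors indexed by modality id. In practice this would be a trainable embedding layer; the normal initialization and its scale here are assumptions for illustration.

```python
import numpy as np

# Three learnable 1024-d embeddings indexed by modality id:
# 0 = visual, 1 = speech from self, 2 = speech from the other ego.
rng = np.random.default_rng(0)
modality_table = rng.normal(scale=0.02, size=(3, 1024))

m_vis = modality_table[0]    # visual modality embedding (m̂V)
m_self = modality_table[1]   # speech-from-self embedding (m̂S)
m_other = modality_table[2]  # speech-from-other-ego embedding (m̂S′)
```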
Occupancy prediction network. The transformer [76] (Sec. 4.3 in main) in our occupancy prediction network comprises 6 encoder and 6 decoder layers, 8 attention heads, an input and output size of 1024, a hidden size of 2048, and ReLU [50, 74] activations. Additionally, we use a dropout [70] of 0.1 in our transformer.

The transpose convolutional network U (Sec. 4.3 in main) consists of 6 layers in total. The first 5 layers are transpose convolution (conv.) layers. The first 4 transpose conv. layers have a kernel size of 4 and a stride of 2, and the last transpose conv. layer has a kernel size of 3 and a stride of 1. Each transpose conv. layer has a padding of 1, a ReLU [50, 74] activation, and Batch Normalization [32]. The numbers of output channels for the transpose conv. layers are [512, 256, 128, 64, 2], respectively. The last layer in U is a sigmoid layer (Sec. 4.3 in main), which outputs the map estimates.
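The stated kernel, stride, and padding values determine how much U upsamples. Tracing the shapes from the 4 × 4 fused feature (a small calculation, not the model code):

```python
def tconv_out(size, kernel, stride, padding):
    """Spatial output size of a transpose convolution."""
    return (size - 1) * stride - 2 * padding + kernel

# The five transpose conv. layers of U as described: four layers
# with kernel 4 / stride 2, then one with kernel 3 / stride 1;
# every layer uses padding 1.
layers = [(4, 2, 1)] * 4 + [(3, 1, 1)]

size = 4  # the fused 4 x 4 x 1024 feature from the mapper encoders
for k, s, p in layers:
    size = tconv_out(size, k, s, p)
# size is now 64: U upsamples the 4 x 4 feature to a 64 x 64
# two-channel map before the final sigmoid.
```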
7.8.3 Parameter initialization

We use the Kaiming-normal [31] weight initialization strategy to initialize the weights of all our network modules, except for the pose encoding layers and fusion layers, which are initialized with Kaiming-uniform [31] initialization, and the policy network, which is initialized using the orthogonal initialization strategy [64]. We switch off biases in all network modules, except for the policy network, where we set the biases initially to 0.
7.8.4 Training hyperparameters

Policy training. To train our policy πV using DD-PPO [77] (Sec. 4.4 in main), we weight the action loss by 1.0, the value loss by 0.5, and the entropy loss by 0.05. We train our policy on 8 Nvidia Tesla V100 SXM2 GPUs with Adam [41], an initial learning rate of 1e−4, and 8 processes per GPU for 8.064 million policy prediction steps. Among other policy training parameters, we set the clip parameter value to 0.1, the number of DD-PPO epochs to 4, the number of mini-batches to 1, the max gradient norm value to 0.5, the reward discount factor γ to 0.99, and the value of λ in the generalized advantage estimation [66] formulation for DD-PPO to 0.95.
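The loss weighting above can be sketched as a single combination step. The subtraction of the entropy term (so that higher entropy lowers the loss and encourages exploration) is the usual PPO convention, but sign conventions vary between implementations, so treat this as an assumption rather than the paper's exact code.

```python
def ddppo_total_loss(action_loss, value_loss, entropy,
                     w_action=1.0, w_value=0.5, w_entropy=0.05):
    """Combine DD-PPO loss terms with the weights from the text:
    action 1.0, value 0.5, entropy 0.05 (entropy bonus subtracted)."""
    return (w_action * action_loss
            + w_value * value_loss
            - w_entropy * entropy)
```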
Mapper training. We train our shared scene mapper fM (Sec. 4.3 in main) with a binary cross-entropy loss (Sec. 4.4 in main) on 4 Nvidia Quadro RTX 6000 GPUs until convergence, using Adam [41], an initial learning rate of 1e−4, and a batch size of 24.
© 2023 IEEE. This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/

A Study on a User-Controlled Radial Tour for Variable Importance in High-Dimensional Data

Nicholas Spyrison, Dianne Cook, Kim Marriott
Abstract—Principal component analysis is a long-standing go-to method for exploring multivariate data. The principal components are linear combinations of the original variables, ordered by descending variance. The first few components typically provide a good visual summary of the data. Tours also make linear projections of the original variables but offer many different views, like examining the data from different directions. The grand tour shows a smooth sequence of projections as an animation, following interpolations between random target bases. The manual radial tour rotates the selected variable's contribution into and out of a projection. This allows the importance of the variable to the structure in the projection to be assessed. This work describes a mixed-design user study evaluating the radial tour's efficacy compared with principal component analysis and the grand tour. A supervised classification task is assigned to participants, who evaluate variable attribution of the separation between two classes. Their accuracy in assigning the variable importance is measured across various factors. Data were collected from 108 crowdsourced participants, who performed two trials with each visual for 648 trials in total. Mixed model regression finds strong evidence that the radial tour results in a large increase in accuracy over the alternatives. Participants also reported a preference for the radial tour in comparison to the other two methods.

Index Terms—Multivariate data visualization, variable importance, radial tour, linear dimension reduction
1 INTRODUCTION

Despite decades of research, multivariate data continues to provide fascinating challenges for visualization. Data visualization is important because it is a key element of exploratory data analysis [43], for assessing model assumptions, and as a cross-check on numerical summarization [2, 26, 50]. One challenge is measuring whether a new technique yields a more informed perception of information than current practices.

Dimension reduction is commonly used with visualization to provide informative low-dimensional summaries of quantitative multivariate data. Principal component analysis (PCA) [34] is one of the first methods ever developed, and it remains very popular. Visualization of PCA is typically in the form of static scatterplots of a few leading components. When the scatterplot is accompanied by a visual representation of the basis, it is called a biplot [17]. A basis is a p × d matrix of the linear combination of the p variables mapped to a smaller d-dimensional space. That is, it is an orthogonal rotation matrix giving the magnitude and the angle with which the variables contribute.

Dynamic visualizations called tours [4] animate through a sequence of linear projections (orthonormal bases). Instead of a static view, tours provide a smoothly changing view by interpolating between bases. There are various types of tours, distinguished by how the paths are generated. Asimov originally animated between randomly selected bases in the grand tour. The manual tour [11] allows for user control over the basis changes. A selected variable (or component) can be rotated into or out of view, or to a particular value. The radial tour [42] is a variant of the manual tour that fixes the contribution angle and changes the magnitude along the radius. The permanence of the data points from basis to basis holds information between intermediate interpolated projections, and the user control of the basis could plausibly lead to more information being perceived than from a static display. This is a hypothesis that a user study can assess.

• Monash University, Australia. ORCiD: 0000-0002-8417-0212.
• Monash University, Australia. ORCiD: 0000-0002-3813-7155.
• Monash University, Australia. ORCiD: 0000-0002-9813-0377.

Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to: [email protected]. Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx

Empirical studies have rarely assessed tours. An exception is [31], who compare scatterplots of grand tours on 2D monitors with 3D (stereoscopic, not head-mounted) over n = 15 participants. Participants perform cluster detection, dimensionality estimation, and radial sparseness tasks on six-dimensional data. They find that stereoscopic 3D leads to more accuracy in cluster identification, though the time to interact with the display was much higher in the 3D environment. In this work, we extend the evaluation of tours, comparing the radial tour benchmarked against the grand tour and discrete pairs of principal components.

The contribution of this paper is an empirical user study comparing the radial tour against PCA and the grand tour for assessing variable attribution on clustered data. This is the first empirical evaluation of the radial or manual tour. We discuss how this fits with other multivariate data visualization techniques and coordinated views of linear projections.

We are particularly interested in assessing the effectiveness of the new radial tour relative to common practice with PCA and the grand tour. The user influence over a basis, uniquely available in the radial tour, is crucial to testing variable sensitivity to the structure visible in a projection. If the contribution of a variable is reduced and the feature disappears, then we say that the variable was sensitive to that structure. For example, Fig. 1 shows two projections of simulated data. Panel (a) has identified the separation between the two clusters. The contributions in panel (b) show no such cluster separation. The former has a large contribution of V2 in the direction of separation, while it is negligible in the right frame. Because of this, we say that V2 is sensitive to the separation of the clusters.

Variable sensitivity is important for the interpretation of machine learning models. It is the magnitude and direction of a variable's contribution to the model. It is important that developers maintain the interpretability of models. Explainable Artificial Intelligence (XAI) [1, 3] is an emerging field that extends the interpretability of such black-box models. Multivariate data visualization is essential for exploring feature spaces and communicating interpretations of models [5, 6, 47].

The paper is structured as follows. Sect. 2 provides background on standard visualization methods and linear dimension reduction techniques. Sect. 3 describes the experimental factors, task, and accuracy measure used. The results of the study are discussed in Sect. 4. Conclusions and potential future directions are discussed in Sect. 6. More results, participant demographics, and analysis of the response time are available in the Supplemental Materials.

arXiv:2301.00077v1 [stat.AP] 31 Dec 2022

Fig. 1. Illustration of cluster separation affected by variable importance. Panel (a) is a projection mostly of V2 and V3, and the separation between clusters is in the direction of V2, not V3. This suggests V2 is important for clustering, but V3 is not. Panel (b) shows a projection of mostly V3 and V4, with no contribution from V2 and little from V3. That there is no separation between the clusters indicates that V3 and V4 are not important.
2 RELATED WORK

Consider the data to be a matrix of n observations (rows) and p variables (columns), denoted as Xn×p.

2.1 Orthogonal multivariate visualization

Grinstein [19] illustrates many multivariate visualization methods; in particular, this work shows examples of actual visuals. Liu [25] give a good classification and taxonomy of such methods. The content below focuses on the most common visuals that use the full data space before discussing linear combinations of those variables in projections.

2.1.1 Scatterplot matrix

One could consider looking at p histograms or univariate densities. Doing so will miss features in two or more dimensions. Fig. 2 shows a scatterplot matrix [9] of the four principal components of simulated data. Such displays do not scale well with dimensions because each plot would get less and less space. Scatterplot matrices can only display information in two orthogonal dimensions, so features in three dimensions may not be fully resolved.
2.1.2 Parallel coordinates plot

Another common way to display multivariate data is with a parallel coordinates plot [32]. Parallel coordinates plots scale well with dimensions but poorly with observations, as the lines overcrowd the display. Parallel coordinates plots are also asymmetric across variable ordering, in that shuffling the order of the variables can lead to different conclusions. Another shortcoming is the graphical channel used to convey information. [29] suggests that position is the visual channel that is most perceptible to humans. In the case of parallel coordinates plots, the horizontal axis spans variables rather than the values of one variable, causing the loss of a display dimension that could be used by our most perceptible visual channel.
2.2 Multivariate projections

At some point, visualization is forced to turn to dimension reduction to scale better with the dimensionality of the data. Below, we introduce linear projections and the common principal component analysis. Then we touch on nonlinear projections and exclude them from consideration.

2.2.1 Linear

Let the data, X, contain n observations of p variables. A linear projection maps a higher p-dimensional space onto a smaller d-dimensional space with an affine mapping (where parallel lines stay parallel). A projection, Y, is the resulting space of the data multiplied by a basis, A, such that Yn×d = Xn×p × Ap×d. This is essentially a reorientation of the original variables. This intuition is conveyed by thinking of a shadow as a 2D projection of a 3D object. Rotating the object changes the shadow it casts and, correspondingly, the basis that maps the reorientation of the object.
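The projection Y = XA can be sketched in a few lines. The basis below is a deliberately simple orthonormal example (it drops the third variable); any p × d matrix with orthonormal columns is a valid basis.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))          # n = 100 observations, p = 3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])             # p x d basis, orthonormal columns
Y = X @ A                              # n x d projection, Y = X A

# Orthonormality check: A^T A should be the d x d identity.
assert np.allclose(A.T @ A, np.eye(2))
```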
2.2.2 Principal component analysis

PCA is a good baseline of comparison for linear projections because of its frequent and broad use across disciplines. PCA [34] defines new components, linear combinations of the original variables, ordered by decreasing variation with the help of eigenvalue matrix decomposition. While the resulting dimensionality is the same size, the benefit comes from the ordered nature of the components: the data can be said to be approximated by the first several components. The exact number is subjectively selected given the variance contained in each component, typically guided by a scree plot [8]. Features with sizable signal regularly appear in the leading components that commonly approximate the data. However, this is not always the case, and component spaces should be fully explored to look for signal in components with less variation. This is especially true for cluster structure [14].
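The eigendecomposition route to the component basis can be sketched as follows; this is a minimal illustration of the idea, not a replacement for a library implementation.

```python
import numpy as np

def pca_basis(X):
    """Principal component basis via eigendecomposition of the
    covariance matrix, with components ordered by decreasing
    variance (eigenvalue)."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]       # reorder to descending
    return eigvecs[:, order], eigvals[order]
```

The component scores are then Y = Xc @ basis, and a biplot displays the first two columns of Y together with the basis vectors.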
2.2.3 Nonlinear

Nonlinear transformations bend and distort spaces in ways that are not entirely accurate or faithful to the original variable space. Popular modern methods include t-SNE and UMAP [28, 44]. Various quality metrics, such as Trustworthiness, Continuity, Normalized stress, and Average local error, have been introduced to describe the distortion of the space [16, 18]. Unfortunately, these distortions are hard to visualize and comprehend, effectively breaking the variable interpretability of the resulting space. The intuition of this can be demonstrated with map projections. Snyder [41] lists over 200 different projections that distort the surface of the earth to display it as a 2D map, each with unique properties and use cases.

Because of the difficulty of interpreting the distortions of nonlinear spaces and the added subjectivity of hyperparameter selection, we exclude nonlinear techniques and instead decide to compare three linear techniques.

Fig. 2. Scatterplot matrix of the first four principal components of 6D simulated data containing four classes. The separation between classes is primarily in PC1 and PC4. This is not uncommon because PCA is summarizing variance, not cluster structure.
2.3 Tours, animated linear projections

A tour animates through many linear projections. One of the insightful features of the tour is the permanence of the data points; one can track the relative changes of observations as the basis changes, as opposed to discretely jumping to an orthogonal view angle with no intermediate information. Types of tours are distinguished by the generation of their basis paths [13, 22]. In contrast with the discrete orientations of PCA, we compare continuous linear projection changes with the grand and radial tours.
2.3.1
|
236 |
+
Grand tours
|
237 |
+
Target bases are selected randomly in a grand tour [4]. These target
|
238 |
+
bases are then geodesically interpolated for a smooth, continuous path.
|
239 |
+
The grand tour is the first and most widely known tour. The random
|
240 |
+
selection of target bases makes it a general unguided exploratory tool.
|
241 |
+
The grand tour will make a good comparison that has a continuity of
|
242 |
+
data points similar to the radial tour but lacks the user control enjoyed
|
243 |
+
by PCA and radial tours.
2.3.2 Manual and radial tours
Whether an analyst uses PCA or the grand tour, they cannot influence the basis: they cannot explore the structure identified or change the contribution of the variables. User-controlled steering is a key aspect of manual tours that helps to test variable attribution.
The manual tour [11] defines its basis path by manipulating the basis contribution of a selected variable. A manipulation dimension is appended onto the projection plane, giving a full contribution to the selected variable. The target bases are then chosen to rotate this newly created manipulation space, which is likewise orthogonally constrained. The data is projected through its interpolated basis and rendered into an animation. When the contribution of one variable changes, the contributions of the other variables must also change to maintain the orthonormality of the basis. A key feature of the manual tour is that it allows users to control the variable contributions to the basis. Such manipulations can be queued in advance or selected in real time for human-in-the-loop analysis [21]. Manual navigation is relatively time-consuming due to the vast volume of the resulting view space and the abstract method of steering the projection basis. It is therefore advisable to first identify a basis of particular interest and then use the manual tour as a more directed, local exploration tool to probe the sensitivity of a variable's contribution to the feature of interest.
To simplify the task and keep its duration realistic, we consider a variant of the manual tour called a radial tour. In a radial tour, the magnitude of the selected variable's contribution changes along the radius while its angle of contribution stays fixed, as seen in Fig. 3. The radial tour benefits from both the continuity of the data, as in grand tours, and user steering via the choice of which variable to rotate.
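To make the rotation concrete, the following is a minimal numerical sketch of one radial step (our own illustration, not the spinifex implementation; the function names are ours). It appends a manipulation dimension for the chosen variable, rotates within the plane spanned by that dimension and the variable's radial direction, and returns an orthonormal 2D basis with the variable's contribution magnitude changed while its in-plane angle stays fixed:

```python
import numpy as np

def manip_space(basis, k):
    # Append an orthonormal "manipulation" dimension so that variable k
    # has a full (unit) contribution in the resulting p x 3 space.
    p = basis.shape[0]
    e = np.zeros(p)
    e[k] = 1.0
    e -= basis @ (basis.T @ e)       # orthogonalize against the plane
    return np.column_stack([basis, e / np.linalg.norm(e)])

def radial_step(basis, k, phi):
    # Rotate variable k's contribution radially by angle phi: its
    # in-plane angle is fixed, its magnitude changes, and the returned
    # 2D basis remains orthonormal.
    M = manip_space(basis, k)
    theta = np.arctan2(M[k, 1], M[k, 0])               # fixed angle
    u = np.array([np.cos(theta), np.sin(theta), 0.0])  # radial direction
    w = np.array([0.0, 0.0, 1.0])                      # manip direction
    # Rotation by phi in the plane spanned by the orthonormal pair (u, w)
    R = (np.eye(3)
         + np.sin(phi) * (np.outer(w, u) - np.outer(u, w))
         + (np.cos(phi) - 1.0) * (np.outer(u, u) + np.outer(w, w)))
    return (M @ R.T)[:, :2]                            # drop manip dim
```

Stepping phi through a sequence of small angles and projecting the data at each step yields the animation; driving the contribution to zero and back reproduces the radial tour's out-and-back path.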
Manual tours have recently been made available in the R package spinifex [42], which facilitates the manual tour (and its radial variant). It also provides an interface for layered composition of tours and for exporting to gif and mp4 with gganimate [35] or to an html widget with plotly [40]. It is also compatible with tours made by tourr [48]. Now that we have a readily available means to produce various tours, we want to see how they fare against the traditional discrete displays commonly used with PCA.
2.4 Other animated linear projections
The work of [15] allows users to interactively change the face of a local display by navigating to adjacent faces on a global overview scatterplot matrix. This offers analysts a way to geometrically explore the transition between adjacent faces of a scatterplot matrix, as though rotating the face of a die at right angles. The interpolated bases between the orthogonal faces display linear combinations of three variables at varying degrees. This is what [27] called a little tour, with the addition of user control. It is a particular type of manual tour where only horizontal or vertical rotation is allowed.
Star Coordinates [20] also arrives at biplot scatterplot displays, starting from the perspective of radial parallel coordinates. [23] extend this idea, mapping it back to orthogonal projections. They provide a means to interpolate through PCA components, the orthogonal contributions of the scatterplot matrix, and the grand tour. This work also defines user-controlled interaction, similar to small steps in a manual or radial tour.
TripAdvisor [30] is an interactive application that plans sequential interpolation between distant target bases. It also provides additional global context of a subset of possible frames with glyph representation, and an overview of variable attribution by summarizing the top ten principal components. It allows for user steering via a "touchpad polygon", which lets contribution magnitudes be changed, similar to an incremental change in the manual tour.
The number of orthogonal axes in static plots, as well as the number of bases to view in a tour, increases quadratically with the dimensions, p. This is why it is particularly important to properly select variables or otherwise reduce the dimensions before viewing. PCA, linear discriminant analysis, and entropy are common approaches to variable selection [37, 38, 46]. Such methods often yield a sort of screeplot [8] where the analyst selects a subjective, but informed, number of components to approximate the data while discarding the least information. The variable sensitivity we test for, in contrast, is the act of visual analysis of one variable's contribution to the structure. In practice, this is a tool for the analyst to fine-tune their variable selection or otherwise evaluate the resulting approximated space.
In order to further mitigate the view time, objective functions can be used to inform static or animated biplots. A dissimilarity statistic can be used to solve a basis path for showing a particularly interesting tour [24]. More generally, projection pursuit can be used to conduct a guided tour of any objective function applied to an embedding space [12, 13]. However, the function optimized is likely to show some feature of interest if it is ultimately selected by the analyst. The ability to stop
Fig. 3. A radial tour changing the contribution of V2. The contribution is in the direction of cluster separation. When its contribution is removed, the clusters overlap (right). Because of this, we say that V2 is sensitive to the separation of these two species.
and control the exploration at any point only stands to improve one's understanding of the data.
2.5 Empirical evaluation
Some studies compare visualizations across complete contributions of variables. Chang [10] conducted an n = 51 participant study comparing parallel coordinate plots and scatterplot matrices either in isolation, sequentially, or as a coordinated view. Accuracy, completion time, and eye focus were measured for six tasks. Three tasks were more accurate with the scatterplot matrix and three with parallel coordinates, while the coordinated view was usually marginally more accurate than the better of the separate visuals. Cao [7] compare nonstandardized line-glyphs and star-glyphs with standardized variants (with and without fill under the curve). Each of the n = 18 participants performed 72 trials across the six visuals, two levels of dimensions, and two levels of observations. Visuals with variable standardization outperformed the nonstandardized variants, and the radial star-glyph reportedly outperformed the line variant.
Other studies have investigated the relative benefits of projecting to 2- or 3D scatterplots in PCA-reduced spaces. Gracia [18] conducted an n = 40 user study comparing 2- and 3D scatterplots on traditional 2D monitors. Participants performed point classification, distance perception, and outlier identification tasks. The results are mixed and primarily show small differences, though there is some evidence of lower error in distance perception from a 3D scatterplot. Wagner Filho [45] performed an n = 30 mixed-design study on PCA-reduced space using scatterplot displays across 2D on monitors, 3D on monitors, and 3D on a head-mounted display. None of the tasks on any dataset led to a significant difference in accuracy. However, the immersive display reduced effort and navigation, resulting in higher perceived accuracy and engagement. Sedlmair [39] instead used two expert coders to evaluate 75 datasets and four dimension reduction techniques across the displays of 2D scatterplots, interactive 3D scatterplots, and 2D scatterplot matrices. They suggested a tiered guidance approach, finding that 2D scatterplots are often sufficient to resolve a feature; if not, try 2D scatterplots on a different dimension reduction technique before going to a scatterplot matrix display or concluding a true negative. They find that interactive 3D scatterplots help in very few cases.
2.6 Conclusion
Orthogonal axes visualizations either scale poorly with dimensionality or introduce an asymmetry of the variable ordering. Projections visualize the full p-dimensional data in fewer dimensions, traditionally 1-3 at a time. In linear, orthogonal projections, the resulting space is composed of a linear combination of the original variables that maintains variable interpretability, while nonlinear techniques distort and bend space in ways that are hard to visualize and communicate.
Tours are linear projections that are animated over changes in the basis. Several more recent orthographic star-coordinate methods independently reach animated linear projections similar to tours. Some quality metrics and empirical studies compare techniques, but scarcely with animated methods. Below, we conduct a user study to compare the radial tour with PCA and the grand tour on a variable attribution task on clustered data.
3 USER STUDY
The experiment was designed to assess the performance of the radial tour relative to the grand tour and PCA for interpreting the variable attribution to the separation between two clusters. Data were simulated across three experimental factors: location of the cluster separation, cluster shape, and data dimensionality. Participant responses were collected using a web application and crowdsourced through prolific.co [33], an alternative to MTurk.
3.1 Objective
PCA will be used as a baseline for comparison, as it is the most commonly used linear embedding. It will use static, discrete jumps between orthogonal components. The grand tour will act as a secondary control that helps evaluate the benefit of observation trackability between nearby animation bases, but without user control of its path. Lastly, the radial tour will be compared, which benefits from both the continuity of animation and user control of the basis.
For some subset of tasks, then, we expect to find that the radial tour performs most accurately. Conversely, there is less to be certain about in the accuracy of such limited grand tours, as there is no objective function in selecting the bases; it is possible that the random selection of the target bases altogether avoids the bases showing cluster separation. However, given that the data dimensionality is modest, it is probable that the grand tour coincidentally and regularly crosses bases with the correct information for the task.
Experimental factors and the definition of an accuracy measure are given below. The null hypothesis can be stated as:

H0: accuracy does not change across the visual methods
Hα: accuracy does change across the visual methods
3.2 Visual factors
The visual methods are tested in a mixed design, with each visual evaluated twice by each participant. Scatterplot matrices or parallel coordinates could alternatively be used to visualize these spaces. However, we opt for single biplot displays to focus on the differences between the radial tour and its most comparable visuals, rather than a comprehensive comparison of visual methods. The rest of this section
© 2023 IEEE. This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/
Fig. 4. Examples of the application displays for PCA, grand tour, and radial tour.
discusses the design standardization and unique input associated with each visual.
The visualization methods were standardized wherever possible. Data were displayed as 2D scatterplots with biplots. All aesthetic values (color-blind safe colors, shapes, sizes, absence of legend, and axis titles) were held constant. The variable contribution biplot was always shown to the left of the scatterplot embedding, with consistent aesthetic values. What did vary between visuals were their inputs.
PCA allowed users to select between the top four principal components for each axis, regardless of the data dimensionality (four or six). Upon changing an axis, the visual would jump to the new view of orthogonal components without displaying intermediate bases. There was no user input for the grand tour; users were instead shown a 15-second animation of the same randomly selected path (variables containing cluster separation were shuffled after simulation). Participants could view the same clip up to four times within the time limit. The radial tour allowed participants to select the manipulation variable. The starting basis was initialized to a half-clock design, where the variables were evenly distributed over half of the circle. This design was created to be variable agnostic while maximizing the independence of the variables. Selecting a new variable resets the animation, in which the new variable is manipulated to a complete contribution, then a zeroed contribution, and then back to its initial contribution. Animation and interpolation parameters were held constant across the grand and radial tours (five bases per second, with a step size of 0.1 radians between interpolated bases). Fig. 4 displays screen captures of the visuals in the application.
3.3 Experimental factors
In addition to the visual method, data are simulated across three experimental factors. First, the location of the separation between clusters is controlled by mixing a signal and a noise variable at different ratios. Second, the shape of the clusters reflects varying distributions of the data. Third, the dimensionality of the data is varied. The levels within each factor are described below, and Fig. 5 gives a visual representation.
The location of the separation between the clusters is at the heart of the measure, so a few varying levels are tested. To test this sensitivity, a noise and a signal variable are mixed at different ratios. The separation between clusters is mixed at the following percentages: 0/100% (not mixed), 33/66%, and 50/50% (evenly mixed).
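One plausible reading of this mixing, consistent with the coefficients shown in Fig. 5 (1*V1 + 0*V2, .866*V1 + .5*V2, .7071*V1 + .7071*V2), is a rotation of the separation from the signal variable into the noise variable by the given fraction of a quarter turn. A small sketch of that reading (our own illustration; the function name and exact parameterization are assumptions):

```python
import numpy as np

def mix_location(signal, noise, frac):
    """Rotate cluster separation from a signal variable into a noise
    variable; frac = 0 leaves it unmixed, frac = 0.5 mixes evenly."""
    angle = frac * np.pi / 2
    a, b = np.cos(angle), np.sin(angle)
    return a * signal + b * noise, -b * signal + a * noise

# Signal-variable coefficients at the three experimental levels
for frac in (0.0, 1 / 3, 0.5):
    print(round(np.cos(frac * np.pi / 2), 4))  # 1.0, 0.866, 0.7071
```

Under this reading, the rotation preserves the total separation while relocating it across the two variables.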
In selecting the shape of the clusters, the convention given by Scrucca et al. (2016) is followed. They describe 14 variants of model families containing three clusters. The model family name is the abbreviation of the clusters' respective volume, shape, and orientation; the levels are either Equal or Vary. The models EEE, EEV, and EVV are used. For instance, in the EEV model, the volume and shape of the clusters are constant, while the orientation of their shape varies. The EVV model is modified by moving four-fifths of the data out in a ">" or banana-like shape.
Dimensionality is tested at two modest levels: four dimensions containing three clusters, and six dimensions with four clusters. Such modest dimensionality is required to limit the difficulty and search space to make the task realistic for crowdsourcing.
Fig. 5. Levels of the visuals and three experimental factors: location of cluster separation, the shape of clusters, and dimensionality of the sampled data. Visual: PCA (discrete jump to a selected PC pair), grand tour (animation through random bases), and radial tour (animation changing the selected variable). Location: 1*V1 + 0*V2 (0/100%), .866*V1 + .5*V2 (33/66%), and .7071*V1 + .7071*V2 (50/50%). Shape: EEE, EEV, and EVV (banana transformed). Dimension: 4 dimensions with 3 clusters, or 6 dimensions with 4 clusters. Cluster 'd' only exists when there are six dimensions; it is spherical and has a cluster separation orthogonal to the plane of the other three isodensities.
3.4 Task and evaluation
With our hypothesis formulated, let us turn our attention to the task and how it is evaluated. Participants were asked to "check any/all variables that contribute more than average to the cluster separation of the green circles and the orange triangles". This was further explained in the explanatory video as "mark any and all variable that carries more than their fair share of the weight, or one quarter in the case of four variables". The participant instruction video can be viewed at https://vimeo.com/712674984.
The instructions, iterated several times in the video, were: 1) use the input controls to find a basis that contains separation between the clusters of green circles and orange triangles, 2) look at the orientation of the variable contributions in the grey circle (the biplot axes orientation), and 3) select all variables that contribute more than uniformly distributed cluster separation in the scatterplot. Independent of the experimental level, participants were limited to 60 seconds for each evaluation of this task. This restriction did not impact many participants, as the 25th, 50th, and 75th quantiles of the response time were about 7, 21, and 30 seconds, respectively.
The accuracy measure of this task was designed with two features in mind: 1) it is symmetric about the expected value, with no preference for under- or over-guessing, and 2) it gives heavier-than-linear weight with increasing difference from the expected value. The following measure is defined for evaluating the task.
Let the data X_ijk, i = 1, ..., n; j = 1, ..., p; k = 1, ..., K, be simulated observations containing clusters of observations of different distributions, where n is the number of observations, p is the number of variables, and K is the number of clusters. Cluster membership is exclusive; an observation cannot belong to more than one cluster. The weights, w, form a vector of the variable-wise differences between the means of two clusters, less 1/p, the expected cluster separation if it were uniformly distributed. Accuracy, A, is defined as the signed square of these weights for the variables selected by the participant. Participant responses are a logical value for each variable: whether or not the participant thinks that variable separates the two clusters more than uniformly distributed separation would. Weights comparing clusters 1 and 2 are calculated as follows:

A = ∑_{j=1}^{p} I(j) · sign(w_j) · w_j², where w_j = |X̄_{·j1} − X̄_{·j2}| / ∑_{j=1}^{p} |X̄_{·j1} − X̄_{·j2}| − 1/p,

where I(j) is the indicator function returning the participant's binary response for variable j, and X̄_{·jk} is the mean of the j-th variable of the k-th cluster.
Fig. 6 shows one projection of a simulation with its observed variable separation (wide bars), expected uniform separation (dashed line), and accuracy if selected (thin vertical lines).
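A direct transcription of the measure into code (a minimal sketch for clarity; the function name and array layout are ours):

```python
import numpy as np

def accuracy(cluster1, cluster2, response):
    """Signed-square accuracy for one pair of clusters.

    cluster1, cluster2: (n_obs, p) arrays of observations.
    response: length-p 0/1 vector of the participant's selections.
    """
    diff = np.abs(cluster1.mean(axis=0) - cluster2.mean(axis=0))
    # Variable-wise share of the separation, less the uniform share 1/p
    w = diff / diff.sum() - 1.0 / diff.size
    return float(np.sum(response * np.sign(w) * w ** 2))
```

Selecting a variable whose share exceeds 1/p adds sign(w_j) * w_j² > 0 to A, while selecting one below the uniform share subtracts, so over-selection is penalized symmetrically with under-selection.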
3.5 Randomized factor assignment
Now, with the simulations and their artifacts in hand, this section covers how the experimental factors are assigned and how this is experienced from the participant's perspective.
The study is sectioned into three periods. Each period is linked to a randomized level of visual and location. The order of dimension and shape is of secondary interest and is held constant in increasing order of difficulty: four then six dimensions, and EEE, EEV, then EVV-banana, respectively.
Each period starts with an untimed training task at the simplest remaining experimental levels: location = 0/100%, shape = EEE, and four dimensions with three clusters. This serves to introduce and familiarize participants with the input and visual differences. After the training, the participant performs two trials with the same visual and location level across the increasing difficulty of dimension and shape. The plot was removed after 60 seconds, though participants rarely reached this limit.
We assigned these factors in the following order: visual method, location, shape, and dimensionality. We first assigned the three visual methods to three different sessions. The session order and the order of location follow a nested Latin square. The order of dimension and shape is assigned in increasing order of difficulty.
Through pilot studies sampled by convenience (information technology and statistics Ph.D. students attending Monash University), it was estimated that three complete evaluations are needed to properly power the study, a total of N = 3 × 3!² = 108 participants.
3.6 Participants
N = 108 participants were recruited via prolific.co (Palan and Schitter 2018). Participants were restricted based on their claimed education, requiring that they have completed at least an undergraduate degree (some 58,700 of the 150,400 users at the time). This restriction rests on the premise that linear projections and biplot displays will not be regularly used for consumption by general audiences. There is also the implicit filter that Prolific participants must be at least 18 years of age, along with implicit biases of timezone, internet availability, language compatibility, and socioeconomic status. Participants were compensated for their time at £7.50 per hour, where the mean duration of the survey was about 16 minutes. Previous knowledge or familiarity was minimal, as validated in the follow-up survey. The Supplemental Materials include a heatmap distribution of age and education paneled across the preferred pronouns of the participants who completed the survey; they are relatively young, well-educated, and slightly more likely to identify as male.
4 RESULTS
To recap, the primary response variable is accuracy, as defined in Sect. 3.4. Two primary data sets were collected: the user study evaluations and the post-study survey. The former covers the 108 participants and the experimental factors: visual, location of the cluster separation signal, shape of the variance-covariance matrix, and dimensionality of the data. Experimental factors and randomization were discussed in Sect. 3.3. A follow-up survey was completed by 84 of these 108 people. It collected demographic information (preferred pronoun, age, and education) and subjective measures for each visual (preference, familiarity, ease of use, and confidence).
Below, a battery of mixed regression models is built to examine the degree of evidence for, and the size of, the effects of the experimental factors. Then, Likert plots and rank-sum tests are used to compare the subjective measures between the visuals.
4.1 Accuracy
To quantify the contribution of the experimental factors to the accuracy, mixed-effects models were fit. All models have random effect terms on the participant and the simulation. These terms explain the amount of error attributed to the individual participant's effect and the variation due to the random sampling of the data.
In building a set of models to test, a base model with only the visual term is compared against models whose fixed terms progressively interact with an additional experimental factor. The models with three and four interacting variables are rank deficient; there is not enough varying information in the data to explain all interacting terms.
Fixed effects | Full model
α | Ŷ = µ + α_i + Z + W + ε
α + β + γ + δ | Ŷ = µ + α_i + β_j + γ_k + δ_l + Z + W + ε
α × β + γ + δ | Ŷ = µ + α_i × β_j + γ_k + δ_l + Z + W + ε
α × β × γ + δ | Ŷ = µ + α_i × β_j × γ_k + δ_l + Z + W + ε
α × β × γ × δ | Ŷ = µ + α_i × β_j × γ_k × δ_l + Z + W + ε
Fig. 6. Illustration of how accuracy is measured. (L) Scatterplot and biplot of PC1 by PC4 of a simulated data set (visual: PCA, location: 33/66%, shape: EEV, dimension: 6 with 4 clusters), illustrating cluster separation between the green circles and orange triangles. (R) Bars indicate observed cluster separation, and (red/green) lines show the accuracy of the variable if selected. The horizontal dashed line has a height of 1/p, the expected value of cluster separation. The accuracy weights equal the signed square of the difference between each variable value and the dashed line.
Table 1. Model performance of random effect models regressing accuracy. Complex models perform better in terms of R² and RMSE, yet AIC and BIC penalize their large number of fixed effects in favor of the much simpler model containing only the visuals. Conditional R² includes error explained by the random effects, while marginal does not.

Model | AIC | BIC | R² cond. | R² marg. | RMSE
a | -71 | -71 | -44.219 | 0.303 | 0.289
a+b+c+d | -45 | -45 | 4.063 | 0.334 | 0.294
a*b+c+d | -26 | -25 | 41.445 | 0.338 | 0.293
a*b*c+d | 28 | 32 | 167.092 | 0.383 | 0.309
a*b*c*d | 105 | 116 | 360.052 | 0.37 | 0.19
where
µ is the intercept of the model,
α_i is the visual, i ∈ (pca, grand, radial),
β_j is the location, j ∈ (0/100, 33/66, 50/50% mix),
γ_k is the shape, k ∈ (EEE, EEV, EVV banana),
δ_l is the dimension, l ∈ (4 variables & 3 clusters, 6 variables & 4 clusters),
Z ∼ N(0, τ) is the random effect of participant,
W ∼ N(0, υ) is the random effect of simulation,
ε ∼ N(0, σ) is the remaining error of the model.
Table 1 compares the model summaries across increasing complexity. The α × β + γ + δ model is selected for closer examination, as it has a relatively high conditional R² without overly complex interacting terms. Table 2 lists the coefficients of this model. There is strong evidence suggesting a relatively large increase in accuracy from the radial tour, though there is also evidence that almost all of the increase is lost under 33/66% mixing.
We also want to visually examine the conditional variables in the model. Fig. 7 illustrates the accuracy for each model term, shown as means and 95% confidence intervals.
4.2 Subjective measures
Modeling has shown that the use of the radial tour leads to a sizable improvement in the accuracy measure for this task. This is not the whole story; it is also desirable to know what users think of using the visuals. We follow the direction set by [45]. They observe four
Table 2. The task accuracy model coefficients for Ŷ = α × β + γ + δ, with visual = pca, location = 0/100%, shape = EEE, and dim = 4 held as baselines. Visual being radial is the fixed term with the strongest evidence supporting the hypothesis. Interacting with the location term, there is evidence suggesting radial performs with minimal improvement for 33/66% location mixing.
|
                     Est    SE     df    t val  Prob
(Intercept)          0.10   0.06   16.1   1.54  0.143
Factor
  VisGrand           0.06   0.04  622.1   1.63  0.104
  VisRadial          0.14   0.04  617.0   3.77  0.000 ***
Fixed effects
  Loc33/66%         -0.02   0.07   19.9  -0.29  0.777
  Loc50/50%         -0.04   0.07   20.0  -0.66  0.514
  ShapeEEV          -0.05   0.06   11.8  -0.82  0.427
  ShapeBanana       -0.09   0.06   11.8  -1.54  0.150
  Dim6              -0.01   0.05   11.8  -0.23  0.824
Interactions
  VisGrand:Loc33/66 -0.02   0.06  588.9  -0.29  0.774
  VisRadial:Loc33/66 -0.12  0.06  586.5  -2.13  0.033 *
  VisGrand:Loc50/50 -0.03   0.06  591.6  -0.47  0.641
  VisRadial:Loc50/50 -0.06  0.06  576.3  -1.16  0.248
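Reading Table 2 as a linear predictor, the headline result can be checked by hand; a minimal sketch that plugs the point estimates above into the fixed-effects part of the model (random effects and uncertainty ignored):

```python
# Fixed-effect point estimates transcribed from Table 2.
coef = {
    "(Intercept)": 0.10,
    "VisRadial": 0.14,
    "Loc33/66%": -0.02,
    "VisRadial:Loc33/66": -0.12,
}

# Predicted accuracy at the baseline location (0/100% mixing):
pca_0 = coef["(Intercept)"]
radial_0 = coef["(Intercept)"] + coef["VisRadial"]

# Predicted accuracy under 33/66% location mixing:
pca_33 = coef["(Intercept)"] + coef["Loc33/66%"]
radial_33 = (coef["(Intercept)"] + coef["VisRadial"]
             + coef["Loc33/66%"] + coef["VisRadial:Loc33/66"])

print(round(radial_0 - pca_0, 2))    # 0.14
print(round(radial_33 - pca_33, 2))  # 0.02
```

The radial advantage of 0.14 at the baseline location shrinks to 0.02 under 33/66% mixing, which is the caveat the interaction coefficient encodes.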
[Figure 7 panel residue: violin plots of the terms for accuracy, Ŷ = α × β + γ + δ; panels for Visual (pca, grand, radial), Location (0/100, 33/66, 50/50%), Shape (EEE, EEV, banana), and Dim (4, 6), with Accuracy on the y-axis.]
Fig. 7. Accuracy of terms of the model Ŷ = α × β + γ + δ. Viewing the marginal accuracy of the terms corroborates the primary findings that the use of the radial tour leads to a significant increase in accuracy, at least over PCA, and this effect is particularly well supported when no location mixing is applied.
subjective measures. The following were used in this study: confidence, ease of use, prior familiarity, and preference. Each of these questions was asked for each visual as a 5-point Likert item.
The 84 evaluations of the post-study survey are shown in Fig. 8. The figure uses Likert plots (stacked percentage bar plots) and associated means and 95% confidence intervals.
There was strong evidence to support that participants preferred the radial tour to either alternative. There is less evidence that the radial tour led to more confidence and was found easier to use than the grand tour. In confirmation of expectations, crowdsourced participants had low familiarity with all visuals, with no difference in mean supported.
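The mean-and-CI panel of Fig. 8 amounts to a few lines of arithmetic; the response vectors below are invented for illustration, and the interval uses a normal approximation rather than the study's exact method:

```python
import math
import statistics as stats

# Hypothetical 5-point Likert responses (1 = most disagree ... 5 = most agree);
# illustrative only, not the study data.
preference = {
    "pca":    [2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 4, 3],
    "grand":  [3, 3, 2, 4, 3, 3, 2, 3, 4, 3, 3, 2],
    "radial": [4, 5, 4, 3, 5, 4, 4, 3, 5, 4, 4, 5],
}

def mean_ci95(xs):
    """Mean with a normal-approximation 95% confidence interval."""
    m = stats.mean(xs)
    half = 1.96 * stats.stdev(xs) / math.sqrt(len(xs))
    return m, (m - half, m + half)

for visual, xs in preference.items():
    m, (lo, hi) = mean_ci95(xs)
    print(f"{visual}: mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Non-overlapping intervals between visuals are the visual cue in the right-hand panel of Fig. 8 that a difference in means is supported.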
5 DISCUSSION
Data visualization is an integral part of understanding relationships in data and how models are fitted. When it comes to multivariate data, giving a comprehensive view quickly becomes difficult as the dimensions become sizable. Analysts have the task of choosing which visualization technique to use. Because the viewing volume/time of multivariate spaces typically increases quadratically with dimension, dimension reduction must be properly conducted. While there are optimization methods for static and animated visuals, the particular function used is a guided choice of the analyst.
Sect. 2 discussed various types of visualization which may be preferred for differing tasks and ends. The visualization and perception of multivariate spaces is a broad and heterogeneous task. This work focuses on a subset of linear projections and especially sheds light on the potential benefit of providing user control in conjunction with the animated projection over many bases as a radial tour.
The radial tour is a method for the analyst to choose a variable and alter its contribution to the basis. The animation over small changes to the basis allows the sensitivity of the structure to the variable contribution to be assessed. The hypothesis is that user control over the basis and the permanence of observations between intermediate frames may lead to a better perception of the variable attribution causing the separation of clusters.
[Figure 8 panel residue: subjective measures (confidence, ease of use, familiarity, preference) for radial, grand, and pca; stacked response rates 0–100% over agreement levels (most agree … most disagree) on the 1–5 Likert scale, and mean responses by visual.]
Fig. 8. The subjective measures of the 84 responses of the post-study survey with five-point Likert items levels of agreement. (L) Likert plots (stacked percent bar plots) with (R) mean and 95% CI of the same measures. Participants are more confident using the radial tour and find it easier to use than the grand tour. The radial tour is the most preferred visual.
A mixed modeling analysis of the study provides strong support for this conclusion. That is, there is significant evidence to suggest the use of the radial tour leads to a sizable increase in accuracy. One unexpected caveat is that mixing the location of the signal at 33/66% almost completely negates this gain. Perhaps this is because the “half-clock” basis used did not give enough weight to the variable containing the small fraction. It was also interesting to note that no level of the experimental factors alone had a significant effect on this setup. Lastly, the follow-up survey asked participants to evaluate measures of the visuals. Most notably, participants preferred the radial tour to the other visuals. Knowing that the radial tour outperforms alternatives and is the preferred choice can help inform the selection of visual methods for developers and analysts.
There are several implicit limitations to this study: the task, type of data, and levels of the factors, to name a few. The expansion of any of these areas is conceptually simple but exponentially increases the number of participants needed to properly power the study. Additionally, the sample of crowd-sourced, educated, but unfamiliar users may not extrapolate well to more experienced users. There are several ways that future work could be extended. Aside from expanding the support of the experimental factors, more exciting directions include introducing a new task, including more visualizations, and changing the experience level of the target population. It is difficult to achieve good coverage given the number of possible factors to vary.
6 CONCLUSION
This paper discussed a crowdsourced mixed design user study (n = 108) comparing the efficacy of three linear projection techniques: PCA, grand tour, and radial tour. The participants performed a supervised cluster task, explicitly identifying which variables contribute to the separation of two target clusters. This was evaluated evenly over four experimental factors. In summary, mixed model regression finds strong evidence that using the radial tour sizably increases accuracy, especially when cluster separation location is not mixed at 33/66%. The effect sizes on accuracy are large relative to the change from the other experimental factors and the random effect of data simulation, though smaller than the random effect of the participant. The radial tour was the most preferred of the three visuals.

© 2023 IEEE. This is the author’s version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/
There is no panacea for the comprehensive visualization of multivariate spaces. We have demonstrated that there is a definite value of user control in linear projections. The agency of the analyst remains an important tool for the exploratory analysis of multivariate data.
ACKNOWLEDGMENTS
This research was supported by an Australian Government Research Training Program (RTP) scholarship. This article was created in R [36] and rmarkdown [49]. Visuals were prepared with spinifex [42]. We thank Jieyang Chong for his help in proofreading this article. The code, response files, their analyses, and the study application are publicly available at https://github.com/nspyrison/spinifex_study. The participant instruction video can be viewed at https://vimeo.com/712674984.
REFERENCES
[1] A. Adadi and M. Berrada. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138–52160, 2018.
[2] F. J. Anscombe. Graphs in Statistical Analysis. The American Statistician, 27(1):17–21, 1973. doi: 10.2307/2682899
[3] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, and R. Benjamins. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115, 2020.
[4] D. Asimov. The Grand Tour: a Tool for Viewing Multidimensional Data. SIAM Journal on Scientific and Statistical Computing, 6(1):128–143, 1985. doi: 10.1137/0906011
[5] P. Biecek. DALEX: explainers for complex predictive models in R. The Journal of Machine Learning Research, 19(1):3245–3249, 2018.
[6] P. Biecek and T. Burzykowski. Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models. CRC Press, 2021.
[7] N. Cao, Y.-R. Lin, D. Gotz, and F. Du. Z-Glyph: Visualizing outliers in multivariate data. Information Visualization, 17(1):22–40, Jan. 2018. doi: 10.1177/1473871616686635
[8] R. B. Cattell. The scree test for the number of factors. Multivariate Behavioral Research, 1(2):245–276, 1966.
[9] J. M. Chambers, W. S. Cleveland, B. Kleiner, and P. A. Tukey. Graphical Methods for Data Analysis. Chapman and Hall/CRC, 2018.
[10] C. Chang, T. Dwyer, and K. Marriott. An evaluation of perceptually complementary views for multivariate data. In 2018 IEEE Pacific Visualization Symposium (PacificVis), pp. 195–204. IEEE, 2018.
[11] D. Cook and A. Buja. Manual Controls for High-Dimensional Data Projections. Journal of Computational and Graphical Statistics, 6(4):464–480, 1997. doi: 10.2307/1390747
[12] D. Cook, A. Buja, and J. Cabrera. Projection Pursuit Indexes Based on Orthonormal Function Expansions. Journal of Computational and Graphical Statistics, 2(3):225–250, Sept. 1993. doi: 10.1080/10618600.1993.10474610
[13] D. Cook, A. Buja, E.-K. Lee, and H. Wickham. Grand Tours, Projection Pursuit Guided Tours, and Manual Controls. In Handbook of Data Visualization, pp. 295–314. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. doi: 10.1007/978-3-540-33037-0_13
[14] D. J. Donnell, A. Buja, and W. Stuetzle. Analysis of Additive Dependencies and Concurvities Using Smallest Additive Principal Components. The Annals of Statistics, 22(4):1635–1668, 1994.
[15] N. Elmqvist, P. Dragicevic, and J.-D. Fekete. Rolling the dice: Multidimensional visual exploration using scatterplot matrix navigation. IEEE Transactions on Visualization and Computer Graphics, 14(6):1539–1548, 2008.
[16] M. Espadoto, R. M. Martins, A. Kerren, N. S. T. Hirata, and A. C. Telea. Toward a Quantitative Survey of Dimension Reduction Techniques. IEEE Transactions on Visualization and Computer Graphics, 27(3):2153–2173, Mar. 2021. doi: 10.1109/TVCG.2019.2944182
[17] K. R. Gabriel. The biplot graphic display of matrices with application to principal component analysis. Biometrika, 58(3):453–467, 1971.
[18] A. Gracia, S. González, V. Robles, E. Menasalvas, and T. von Landesberger. New Insights into the Suitability of the Third Dimension for Visualizing Multivariate/Multidimensional Data: A Study Based on Loss of Quality Quantification. Information Visualization, 15(1):3–30, 2016. doi: 10.1177/1473871614556393
[19] G. Grinstein, M. Trutschl, and U. Cvek. High-Dimensional Visualizations. p. 14, 2002.
[20] E. Kandogan. Star Coordinates: A Multi-dimensional Visualization Technique with Uniform Treatment of Dimensions. In Proceedings of the IEEE Information Visualization Symposium, Late Breaking Hot Topics, pp. 9–12, 2000.
[21] W. Karwowski. International Encyclopedia of Ergonomics and Human Factors, 3 Volume Set. CRC Press, 2006.
[22] S. Lee, D. Cook, N. da Silva, U. Laa, N. Spyrison, E. Wang, and H. S. Zhang. The state-of-the-art on tours for dynamic visualization of high-dimensional data. WIREs Computational Statistics, e1573, 2021. doi: 10.1002/wics.1573
[23] D. J. Lehmann and H. Theisel. Orthographic star coordinates. IEEE Transactions on Visualization and Computer Graphics, 19(12):2615–2624, 2013.
[24] D. J. Lehmann and H. Theisel. Optimal Sets of Projections of High-Dimensional Data. IEEE Transactions on Visualization and Computer Graphics, 22(1):609–618, Jan. 2016. doi: 10.1109/TVCG.2015.2467132
[25] S. Liu, D. Maljovec, B. Wang, P.-T. Bremer, and V. Pascucci. Visualizing High-Dimensional Data: Advances in the Past Decade. IEEE Transactions on Visualization and Computer Graphics, 23(3):1249–1268, 2017. doi: 10.1109/TVCG.2016.2640960
[26] J. Matejka and G. Fitzmaurice. Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17, pp. 1290–1294. ACM Press, Denver, Colorado, USA, 2017. doi: 10.1145/3025453.3025912
[27] J. A. McDonald. Interactive graphics for data analysis. Technical report, 1982.
[28] L. McInnes, J. Healy, and J. Melville. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv:1802.03426 [cs, stat], Dec. 2018.
[29] T. Munzner. Visualization Analysis and Design, 2014.
[30] J. E. Nam and K. Mueller. TripAdvisor^{ND}: A tourism-inspired high-dimensional space exploration framework with overview and detail. IEEE Transactions on Visualization and Computer Graphics, 19(2):291–305, 2012.
[31] L. Nelson, D. Cook, and C. Cruz-Neira. XGobi vs the C2: Results of an experiment comparing data visualization in a 3-D immersive virtual reality environment with a 2-D workstation display. Computational Statistics, 14(1):39–51, Dec. 1999.
[32] M. d’Ocagne. Coordonnées parallèles et axiales. Méthode de transformation géométrique et procédé nouveau de calcul graphique déduits de la considération des coordonnées parallèles. Gauthier-Villars, Paris, 1885.
[33] S. Palan and C. Schitter. Prolific: A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17:22–27, 2018.
[34] K. Pearson. LIII. On Lines and Planes of Closest Fit to Systems of Points in Space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901.
[35] T. L. Pedersen and D. Robinson. gganimate: A Grammar of Animated Graphics. 2020.
[36] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2020.
[37] A. Sanchez, L. Raya, M. A. Mohedano-Munoz, and M. Rubio-Sanchez. Feature selection based on star coordinates plots associated with eigenvalue problems. The Visual Computer, 37(2):203–216, Feb. 2021. doi: 10.1007/s00371-020-01793-w
[38] A. Sanchez, C. Soguero-Ruiz, I. Mora-Jimenez, F. J. Rivas-Flores, D. J. Lehmann, and M. Rubio-Sanchez. Scaled radial axes for interactive visual feature selection: A case study for analyzing chronic conditions. Expert Systems with Applications, 100:182–196, June 2018. doi: 10.1016/j.eswa.2018.01.054
[39] M. Sedlmair, T. Munzner, and M. Tory. Empirical Guidance on Scatterplot and Dimension Reduction Technique Choices. IEEE Transactions on Visualization & Computer Graphics, (12):2634–2643, 2013.
[40] C. Sievert. Interactive Web-Based Data Visualization with R, plotly, and shiny. Chapman and Hall/CRC, 2020.
[41] J. P. Snyder. Map Projections–A Working Manual, vol. 1395. US Government Printing Office, 1987.
[42] N. Spyrison and D. Cook. spinifex: an R Package for Creating a Manual Tour of Low-dimensional Projections of Multivariate Data. The R Journal, 12(1):243, 2020. doi: 10.32614/RJ-2020-027
[43] J. W. Tukey. Exploratory Data Analysis, vol. 32. Pearson, 1977.
[44] L. van der Maaten and G. Hinton. Visualizing Data Using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
[45] J. A. Wagner Filho, M. F. Rey, C. M. Freitas, and L. Nedel. Immersive visualization of abstract information: An evaluation on dimensionally-reduced data scatterplots. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 483–490. IEEE, 2018.
[46] Y. Wang, J. Li, F. Nie, H. Theisel, M. Gong, and D. J. Lehmann. Linear Discriminative Star Coordinates for Exploring Class and Cluster Separation of High Dimensional Data. Computer Graphics Forum, 36(3):401–410, 2017. doi: 10.1111/cgf.13197
[47] H. Wickham, D. Cook, and H. Hofmann. Visualizing statistical models: Removing the blindfold. Statistical Analysis and Data Mining: The ASA Data Science Journal, 8(4):203–225, 2015. doi: 10.1002/sam.11271
[48] H. Wickham, D. Cook, H. Hofmann, and A. Buja. tourr: An R Package for Exploring Multivariate Data with Projections. Journal of Statistical Software, 40(2), 2011. doi: 10.18637/jss.v040.i02
[49] Y. Xie, J. J. Allaire, and G. Grolemund. R Markdown: The Definitive Guide. Chapman and Hall/CRC, Boca Raton, Florida, 2018.
[50] I. Yanai and M. Lercher. A hypothesis is a liability. Genome Biology, 21(1):231, 2020. doi: 10.1186/s13059-020-02133-w
CNAyT4oBgHgl3EQfR_eC/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

CNE3T4oBgHgl3EQfUQpw/vector_store/index.pkl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ad0bc01d0111bb9d8afb56bf267e004167b6304ab2d7537e4de1309dc4af04e6
size 189616

CtFJT4oBgHgl3EQfAiym/vector_store/index.pkl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a0750c3fd155fc5a7bbf7641e65d895013ee5f6913804656744dfbaa86e2a1df
size 94438

E9FKT4oBgHgl3EQfaS5z/content/tmp_files/2301.11807v1.pdf.txt ADDED
@@ -0,0 +1,901 @@
arXiv:2301.11807v1 [physics.gen-ph] 12 Jan 2023

Dyonic and magnetic black holes with rational nonlinear electrodynamics

S. I. Kruglov 1
1 E-mail: serguei.krouglov@utoronto.ca
Department of Physics, University of Toronto, 60 St. Georges St., Toronto, ON M5S 1A7, Canada
Canadian Quantum Research Center, 204-3002 32 Ave Vernon, BC V1T 2L7, Canada

Abstract
The principles of causality and unitarity are studied within rational nonlinear electrodynamics proposed earlier. We investigate dyonic and magnetized black holes and show that in the self-dual case, when the electric charge equals the magnetic charge, corrections to Coulomb’s law and Reissner–Nordström solutions are absent. In the case of the magnetic black hole, the Hawking temperature, the heat capacity and the Helmholtz free energy are calculated. It is shown that there are second-order phase transitions, and it is demonstrated that in some range of parameters the black holes are stable.

1 Introduction
Black holes (BHs) are real objects in the centers of many galaxies, and their physics is of great interest. Dyonic solutions for BHs with magnetic and electric charges were obtained in string [1]-[4] and supergravity [5]-[8] theories. Such solutions are used in the theory of superconductivity and thermodynamics [9], [10], [11]. In this paper we obtain dyonic and magnetic BH solutions in the framework of rational nonlinear electrodynamics proposed in [12]. The attractive feature of this nonlinear electrodynamics (NED) is the absence of singularities in the center of charges and their finite self-energy. Similar properties of NED were first observed by Born and Infeld in another NED [13]. Quantum electrodynamics with loop corrections also leads to NED [14]. The singularity problems are absent also in other NED models [15]-[19]. The general relativity (GR) and the thermodynamics of BHs with NED were considered in [20]-[35]. The phase transitions in electrically and magnetically charged BHs were investigated in [36]-[40]. It is worth noting that the universe acceleration also can be explained by NED coupled with GR [41]-[49].
The paper is organised as follows. In Sec. 2 we study the causality and unitarity principles. We obtain the dyonic solution in Sec. 3. In Sec. 4 we consider the magnetic BH. The metric function and its asymptotic as r → ∞ are found. It is shown that the magnetic mass of BHs is finite and there are no singularities of the Ricci scalar as r → ∞. The BH thermodynamics and the thermal stability of charged black holes are investigated in Sec. 5. We obtain the Hawking temperature, the heat capacity, the Helmholtz free energy and demonstrate that the phase transitions in BHs occur.
We use units with c = 1 and the metric signature diag(−1, 1, 1, 1).
2 The model and principles of causality and unitarity
Here, we consider rational NED, proposed in [12], with the Lagrangian density
L = −F/(2βF + 1),   (1)
where the parameter β ≥ 0 possesses the dimension of (length)⁴, F = (1/4)Fµν F^µν = (B² − E²)/2, and Fµν = ∂µAν − ∂νAµ is the field tensor. The symmetrical energy-momentum tensor is given by [34]
Tµν = −Fµ^α Fνα/(1 + 2βF)² − gµν L.   (2)
From Eq. (2) we obtain the energy density
ρ = T⁰₀ = F/(1 + 2βF) + E²/(1 + 2βF)².   (3)
For a healthy theory the general principles of causality and unitarity should hold. According to the causality principle the group velocity of excitations over the background has to be less than the light speed, and then tachyons are absent in the theory. The absence of ghosts is guaranteed by the unitarity principle. Both principles are satisfied for the case E · B = 0 if the following inequalities hold [50]:
LF ≤ 0,   LFF ≥ 0,   LF + 2FLFF ≤ 0,   (4)
where LF ≡ ∂L/∂F. Making use of Eq. (1) we obtain
LF = −1/(1 + 2βF)²,   LF + 2FLFF = (6βF − 1)/(1 + 2βF)³,   LFF = 4β/(1 + 2βF)³.   (5)
With the help of Eqs. (4) and (5), the principles of causality and unitarity take place if 6βF ≤ 1 (β ≥ 0). When E = 0, this requires βB² ≤ 1/3.
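The closed forms in Eq. (5) can be spot-checked with finite differences; a stdlib-only sketch at the arbitrary sample point β = 0.7, F = 0.05 (chosen so that 6βF < 1):

```python
def L(F, beta):
    """Rational NED Lagrangian density from Eq. (1): L = -F / (2*beta*F + 1)."""
    return -F / (2 * beta * F + 1)

def d(f, x, h=1e-4):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

beta, F = 0.7, 0.05  # arbitrary sample point

LF_num = d(lambda u: L(u, beta), F)
LFF_num = d(lambda u: d(lambda v: L(v, beta), u), F)

LF = -1 / (1 + 2 * beta * F) ** 2                      # Eq. (5)
LFF = 4 * beta / (1 + 2 * beta * F) ** 3               # Eq. (5)
combo = (6 * beta * F - 1) / (1 + 2 * beta * F) ** 3   # LF + 2*F*LFF

assert abs(LF_num - LF) < 1e-6
assert abs(LFF_num - LFF) < 1e-4
assert abs((LF_num + 2 * F * LFF_num) - combo) < 1e-4
assert LF <= 0 and LFF >= 0 and combo <= 0  # the inequalities of Eq. (4) hold here
print("Eq. (5) closed forms agree with finite differences")
```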
3 The dyonic solution
101 |
+
The action of NED coupled with GR is given by
|
102 |
+
I =
|
103 |
+
�
|
104 |
+
d4x√−g
|
105 |
+
�
|
106 |
+
1
|
107 |
+
16πGR + L
|
108 |
+
�
|
109 |
+
,
|
110 |
+
(6)
|
111 |
+
where G is Newton’s constant, 16πG ≡ M−2
|
112 |
+
P l , and MP l is the reduced Planck
|
113 |
+
mass. The Einstein equation is
|
114 |
+
Rµν − 1
|
115 |
+
2gµνR = −8πGTµν.
|
116 |
+
(7)
|
117 |
+
Varying action (6) on electromagnetic potentials we obtain the fields equation
|
118 |
+
for electromagnet fields
|
119 |
+
∂µ
|
120 |
+
�√−gF µνLF
|
121 |
+
�
|
122 |
+
= 0.
|
123 |
+
(8)
|
124 |
+
We consider the static and spherically symmetric metric with the line element
|
125 |
+
ds2 = −A(r)dt2 +
|
126 |
+
1
|
127 |
+
A(r)dr2 + r2(dϑ2 + sin2 ϑdφ2),
|
128 |
+
(9)
|
129 |
+
where the metric function is given by
|
130 |
+
A(r) = 1 − 2M(r)G
|
131 |
+
r
|
132 |
+
,
|
133 |
+
(10)
|
134 |
+
3
|
135 |
+
|
136 |
+
and the mass function is

M(r) = m0 + ∫₀^r ρ(r)r² dr = m0 + mel − ∫_r^∞ ρ(r)r² dr. (11)

The total mass of the BH is m = m0 + mel, where m0 is the Schwarzschild mass and mel = ∫₀^∞ ρ(r)r² dr is the electromagnetic mass. The general solutions of the field equations, found in [37], [38], are given by

B² = qm²/r⁴,  E² = qe²/(LF² r⁴), (12)

where qm and qe are the magnetic and electric charges, respectively. With the help of Eqs. (1) and (12) one finds

E² = qe²(1 + 2βF)⁴/r⁴, (13)

βF = a − b(1 + 2βF)⁴,  a = βqm²/(2r⁴),  b = βqe²/(2r⁴), (14)

where we introduced the unitless variables a and b. Defining the unitless value x ≡ βF, we obtain from Eq. (14) the equation

b(2x + 1)⁴ + x − a = 0. (15)

Using the unitless variables t = r/⁴√(βqe²) and n = qm²/qe², one finds from Eq. (15) the equation for y = 2x + 1:

y⁴ + t⁴y − n − t⁴ = 0. (16)
The real dyonic solution to Eq. (16) is

y = √[ ⁴√3 t⁴/(4 ⁴√(n + t⁴) √sinh(ϕ/3)) − √(n + t⁴) sinh(ϕ/3)/√3 ] − ⁴√(n + t⁴) √sinh(ϕ/3)/⁴√3,

sinh ϕ = 3^{3/2} t⁸/(16(n + t⁴)^{3/2}). (17)

Putting n = 0 in Eq. (17) we come to the solution corresponding to the electrically charged BH [34]. We find the self-dual solution at qe = qm (a = b)
from Eq. (15). Then x = 0 (F = 0, E = B), E = q/r² (q ≡ qe = qm), and with the help of Eqs. (3) and (11) we obtain the mass function

M(r) = m − ∫_r^∞ ρ(r)r² dr = m − q²/r. (18)

Making use of Eq. (10) one finds the metric function

A(r) = 1 − 2mG/r + 2q²G/r². (19)

The metric function (19) corresponds to the Reissner–Nordström (RN) solution with 2q² = qe² + qm².
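As a numerical cross-check (a sketch, not part of the paper; the (t, n) test points are arbitrary), the closed-form real root (17) of the quartic (16) can be compared with the roots returned by numpy:

```python
import numpy as np

# Cross-check Eq. (17), the real root of y^4 + t^4*y - n - t^4 = 0
# (Eq. (16)), against numpy's companion-matrix root finder.
def y_closed(t, n):
    c = n + t**4
    phi = np.arcsinh(3**1.5 * t**8 / (16.0 * c**1.5))
    s = np.sinh(phi / 3.0)
    inner = 3**0.25 * t**4 / (4.0 * c**0.25 * np.sqrt(s)) \
            - np.sqrt(c) * s / np.sqrt(3.0)
    return np.sqrt(inner) - np.sqrt(s) * c**0.25 / 3**0.25

def y_numeric(t, n):
    roots = np.roots([1.0, 0.0, 0.0, t**4, -(n + t**4)])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()          # the positive real root

for t, n in [(0.5, 2.0), (1.0, 1.0), (2.0, 0.5)]:
    print(t, n, y_closed(t, n), y_numeric(t, n))
```

For t = n = 1 the quartic reduces to y⁴ + y − 2 = 0, whose positive root is exactly y = 1, a convenient sanity check.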
4 The magnetic black hole

Let us consider the static magnetic BH.² Taking into account that qe = 0, F = qm²/(2r⁴), we obtain from Eq. (3) the magnetic energy density

ρM = B²/(2(βB² + 1)) = qm²/(2(r⁴ + βqm²)). (20)

With the help of Eqs. (11) and (20) one finds the mass function

M(x) = m0 + qm^{3/2}/(8√2 β^{1/4}) [ ln((x² − √2x + 1)/(x² + √2x + 1)) + 2 arctan(√2x + 1) − 2 arctan(1 − √2x) ], (21)

where x = r/⁴√(βqm²). The BH magnetic mass is given by

mM = ∫₀^∞ ρM(r)r² dr = πqm^{3/2}/(4√2 β^{1/4}) ≈ 0.56 qm^{3/2}/β^{1/4}. (22)
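Equation (22) can be checked by direct quadrature (a sketch, not from the paper; qm and β are arbitrary test values, and the substitution r = u/(1 − u) maps the half-line to a finite interval):

```python
import numpy as np

# Trapezoid quadrature of m_M = int_0^inf rho_M r^2 dr, Eq. (22);
# qm and beta are arbitrary test values.
qm, beta = 1.3, 0.4

u = np.linspace(0.0, 1.0, 200001)[:-1]   # drop u = 1 (r = infinity)
r = u / (1.0 - u)
jac = 1.0 / (1.0 - u) ** 2               # dr/du for r = u/(1 - u)
f = qm**2 * r**2 * jac / (2.0 * (r**4 + beta * qm**2))

du = u[1] - u[0]
m_numeric = du * (f.sum() - 0.5 * (f[0] + f[-1]))
m_exact = np.pi * qm**1.5 / (4.0 * np.sqrt(2.0) * beta**0.25)
print(m_numeric, m_exact)
```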
²In the paper M.-S. Ma, Ann. Phys. 362, 529 (2015), the author also considered a static magnetic BH based on the NED proposed in [12]. However, here we use unitless variables that are more convenient for the analysis of the BH thermodynamics. In addition, we analyse the more general case when the BH, besides the electromagnetic mass, possesses a Schwarzschild mass (of non-electromagnetic nature).
The Schwarzschild mass m0 is a free parameter; at qm = 0 one has mM = 0, and we arrive at the Schwarzschild BH. Making use of Eq. (10) we obtain the metric function

A(x) = 1 − 2m0G/(⁴√(βqm²) x) − qmG/(4√2 √β x) [ ln((x² − √2x + 1)/(x² + √2x + 1)) + 2 arctan(√2x + 1) − 2 arctan(1 − √2x) ]. (23)

As r → ∞ the metric function (23) becomes

A(r) = 1 − 2mG/r + qm²G/r² + O(r⁻⁵),  r → ∞, (24)

where m = m0 + mM. According to Eq. (24), the correction to the RN solution is of the order O(r⁻⁵). At m0 = 0 and r → 0, from Eq. (23) one finds the asymptotic with a de Sitter core

A(r) = 1 − Gr²/(3β) + Gr⁶/(7β²qm²) − Gr¹⁰/(11β³qm⁴) + O(r¹²),  r → 0. (25)

The solution (25) is regular because A(r) → 1 as r → 0. When m0 ≠ 0 the solution is singular and A(r) → ∞. Let us introduce the unitless constants C = m0G/(β^{1/4}√qm) and B = qmG/√β. Then the horizon radii, which are the roots of the equation A(r) = 0 (x± = r±/(√qm β^{1/4})), are given in Tables 1 and 2.
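The de Sitter-core expansion (25) can be tested against a direct quadrature of A(r) = 1 − 2GM(r)/r (a sketch, not from the paper, in units G = qm = β = 1; the leading coefficient Gr²/(3β) follows from ρM → 1/(2β) as r → 0):

```python
import numpy as np

# Compare A(r) = 1 - 2*M(r)/r (m0 = 0, G = qm = beta = 1), with
# M(r) obtained by trapezoid quadrature of rho_M(s)*s^2, against
# the small-r expansion (25) at a small radius r0.
r0 = 0.3
s = np.linspace(0.0, r0, 100001)
f = s**2 / (2.0 * (s**4 + 1.0))          # rho_M(s) * s^2
ds = s[1] - s[0]
M = ds * (f.sum() - 0.5 * (f[0] + f[-1]))
A_numeric = 1.0 - 2.0 * M / r0
A_series = 1.0 - r0**2 / 3.0 + r0**6 / 7.0 - r0**10 / 11.0
print(A_numeric, A_series)
```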
The plots of the metric function (23) are depicted in Figs. 1 and 2.

Table 1: B = 1
C    0.6   0.7   0.8   0.9   1     2     3     4     5
x+   1.75  2.02  2.27  2.52  2.75  4.91  6.97  9.00  11.02
According to Fig. 1, at m0 ≠ 0 (B = 1) there is only one horizon. For a bigger mass (a greater parameter C) the horizon radius increases. Figure 2 shows that there are no horizons at m0 = 0, B < 3.173, an extreme horizon occurs at m0 = 0, B ≈ 3.173, and two horizons exist at m0 = 0 and B > 3.173.

Making use of Eqs. (2) and (7) at E = 0, we obtain the Ricci scalar

R = 8πG T^µ_µ = 16πGβqm⁴/(r⁴ + βqm²)². (26)

The Ricci scalar approaches zero as r → ∞ and spacetime becomes flat.
Table 2: m0 = 0
B    3.173  3.2   3.5   4     4.5   5     6     7     8
x−   1.68   1.52  1.21  1.03  0.92  0.85  0.75  0.68  0.63
x+   1.68   1.87  2.49  3.19  3.82  4.42  5.59  6.74  7.87
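The tabulated horizon radii can be reproduced by bisection on A(x) = 0 (a sketch, not from the paper; A(x) is Eq. (23) rewritten with the unitless constants C = m0G/(β^{1/4}√qm) and B = qmG/√β):

```python
import numpy as np

# Horizon radii as roots of A(x) = 0, Eq. (23), in unitless form.
def bracket(x):
    s2 = np.sqrt(2.0)
    return (np.log((x**2 - s2*x + 1.0) / (x**2 + s2*x + 1.0))
            + 2.0*np.arctan(s2*x + 1.0) - 2.0*np.arctan(1.0 - s2*x))

def A(x, B, C):
    return 1.0 - 2.0*C/x - B*bracket(x)/(4.0*np.sqrt(2.0)*x)

def bisect(f, a, b):
    fa = f(a)
    for _ in range(80):
        m = 0.5*(a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5*(a + b)

x_plus_t1 = bisect(lambda x: A(x, B=1.0, C=1.0), 2.0, 4.0)   # Table 1 gives 2.75
x_minus_t2 = bisect(lambda x: A(x, B=5.0, C=0.0), 0.3, 1.5)  # Table 2 gives 0.85
x_plus_t2 = bisect(lambda x: A(x, B=5.0, C=0.0), 2.0, 6.0)   # Table 2 gives 4.42
print(round(x_plus_t1, 2), round(x_minus_t2, 2), round(x_plus_t2, 2))
```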
Figure 1: The plot of the function A(x) for B = 1. The solid curve is for C = 1, the dashed curve corresponds to C = 2, and the dash-dotted curve corresponds to C = 3.
5 The black hole thermodynamics

To study the black hole thermodynamics and the thermal stability of magnetic BHs, we consider the Hawking temperature

TH = κ/(2π) = A′(r+)/(4π), (27)

where κ is the surface gravity and r+ is the event horizon radius. With the help of Eqs. (10) and (11) one finds the relations

A′(r) = 2GM(r)/r² − 2GM′(r)/r,  M′(r) = r²ρ,  M(r+) = r+/(2G). (28)

Making use of Eqs. (3), (27) and (28) we obtain the Hawking temperature

TH = (1/(4π)) [1/r+ − 2Gr+ρ(r+)] = 1/(4πβ^{1/4}√qm) [1/x+ − Gqm x+/(√β(1 + x+⁴))]. (29)
Figure 2: The plot of the function A(x) for m0 = 0. The solid curve is for B = 2, the dashed curve corresponds to B = 3.175, and the dash-dotted curve corresponds to B = 5.
Using the equation M(r+) = r+/(2G) and Eq. (21) we find

Gqm/√β = 4√2(x+ − 2C)/D,
D ≡ ln((x² − √2x + 1)/(x² + √2x + 1)) − 2 arctan(1 − √2x) + 2 arctan(1 + √2x). (30)

Substituting Eq. (30) into Eq. (29), one obtains the Hawking temperature as follows:

TH = 1/(4πβ^{1/4}√qm) [1/x+ − 4√2(x+ − 2C)x+/((1 + x+⁴)D)]. (31)
The plots of the function TH √qm β^{1/4} versus x+ are depicted in Figs. 3 and 4. According to Fig. 3, the temperature is positive everywhere for the case C ≠ 0 (m0 ≠ 0). Figure 4 shows that the Hawking temperature for C = 0 (m0 = 0) is positive for x+ > 1.679 and vanishes at x+ ≈ 1.679. The BH is unstable when the temperature is negative. From Eq. (30) we obtain the value Gqm/√β = 3.173 corresponding to x+ = 1.679. By studying the signs of the heat capacity and the Helmholtz free energy, we can observe the different stability phases of the BH [51]. Making use of the Hawking entropy of the BH, S = Area/(4G) = πr+²/G = πx+²qm√β/G, we find the heat capacity

Cq = TH (∂S/∂TH)_q = TH (∂S/∂x+)/(∂TH/∂x+) = 2πqm√β x+ TH/(G ∂TH/∂x+). (32)
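The quoted values x+ ≈ 1.679 and Gqm/√β ≈ 3.173 can be recovered numerically (a sketch, not from the paper): for C = 0 the bracket of Eq. (31) vanishes where (1 + x⁴)D(x) = 4√2 x³, and Eq. (30) then gives B = Gqm/√β:

```python
import numpy as np

# Root of the Hawking-temperature bracket in Eq. (31) for C = 0,
# and the corresponding B = G*qm/sqrt(beta) from Eq. (30).
def D(x):
    s2 = np.sqrt(2.0)
    return (np.log((x**2 - s2*x + 1.0) / (x**2 + s2*x + 1.0))
            - 2.0*np.arctan(1.0 - s2*x) + 2.0*np.arctan(1.0 + s2*x))

def th_bracket(x):                       # bracket of Eq. (31), C = 0
    return 1.0/x - 4.0*np.sqrt(2.0)*x**2 / ((1.0 + x**4) * D(x))

a, b = 1.2, 2.5                          # th_bracket(a) < 0 < th_bracket(b)
for _ in range(80):
    m = 0.5*(a + b)
    if th_bracket(a) * th_bracket(m) <= 0.0:
        b = m
    else:
        a = m
x_zero = 0.5*(a + b)
B_zero = 4.0*np.sqrt(2.0)*x_zero / D(x_zero)
print(round(x_zero, 3), round(B_zero, 3))
```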
Figure 3: The plot of the function TH √qm β^{1/4} vs x+. The solid curve is for C = 1, the dashed curve corresponds to C = 2, and the dash-dotted curve corresponds to C = 3.
According to Eq. (32), the heat capacity possesses a singularity when the Hawking temperature has an extremum (∂TH/∂x+ = 0). The plots of the heat capacity versus the variable x+ for different parameters C are depicted in Figs. 5, 6, and 7. Figure 5 shows the Schwarzschild behaviour of the heat capacity for C ≠ 0 (m0 ≠ 0): it is negative where ∂TH/∂x+ < 0. As a result, the BHs are unstable for the case m0 ≠ 0. In accordance with Fig. 6, the BH is unstable at 0 < x+ < 1.679 because the heat capacity is negative. Figure 7 shows a singularity in the heat capacity at the point x+ ≈ 3 where a second-order phase transition occurs. When m0 = 0 the heat capacity is positive in the range 1.679 < x+ < 3 and the BH is stable.
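The location of the heat-capacity singularity seen in Fig. 7 can be estimated by locating the extremum of the Hawking-temperature bracket of Eq. (31) for C = 0 on a grid (a sketch, not from the paper):

```python
import numpy as np

# For C = 0, C_q in Eq. (32) diverges where dT_H/dx_+ = 0;
# locate the maximum of the bracket of Eq. (31) on a grid.
def D(x):
    s2 = np.sqrt(2.0)
    return (np.log((x**2 - s2*x + 1.0) / (x**2 + s2*x + 1.0))
            - 2.0*np.arctan(1.0 - s2*x) + 2.0*np.arctan(1.0 + s2*x))

def th_bracket(x):                       # bracket of Eq. (31), C = 0
    return 1.0/x - 4.0*np.sqrt(2.0)*x**2 / ((1.0 + x**4) * D(x))

xs = np.linspace(2.0, 4.5, 2501)
x_star = xs[np.argmax(th_bracket(xs))]
print(round(float(x_star), 2))           # near 3, matching Fig. 7
```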
To complete the analysis of phase transitions we consider the Helmholtz free energy, which is given by

F = m − TH S. (33)

The mass of the BH, m, plays the role of the internal energy, and the Hawking entropy is S = πr+²/G. Making use of Eqs. (22), (31) and (33) we obtain
GF/(√qm β^{1/4}) = B(C + π/(4√2)) − x+/4 + √2 x+²(x+ − 2C)/((x+⁴ + 1)D). (34)

Substituting B = qmG/√β from Eq. (30) into Eq. (34) we find

GF/(√qm β^{1/4}) = (C + π/(4√2)) · 4√2(x+ − 2C)/D − x+/4 + √2 x+²(x+ − 2C)/((x+⁴ + 1)D). (35)
Figure 4: The plot of the function TH √qm β^{1/4} vs x+ for C = 0 (m0 = 0).
The plots of the unitless reduced free energy GF/(√qm β^{1/4}) vs. x+ are depicted in Figs. 8 and 9. The BHs with F > 0, Cq < 0 are unstable, and the BHs with F < 0, Cq > 0 are stable. In accordance with Figs. 5-9, there are other phases with F > 0, Cq > 0 and with F < 0, Cq < 0. In the case F < 0, Cq < 0 the BHs are less energetic than pure radiation and, as a result, do not decay through tunneling. For large BH masses (C > 1) this phase holds. Because the heat capacities are negative, the BH temperature decreases as the BH mass increases. Such phases are also realised in another model [52].
6 Conclusion

The correspondence principle holds in our NED model, since for weak fields the model reduces to Maxwell's electrodynamics. It was demonstrated that at βB² ≤ 1/3 (E = 0) the principles of causality and unitarity hold. In this model the singularity of the electric field at the center of charges is absent, and the maximum electric field at the origin is E(0) = 1/√β. The dyonic and magnetic BHs in GR were studied. It was shown that in the self-dual case (qe = qm) the corrections to Coulomb's law and to the RN solution are absent. The Ricci scalar has no singularity, and as r → ∞ spacetime becomes flat.
The thermodynamics and the thermal stability of magnetized BHs were investigated. The Hawking temperature, the heat capacity and the Helmholtz free energy of BHs were calculated. It was demonstrated that the heat capacity diverges at some event horizon radii r+ (x+), for the case when the total BH mass is the magnetic mass, and second-order phase transitions occur. We showed that there is a new stability region of BH solutions where the heat capacity and the free energy are both negative. In this case BHs are less energetic than pure radiation and do not decay via tunneling.

Figure 5: The plot of the function GCq/(qm²β)^{1/2} vs x+. The solid curve is for C = 1, the dashed curve corresponds to C = 2, and the dash-dotted curve corresponds to C = 3.

Figure 6: The plot of the function GCq/(qm²β)^{1/2} vs x+ for C = 0.
References

[1] A. D. Shapere, S. Trivedi, and F. Wilczek, Mod. Phys. Lett. A 6, 2677 (1991).
Figure 7: The plot of the function GCq/(qm²β)^{1/2} vs x+ for C = 0.
Figure 8: The plot of the function GF/(√qm β^{1/4}) vs. x+. The dashed curve corresponds to C = 2, the solid curve is for C = 1, and the dash-dotted curve corresponds to C = 4.
[2] S. Mignemi, Phys. Rev. D 51, 934 (1995).

[3] M. Cvetic and A. A. Tseytlin, Phys. Rev. D 53, 5619 (1996); Erratum: Phys. Rev. D 55, 3907 (1997).

[4] D. P. Jatkar, S. Mukherji, and S. Panda, Nucl. Phys. B 484, 223 (1997).

[5] A. H. Chamseddine and W. A. Sabra, Phys. Lett. B 485, 301 (2000).

[6] D. D. K. Chow and G. Compere, Phys. Rev. D 89, 065003 (2014).

[7] P. Meessen, T. Ortin, and P. F. Ramirez, JHEP 1710, 066 (2017).

[8] H. Lu, Y. Pang, and C. N. Pope, JHEP 1311, 033 (2013).
Figure 9: The plot of the function GF/(√qm β^{1/4}) vs. x+. The dashed curve corresponds to C = 0.1, the solid curve is for C = 0, and the dash-dotted curve corresponds to C = 0.2.
[9] S. A. Hartnoll and P. Kovtun, Phys. Rev. D 76, 066001 (2007).

[10] S. A. Hartnoll, P. K. Kovtun, M. Muller, and S. Sachdev, Phys. Rev. B 76, 144502 (2007).

[11] S. Dutta, A. Jain, and R. Soni, JHEP 1312, 060 (2013).

[12] S. I. Kruglov, Ann. Phys. 353, 299 (2015).

[13] M. Born and L. Infeld, Proc. R. Soc. Lond. 144, 425 (1934).

[14] W. Heisenberg and H. Euler, Z. Phys. 98, 714 (1936).

[15] H. H. Soleng, Phys. Rev. D 52, 6178 (1995).

[16] D. M. Gitman and A. E. Shabad, Eur. Phys. J. C 74, 3186 (2014).

[17] C. V. Costa, D. M. Gitman, and A. E. Shabad, Phys. Scripta 90, 074012 (2015).

[18] S. I. Kruglov, Commun. Theor. Phys. 66, 59 (2016).

[19] S. I. Kruglov, Mod. Phys. Lett. A 32, 1750201 (2017).

[20] R. Pellicer and R. J. Torrence, J. Math. Phys. 10, 1718 (1969).

[21] H. P. de Oliveira, Class. Quant. Grav. 11, 1469 (1994).

[22] E. Ayón-Beato and A. García, Phys. Rev. Lett. 80, 5056 (1998).

[23] K. A. Bronnikov, V. N. Melnikov, G. N. Shikin, and K. P. Staniukovich, Ann. Phys. 118, 84 (1979).

[24] K. A. Bronnikov, Phys. Rev. D 63, 044005 (2001).

[25] K. A. Bronnikov, Phys. Rev. Lett. 85, 4641 (2000).

[26] K. A. Bronnikov, G. N. Shikin, and E. N. Sibileva, Grav. Cosmol. 9, 169 (2003).

[27] A. Burinskii and S. R. Hildebrandt, Phys. Rev. D 65, 104017 (2002).

[28] J. Diaz-Alonso and D. Rubiera-Garcia, Phys. Rev. D 81, 064021 (2010).

[29] N. Breton, Gen. Rel. Grav. 37, 643 (2005).

[30] M. Novello, S. E. Perez Bergliaffa, and J. M. Salim, Class. Quant. Grav. 17, 3821 (2000).

[31] R. Garcia-Salcedo, T. Gonzalez, and I. Quiros, Phys. Rev. D 89, 084047 (2014).

[32] J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 83, 124005 (2011).

[33] L. Balart and E. C. Vagenas, Phys. Rev. D 90, 124045 (2014).

[34] S. I. Kruglov, Phys. Rev. D 94, 044026 (2016).

[35] S. I. Kruglov, Ann. Phys. (Berlin) 528, 588 (2016).

[36] H. Yajima and T. Tamaki, Phys. Rev. D 63, 064007 (2001).

[37] K. A. Bronnikov, Grav. Cosmol. 23, 343 (2017).

[38] K. A. Bronnikov, Int. J. Mod. Phys. D 27, 1841005 (2018).

[39] S. I. Kruglov, Int. J. Mod. Phys. A 33, 1850023 (2018).

[40] S. I. Kruglov, Ann. Phys. 383, 550 (2017).

[41] R. García-Salcedo and N. Breton, Int. J. Mod. Phys. A 15, 4341 (2000).

[42] C. S. Camara, M. R. de Garcia Maia, J. C. Carvalho, and J. A. S. Lima, Phys. Rev. D 69, 123504 (2004).

[43] E. Elizalde, J. E. Lidsey, S. Nojiri, and S. D. Odintsov, Phys. Lett. B 574, 1 (2003).

[44] M. Novello, S. E. Perez Bergliaffa, and J. M. Salim, Phys. Rev. D 69, 127301 (2004).

[45] M. Novello, E. Goulart, J. M. Salim, and S. E. Perez Bergliaffa, Class. Quant. Grav. 24, 3021 (2007).

[46] D. N. Vollick, Phys. Rev. D 78, 063524 (2008).

[47] S. I. Kruglov, Phys. Rev. D 92, 123523 (2015).

[48] S. I. Kruglov, Int. J. Mod. Phys. A 32, 1750071 (2017).

[49] S. I. Kruglov, Int. J. Mod. Phys. D 25, 1640002 (2016).

[50] A. E. Shabad and V. V. Usov, Phys. Rev. D 83, 105006 (2011).

[51] S. W. Hawking and D. N. Page, Commun. Math. Phys. 87, 577 (1983).

[52] J. A. R. Cembranos, A. Cruz-Dombriz, and J. Jarillo, Universe 1, 412 (2015).
E9FKT4oBgHgl3EQfaS5z/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
ENE4T4oBgHgl3EQf6g5Z/content/tmp_files/2301.05332v1.pdf.txt
ADDED
@@ -0,0 +1,2902 @@
A PURE JUMP MODEL FOR THE VALUATION OF OPTIONS ON A CREDIT INDEX

YOSHIHIRO SHIRAI

Department of Mathematics, University of Maryland, College Park
Abstract. A two-dimensional pure jump process is proposed to model the evolution of the risk-free rate and default intensities for the purpose of valuing option contracts on a credit index. Time evolution in credit markets is assumed to follow a gamma process evaluated at calendar time, in order to reflect different levels of business activity in the credit and Treasury markets, which ultimately reflect differences in the preferences and incentives of credit product investors, as well as in the structure of the credit market itself, relative to their respective counterparts in the Treasury market. Formulas for the characteristic function, zero coupon bonds and moments of the process are derived, and its parameters are calibrated to market prices of options on a credit index. Model- and market-implied moments of credit spreads are estimated and compared.
1. Introduction

This paper proposes a new valuation method for credit index swaptions (henceforth, CDXOs), which are options to enter a credit index swap at a predetermined date. The current literature (see Brigo & Morini (2011) and Armstrong & Rutkowski (2009), as well as Pedersen (2003) and Doctor & Goulden (2007)) focuses on developing a Black-type formula for the purpose of retrieving the CDXO price from its quotation, which is expressed in terms of the underlying spread, and, in particular, on the issue of including the so-called front end protection in the CDXO payoff.[1] Apart from this formulation, and to the best of the author's knowledge, there are no generally accepted and/or standard valuation methods for the pricing of credit index derivatives that also match the statistical features of credit spreads. The main contribution of this paper is then to specify an underlying Markov process X that ultimately defines both the short rate and the credit spread dynamics and is such that:

(i) a reliable and fast numerical method can be implemented to obtain CDX and CDXO prices;
(ii) the model parameters can be calibrated to fit the option price surface sufficiently well; and
(iii) the model-implied statistical properties of the credit spread fit those implied by the market.

We assume in particular that X is the two-dimensional process (r, λ), where r is the short rate and λ the default intensity process of each entity in the underlying index. The default time of entity i is then modeled as the first time the integrated default intensity process Λ reaches a threshold εi, where ε1, ..., εn are independent copies of an exponential random variable and n is the number of entities in the index.

E-mail address: [email protected].
Date: January 16, 2023.
2020 Mathematics Subject Classification. 60G18, 60G51, 91G20.
Key words and phrases. Multiple Gamma Processes, Credit Index Options, Credit Spreads.

[1] Applying the conversion formula requires several inputs, such as the CDXO annuity (also referred to as the hypothetical bond of the CDXO), which are typically unavailable to outsiders. Testing and calibration of the model proposed here with real market data is made possible thanks to time series of CDXO prices provided by Morgan Stanley.

arXiv:2301.05332v1 [q-fin.PR] 12 Jan 2023
We take mean-reverting processes for r and λ. Randomness of the rate r is represented by a gamma process gr, whereas for λ it is the sum of a double gamma process gλ ◦ gτ and a scalar multiple ρ of gr itself. Multiple gamma processes were first investigated in Madan et al. (2020) for the purpose of randomizing the speed at which jumps occur. To our knowledge, ours is the first application of a pure jump process with infinite arrival rate in credit risk modeling. Our focus on pure jump models is also motivated by the possibility that such a framework offers to apply the theory of dynamic spectral risk measures (see Madan et al. (2017)), thus introducing nonlinearity in the valuation of credit index products. Because of this, and although the exploration of the applications of nonlinear valuations of credit index derivatives is left to future research, we ignore here the relatively small accounting issues related to the front end protection, and assume that no defaults can occur before the time T0 at which the forward/swaption contract expires.

Default times as above, known as doubly stochastic random times, are commonly used in credit risk modeling (see Bielecki & Rutkowski (2002) and McNeil et al. (2005)), and their development goes back to Duffie & Singleton (1999), Lando (1998), Jarrow et al. (1997) and Madan & Unal (1998). Common specifications for the rate and intensity processes are the affine models developed by Duffie & Kan (1996) for diffusion models, and by Duffie et al. (2000) and Duffie & Garleanu (2001) for basic affine jump-diffusion models. We mention that reduced form models with non doubly stochastic random times are also possible, although such a direction is not investigated here. For their development see, e.g., Kusuoka (1999) and Elliott et al. (2000).

We derive the Levy measure of the process (r, λ), based on which prices of zero coupon bonds can be computed analytically. Moments, the stationary distribution and the characteristic exponent of the random vector (rt, λt) for t ≥ 0 are also computed analytically, and level curves of its bivariate density for different parameters are plotted using a 2D version of the FFT algorithm (similar to the one in Hurd & Zhou (2010)). We then derive analytical formulas for the discounted payoff of credit index swaps, and the partial integro-differential equation (PIDE) for credit index swaption prices, together with a finite difference scheme for its solution. Calibration is performed, for each maturity, to all traded strikes of options on the IG CDX index as of 2 January 2020. We do not perform a stability analysis, but we show that, for a given set of parameters obtained from calibration, the numerical solution is close enough to the prices obtained via Monte Carlo simulation.

Finally, we compare market- and model-implied summary statistics of credit spreads for a specific maturity. As shown in Carr & Madan (2001), the variance, skewness and kurtosis of an equity position under the risk neutral measure can be replicated with a continuum of option contracts. Here it is shown that the variance, skewness and kurtosis of the spread of a credit index can be replicated with a continuum of credit index swaptions under the measure QA corresponding to choosing as numeraire the annuity of the index. Our model is then calibrated to market prices for all strikes and a specific maturity over the period from 2 January 2020 through 5 June 2020, and market- and model-implied variance, skewness and kurtosis of the credit spreads are compared. The closer these are, the better the model approximates the market-implied densities. The results of our analysis show that our model is generally able to capture the positive skewness and leptokurtic features of CDX spreads under the measure QA, and the model- and market-implied moments are of the same magnitude. We observe, in particular, that the replication of credit spreads with option contracts under the measure QA is a novel, model-free way to extract statistical properties of credit spreads from market prices of options, allowing the validation of any model of credit spreads.

The rest of the paper is organized as follows. In section 2 we review the basics of credit index derivatives and their market, and in section 3 the fundamental mathematical framework is introduced. In section 4 we specify the pure jump dynamics of the short rate and default intensity, and derive the characteristic exponent of the underlying Markov process and the valuation PIDE. A simple finite difference scheme is tested in section 5, and a comparison of model and market results is shown in section 6. Section 7 concludes.
2. Credit Index Derivatives and their Market

The last few decades saw a spectacular rise in trading volumes of credit derivatives, such as credit default swaps, credit index swaps, single-tranche collateralized debt obligations, etc. One reason for this is that the main contract features of credit default swaps, which form the basic asset class in credit markets, have been standardized,[2] thus allowing a relatively easy implementation of hedging and speculative strategies and making the credit default swap market more liquid than that of corporate bonds. However, the details of credit derivative contracts remain complex, and satisfactory valuation methods for credit index forwards and swaptions are yet to be determined.

To introduce the mathematical problem, recall that a credit default swap (CDS) is an over-the-counter contract between two counterparties - the protection buyer and the protection seller - in which protection against the risk of default of an underlying entity (usually a company issuing bonds in the debt market) is provided by the seller to the buyer. The latter pays the former a predetermined premium K (defined as a credit spread multiplied by the contract's notional) at regular intervals until the contract expires, and obtains a contingent payment from the seller triggered by any credit event (such as default, restructuring, downgrade, etc.) concerning the underlying entity.

A credit index swap (CDX) can be thought of as a portfolio of credit default swaps. There are two families of credit indices: the CDX, which refers to American companies, and the iTraxx, which refers to European or to Asian and Australian ones. Each family is composed of different indices, each of which represents a different class of credit quality. A summary of the main credit indices is shown in table 1. It is important to observe that, in order to reflect changes in the credit quality of the constituents, the composition of most credit indices changes every six months, on March 20 and September 20. Each series of an index corresponds to a specific roll date, and older series continue to trade, but their market is far less liquid (see McNeil et al. (2005)).
Name                | Pool size | Region        | Credit Quality
CDX.NA.IG           | 125       | North America | Investment Grade
CDX.NA.IG.HVOL      | 30        | North America | Low-quality Investment Grade
CDX.NA.HY           | 100       | North America | Speculative Grade
iTraxx Europe       | 125       | Europe        | Investment Grade
iTraxx Europe HiVol | 30        | Europe        | Low-quality Investment Grade

Table 1. Major credit indices and their characteristics (source: McNeil et al. (2005)).
Similarly to a CDS, the cash flow associated with a credit index swap again consists of a premium payment leg (with payments made by the protection buyer) and a default payment leg (with payments made by the protection seller). Premium payments, which are defined as a credit spread multiplied by the index annuity (a measure of the number of underlying issuers for which a credit event has not yet occurred), are due at deterministic dates T0 < T1 < ... < TM, where TM is the maturity of the contract and T0 the inception date (for forward-start contracts T0 > 0). A credit event concerning any of the underlying entities triggers a payment by the seller. Standardized credit index swaps have quarterly premium payments, and maturity at issuance is three, five, seven or ten years, with five years being the most liquidly traded maturity.

There are two main differences between a CDX and a (portfolio of) CDS: (1) the contingent payment of a CDX is the same for each underlying entity, and (2) a CDX does not become an empty contract after a single credit event occurs, so the expected discounted value of the losses cumulated before the inception date (i.e. the above-mentioned front end protection) is included in the price.

[2] For instance, banks and financial institutions typically utilize the ISDA Master Agreement, published by the International Swaps and Derivatives Association, as the framework agreement, such that future transactions between the parties to the agreement are mostly defined by it, leaving only specific points of each transaction open to negotiation.
3. Review and Assumptions

3.1. Hazard Rates and Doubly Stochastic Random Times.

Definition 3.1. Suppose that:
i. (Ω, F, Q) is a probability space;
ii. {Ft}t≥0 is a filtration on (Ω, F, Q);[3]
iii. τ : Ω → [0, ∞] is F-measurable and {Ht}t≥0 := σ({1{τ>t}}t≥0), so that τ is an Ht-stopping time;
iv. Λ(t) = −log(Q(τ > t | F∞)) is strictly increasing, finite (i.e. Q(τ > t | F∞) > 0 a.s. for every t > 0), Ft-adapted and absolutely continuous, with Λ(t) = ∫_0^t λ(s) ds.
Then τ is called a doubly stochastic random time with Ft-conditional hazard rate process λ.

Remark 3.2. Since Λ(t) is Ft-adapted, we have Q(τ ≤ t | Ft) = Q(τ ≤ t | F∞) for all t ≥ 0.

Lemma 3.3. Suppose X is a standard exponentially distributed random variable on (Ω, F, Q), independent of F∞, i.e. Q(X ≤ t | F∞) = 1 − e^{−t} for every t ≥ 0. Let λ(t) be a positive Ft-adapted stochastic process such that Λ(t) = ∫_0^t λ(s) ds is increasing and finite for every t > 0. Let τ := inf{t ≥ 0 : Λ(t) ≥ X}. Then τ is a doubly stochastic random time with hazard rate process λ(t).

Proof. By definition, {τ > t} = {Λ(t) < X}. Since Λ(t) is F∞-measurable and X is independent of F∞, we have Q(τ > t | F∞) = Q(Λ(t) < X | F∞) = e^{−Λ(t)}, which proves the result. □
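The threshold construction of Lemma 3.3 translates directly into a simulation recipe: draw a standard exponential threshold, accumulate Λ on a time grid, and report the first grid time at which Λ crosses it. The following minimal sketch is not part of the paper's numerics; the constant-hazard sanity check and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def default_time(lam_path, dt):
    """First time the integrated hazard Lambda(t) crosses an independent
    standard exponential threshold (the construction of Lemma 3.3).
    lam_path holds the hazard rate on a grid of step dt. Returns np.inf
    if no default occurs within the grid horizon."""
    threshold = rng.exponential()
    Lambda = np.cumsum(lam_path) * dt            # Lambda(t_k), left-point rule
    hit = np.searchsorted(Lambda, threshold)     # first k with Lambda >= threshold
    return np.inf if hit == len(Lambda) else (hit + 1) * dt

# Sanity check: for a constant hazard lam, Q(tau > t) = exp(-lam * t)
dt, horizon, lam = 0.01, 5.0, 0.2
path = np.full(int(horizon / dt), lam)
taus = np.array([default_time(path, dt) for _ in range(20_000)])
survival = np.mean(taus > horizon)               # should be close to exp(-1)
```

The same routine applies unchanged to a stochastic hazard: simulate a path of λ first, then feed it to `default_time`.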
Proposition 3.4 (Dellacherie Formulas). Let (Ω, F, Q) be a filtered probability space, τ a doubly stochastic random time with {Ft}t≥0-conditional hazard rate process λ(t), and {rt}t≥0 an Ft-adapted random process. Suppose that, for some T > 0, X is FT-measurable, and {ν(t)}0≤t≤T and {Z(t)}t≥0 are Ft-adapted.[4] If the random variables

\[ |X|\, e^{-\int_t^T r(s)\,ds}, \qquad \int_t^T \nu(s)\, e^{-\int_t^s r(u)\,du}\,ds, \qquad \int_t^T |Z(s)\lambda(s)|\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \]

are all integrable with respect to Q, then

\[ \mathbb{E}\Big[ e^{-\int_t^T r(s)\,ds}\, \mathbb{1}_{\{\tau>T\}}\, X \,\Big|\, \mathcal{F}_t \vee \mathcal{H}_t \Big] = \mathbb{1}_{\{\tau>t\}}\, \mathbb{E}\Big[ e^{-\int_t^T (r(s)+\lambda(s))\,ds}\, X \,\Big|\, \mathcal{F}_t \Big], \]

\[ \mathbb{E}\Big[ \int_t^T \nu(s)\, e^{-\int_t^s r(u)\,du}\, \mathbb{1}_{\{\tau>s\}}\,ds \,\Big|\, \mathcal{F}_t \vee \mathcal{H}_t \Big] = \mathbb{1}_{\{\tau>t\}}\, \mathbb{E}\Big[ \int_t^T \nu(s)\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \,\Big|\, \mathcal{F}_t \Big], \]

\[ \mathbb{E}\Big[ e^{-\int_t^{\tau} r(s)\,ds}\, \mathbb{1}_{\{t<\tau\le T\}}\, Z(\tau) \,\Big|\, \mathcal{F}_t \vee \mathcal{H}_t \Big] = \mathbb{1}_{\{\tau>t\}}\, \mathbb{E}\Big[ \int_t^T Z(s)\lambda(s)\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \,\Big|\, \mathcal{F}_t \Big], \]

where Ht = σ({1{τ>t}}).

Proof. See McNeil et al. (2005), proposition 10.19. □
3.2. Basics of Forward CDS and CDX Contracts. Consider a forward-start CDS with inception date T0, tenor structure T0 < ... < TM, CDS spread c, and a notional of 1 U.S. dollar. Let (Ω, F, Q) be a probability space, {Ft}t≥0 a filtration on it, {r(t)}t≥0 an Ft-adapted random process, and τ : Ω → [0, ∞] a doubly stochastic random time with hazard rate λ(t). Assuming that τ represents the time of the credit event, the payments made by the protection seller (protection leg), discounted at time t ≤ T0, are given by

\[ \Phi(t) = \delta(\tau)\, e^{-\int_t^{\tau} r(s)\,ds}\, \mathbb{1}_{\{T_0 < \tau \le T_M\}}, \]

[3] In credit risk modelling, {Ft}t≥0 is typically generated by some random process Ψ representing some measure of economic activity.
[4] Typically, X is a survival claim, i.e. a promised payment if there is no default, ν is a risky stream of payments that stops when default occurs, and Z is a payment made at default.
where δ(t) is the Ft-adapted process representing the loss given default. Similarly, the premium leg is given by

\[ \Psi(t) = c \sum_{j=1}^{M} e^{-\int_t^{T_j} r(s)\,ds}\, \mathbb{1}_{\{\tau>T_j\}}\, [T_j - T_{j-1}]. \]

Using the Dellacherie formulas, we have

\[ \mathbb{E}^Q[\Phi(t) \,|\, \mathcal{F}_t \vee \mathcal{H}_t] = \mathbb{E}^Q\Big[ \delta(\tau)\, e^{-\int_t^{\tau} r(s)\,ds} \big( \mathbb{1}_{\{t<\tau\le T_M\}} - \mathbb{1}_{\{t<\tau< T_0\}} \big) \,\Big|\, \mathcal{F}_t \vee \mathcal{H}_t \Big] = \mathbb{1}_{\{\tau>t\}}\, \mathbb{E}^Q\Big[ \int_{T_0}^{T_M} \lambda(s)\delta(s)\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \,\Big|\, \mathcal{F}_t \Big]. \]

The present value of the protection buyer's cash flow is then given by

\[ \mathbb{E}^Q[\Phi(t) - \Psi(t) \,|\, \mathcal{F}_t \vee \mathcal{H}_t] = \mathbb{1}_{\{\tau>t\}}\, \mathbb{E}^Q\Big[ \int_{T_0}^{T_M} \lambda(s)\delta(s)\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \,\Big|\, \mathcal{F}_t \Big] - \mathbb{1}_{\{\tau>t\}}\, c \sum_{j=1}^{M} (T_j - T_{j-1})\, \mathbb{E}^Q\Big[ e^{-\int_t^{T_j} (r(u)+\lambda(u))\,du} \,\Big|\, \mathcal{F}_t \Big]. \]

Since the CDS spread c(t, T0, TM) is chosen such that the current value of the contract is zero, we then have

\[ c(t, T_0, T_M) = \frac{ \mathbb{E}^Q\big[ \int_{T_0}^{T_M} \lambda(s)\delta(s)\, e^{-\int_t^s (r(u)+\lambda(u))\,du}\,ds \,\big|\, \mathcal{F}_t \big] }{ \sum_{j=1}^{M} (T_j - T_{j-1})\, \mathbb{E}^Q\big[ e^{-\int_t^{T_j} (r(u)+\lambda(u))\,du} \,\big|\, \mathcal{F}_t \big] }. \]
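For intuition on the spread formula above, consider the degenerate case in which r, λ and δ are constant, so that the conditional expectations collapse to elementary integrals. The sketch below (the function name, quarterly grid and parameter values are illustrative, not from the paper) computes c(t, T0, TM) in that case.

```python
import numpy as np

def par_cds_spread(r, lam, delta, t, T0, TM, freq=4):
    """Par spread c(t, T0, TM) of a forward-start CDS under a constant
    short rate r and constant hazard rate lam, with loss given default delta.
    Protection leg: delta * int_{T0}^{TM} lam * exp(-(r+lam)(s-t)) ds,
    premium leg per unit spread: sum_j (Tj - Tj-1) exp(-(r+lam)(Tj-t))."""
    a = r + lam
    # closed-form protection leg under constant rates
    protection = delta * lam / a * (np.exp(-a * (T0 - t)) - np.exp(-a * (TM - t)))
    # premium dates T1 < ... < TM on a quarterly grid after T0
    grid = np.arange(T0 + 1.0 / freq, TM + 1e-12, 1.0 / freq)
    annuity = np.sum((1.0 / freq) * np.exp(-a * (grid - t)))
    return protection / annuity

c = par_cds_spread(r=0.02, lam=0.01, delta=0.6, t=0.0, T0=0.0, TM=5.0)
# for small lam the par spread is close to delta * lam
```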
We next provide the relevant definitions for forward contracts on a credit index (see Brigo & Morini (2011) for details). Suppose that the premium payments occur at T0 < T1 < ... < TM, where TM is the maturity of the contract and T0 is the inception date. Define the following quantities:

(i) Cumulated losses: \( L(t) = \frac{\delta}{n} \sum_{j=1}^{n} \mathbb{1}_{\{\tau_j < t\}} \), where δ is the loss given default (typically common to each name and nonrandom) and τj is the time of default of entity j;

(ii) Outstanding notional: \( N(t) = 1 - \frac{L(t)}{\delta} \);

(iii) Index annuity:
\[ A(t) = \sum_{j=1}^{M} e^{-\int_t^{T_j} r(u)\,du} \int_{T_{j-1}}^{T_j} N(s)\,ds \approx \sum_{j=1}^{M} e^{-\int_t^{T_j} r(u)\,du}\, N(T_j)\,(T_j - T_{j-1}); \]

(iv) Premium leg: Ψ(t, c) = cA(t);

(v) Protection leg:
\[ \Phi(t) = \int_{T_0}^{T_M} e^{-\int_t^s r(u)\,du}\,dL(s) \approx \sum_{j=1}^{M} e^{-\int_t^{T_j} r(u)\,du}\, [L(T_j) - L(T_{j-1})]; \]

(vi) Front end protection: \( F(t) = e^{-\int_t^{T_0} r(s)\,ds}\, L(T_0) \), t ≤ T0, the discounted value at time t of the losses cumulated by time T0.

The discounted payoff of a credit index swap is then given by

\[ e^{-\int_t^{T_0} r(s)\,ds}\, [\Phi(T_0) - \Psi(T_0) + F(T_0)] = \Phi(t) - \Psi(t) + F(t). \tag{3.1} \]

Thus, any time a default event is triggered for any of the names composing the index, the name that defaulted is removed from the index and a payment of size δ/n is made by the protection seller, provided the default event happens after the inception date of the credit index swap. If the
event happens before the inception date, then the name that defaulted is again removed from the index, and the loss is paid at the inception of the swap.

As mentioned, we avoid the technical complications related to the front end protection, and assume that no defaults occur before inception. The discounted payoff at 0 ≤ t ≤ T0 is then

\[ e^{-\int_t^{T_0} r(s)\,ds}\, [\Phi(T_0) - \Psi(T_0)] = \Phi(t) - \Psi(t). \tag{3.2} \]

To model the default times, we consider a common intensity process {λ(t)}t≥0 for each underlying name (i.e. the pool is "homogeneous") and, for i = 1, ..., n, we define the default time

\[ \tau^i = \inf\{ t > 0 : \Lambda(t) > \varepsilon_i \}, \]

where ε1, ..., εn are independent exponential random variables. Then (McNeil et al. (2005), lemma 9.33) the default times are conditionally independent doubly stochastic random times, i.e. each τ^i is a doubly stochastic random time with respect to Ft and

\[ Q(\tau^1 > t, \ldots, \tau^n > t \,|\, \mathcal{F}_\infty) = \prod_{i=1}^{n} Q(\tau^i > t \,|\, \mathcal{F}_\infty). \]
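The conditional-independence statement can be checked by direct simulation: conditionally on the hazard path, the n defaults are triggered by i.i.d. exponential thresholds applied to the same integrated hazard. A small Monte Carlo sketch with a deterministic (constant) hazard, which is the degenerate case in which conditioning on F∞ is trivial (pool size, hazard and LGD values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

n, lam, t, delta = 125, 0.02, 5.0, 0.6     # pool size, hazard, horizon, LGD
n_paths = 100_000
eps = rng.exponential(size=(n_paths, n))   # thresholds eps_1..eps_n, i.i.d. Exp(1)
defaulted = eps <= lam * t                 # tau_i <= t  iff  eps_i <= Lambda(t)
L = delta * defaulted.sum(axis=1) / n      # cumulated losses L(t)
N = 1.0 - L / delta                        # outstanding notional N(t)
# With a deterministic hazard, E[N(t)] = exp(-lam * t), and the joint
# survival probability factors as in the display above.
```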
In this case, we obtain, for i = 1, ..., n,

\[ \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_{\ell-1}\}} \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big] = \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_{\ell-1}\}} \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}^i_{T_0} \Big], \]

where, for every t ≥ 0, \( \mathcal{H}_t = \vee_{i=1}^{n} \mathcal{H}^i_t \) and \( \mathcal{H}^i_t = \sigma(\{\mathbb{1}_{\{\tau^i > t\}},\, t \ge 0\}) \).
Hence, if 1 ≤ i ≤ n and 1 ≤ ℓ ≤ M, using the Dellacherie formulas of proposition 3.4 and the tower property of conditional expectation, there is a function \( g_\ell : \mathbb{R}^3_+ \to \mathbb{R} \) such that

\[
\begin{aligned}
\mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_{\ell-1}\}} \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big]
&= \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_{\ell-1}} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_{\ell-1}\}}\, P(T_{\ell-1}, T_\ell) \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big] \\
&= \mathbb{1}_{\{\tau^i > T_0\}}\, \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_{\ell-1}} (r(u)+\lambda(u))\,du}\, P(T_{\ell-1}, T_\ell) \,\Big|\, \mathcal{F}_{T_0} \Big] \\
&= \mathbb{1}_{\{\tau^i > T_0\}}\, g_\ell(T_0, r(T_0), \lambda(T_0)),
\end{aligned}
\]

where, for every T ≥ t ≥ 0,

\[ P(t, T) = \mathbb{E}^Q\Big[ e^{-\int_t^T r(u)\,du} \,\Big|\, \mathcal{F}_t \Big]. \]

Similarly, there is a function \( h_\ell : \mathbb{R}^3_+ \to \mathbb{R} \) such that

\[ \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_\ell\}} \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big] = \mathbb{1}_{\{\tau^i > T_0\}}\, \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} (r(u)+\lambda(u))\,du} \,\Big|\, \mathcal{F}_{T_0} \Big] = \mathbb{1}_{\{\tau^i > T_0\}}\, h_\ell(T_0, r(T_0), \lambda(T_0)). \]

Therefore, setting \( g = \sum_{\ell=1}^M g_\ell \) and \( h = \sum_{\ell=1}^M h_\ell \), and using the tower property of conditional expectation, we have

\[
\begin{aligned}
\mathbb{E}^Q[\Phi(T_0) \,|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0}]
&= \sum_{\ell=1}^{M} \frac{\delta}{n} \sum_{i=1}^{n} \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \big( \mathbb{1}_{\{\tau^i > T_{\ell-1}\}} - \mathbb{1}_{\{\tau^i > T_\ell\}} \big) \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big] \\
&= \frac{\delta}{n} \sum_{i=1}^{n} \mathbb{1}_{\{\tau^i > T_0\}}\, \big[ g(T_0, r(T_0), \lambda(T_0)) - h(T_0, r(T_0), \lambda(T_0)) \big].
\end{aligned}
\]
As for the premium leg, similar calculations give

\[ \mathbb{E}^Q[A(T_0) \,|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0}] = \sum_{\ell=1}^{M} (T_\ell - T_{\ell-1})\, \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}^Q\Big[ e^{-\int_{T_0}^{T_\ell} r(u)\,du}\, \mathbb{1}_{\{\tau^i > T_\ell\}} \,\Big|\, \mathcal{F}_{T_0} \vee \mathcal{H}_{T_0} \Big] = \sum_{\ell=1}^{M} (T_\ell - T_{\ell-1})\, \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}_{\{\tau^i > T_0\}}\, h_\ell(T_0, r(T_0), \lambda(T_0)). \]

The payoff π(T0, r(T0), λ(T0)) of a receiver credit index swaption[5] is then

\[ \pi(T_0, r(T_0), \lambda(T_0)) = \Big( \kappa \sum_{\ell=1}^{M} (T_\ell - T_{\ell-1})\, h_\ell(T_0, r(T_0), \lambda(T_0)) - \delta\, \big( g(T_0, r(T_0), \lambda(T_0)) - h(T_0, r(T_0), \lambda(T_0)) \big) \Big)^+. \]

This allows one to define a PIDE for the price of a credit index swaption at time t in terms of the vector (t, r(t), λ(t)).[6]

For the approximated payoff π of the credit index swaption, and given the short rate r(t) at time t, the spread c(t, T0, TM) is a function of λ(t) only:

\[ c(t, \lambda(t), T_0, T_M) := \delta\, \frac{\mathbb{E}^Q[\Phi(t) \,|\, \mathcal{F}_t]}{\mathbb{E}^Q[A(t) \,|\, \mathcal{F}_t]} = \delta\, \frac{g(t, r(t), \lambda(t)) - h(t, r(t), \lambda(t))}{\sum_{\ell=1}^{M} (T_\ell - T_{\ell-1})\, h_\ell(t, r(t), \lambda(t))}. \tag{3.3} \]
Remark 3.5. In the case of semiannual payments, the credit spread is given by

\[ \kappa(t, \lambda(t), T_0, T_M) = \delta\, \frac{\mathbb{E}^Q[\Phi(t) \,|\, \mathcal{F}_t]}{\mathbb{E}^Q[A(t) \,|\, \mathcal{F}_t]} = 2\delta\, \Big( \frac{g(t, r(t), \lambda(t))}{h(t, r(t), \lambda(t))} - 1 \Big). \]

If we assume that there is only one payment, then we obtain the following familiar approximate relationship between credit spread and hazard rate:

\[ \kappa(t, \lambda(t), T_0, T_M) = 2\delta\, \Big( K_1(t, T_0)\, e^{\lambda(t)\, \frac{1 - e^{-\theta_\lambda/2}}{\theta_\lambda}} - 1 \Big) \approx \delta\, K_1(t, T_0)\, \lambda(t) + K_2(t, T_0), \]

where K1(t, T0) and K2(t, T0) are constants that depend on neither λ(t) nor r(t). In particular, the credit spread is, to first order, an affine function of the hazard rate.
4. A Model for Stochastic Rate and Default Intensity

In this section, we propose a specification for the dynamics of the vector process X = (r, λ) for the purpose of pricing credit index derivatives. Inspired by Eberlein et al. (2013), a simple choice for the dynamics of X is

\[ \begin{cases} dr(t) = \theta_r(\mu_r - r(t))\,dt + dg_r(t) \\ d\lambda(t) = \theta_\lambda(\mu_\lambda - \lambda(t))\,dt + dg_\lambda(t) + \rho\, dg_r(t), \end{cases} \tag{4.1} \]

where gr and gλ are two independent gamma processes with scale parameters cr and cλ, and shape parameters γr and γλ, respectively.[7] The parameters θr and θλ are positive and measure the speed of mean reversion toward the long term averages µr and µλ, respectively. The magnitude of the impact

[5] An option contract on a CDX index is of receiver type if the holder has the right, but not the obligation, to sell protection, and it is of payer type if the holder has the right, but not the obligation, to buy protection.
[6] The value of λ(t) is not directly observable, but it can be retrieved by equating to zero the value of the forward contract at the strike for which put-call parity holds.
[7] Equivalently, for instance, gr(1) is a gamma random variable with mean γr/cr and variance γr/c²r. Furthermore, \( \log \mathbb{E}^Q\big[ e^{iu g_r(t)} \big] = \gamma_r t \log\big( \frac{c_r}{c_r - iu} \big) \), and its Levy density ϕr is given by \( \phi_r(x) = \gamma_r \frac{e^{-c_r x}}{x} \).
of jumps in the short rate on the default intensity is modeled by the parameter ρ ∈ R. To better fit the option price surface,[8] and recognizing that economic activity in the highly liquid Treasury market often evolves at a different pace than in the debt markets, we further subordinate gλ to a gamma time change. Specifically, given a third gamma process gτ with parameters cτ and γτ, we set g̃λ(t) = gλ(gτ(t)), and consider

\[ \begin{cases} dr(t) = \theta_r[\mu_r - r(t)]\,dt + dg_r(t) \\ d\lambda(t) = \theta_\lambda[\mu_\lambda - \lambda(t)]\,dt + d\tilde{g}_\lambda(t) + \rho\, dg_r(t). \end{cases} \tag{4.2} \]

Standard calculations (see e.g. Barndorff-Nielsen (1998)) give the following solution of the system (4.2) for the initial condition (r(0), λ(0)):

\[ \begin{cases} r(t) = r(0)\,e^{-\theta_r t} + \mu_r\big(1 - e^{-\theta_r t}\big) + \int_0^t e^{-\theta_r (t-u)}\,dg_r(u) \\ \lambda(t) = \lambda(0)\,e^{-\theta_\lambda t} + \mu_\lambda\big(1 - e^{-\theta_\lambda t}\big) + \int_0^t e^{-\theta_\lambda (t-u)}\,d\big(\rho g_r(u) + \tilde{g}_\lambda(u)\big). \end{cases} \tag{4.3} \]
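System (4.2) can be simulated directly, since gamma processes have independent gamma-distributed increments and, conditionally on an increment of gτ, the corresponding increment of g̃λ is again gamma-distributed. A minimal Euler sketch follows; all parameter values are illustrative, not calibrated, and the scheme is not the paper's finite difference method.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rates(T=1.0, n=1000, r0=0.02, lam0=0.01,
                   theta_r=1.0, mu_r=0.02, c_r=200.0, g_r=2.0,
                   theta_l=1.0, mu_l=0.01, c_l=300.0, g_l=1.5,
                   c_tau=1.0, g_tau=1.0, rho=0.1):
    """Euler sketch of system (4.2): mean-reverting r and lambda driven by
    gamma increments, with the lambda-driver run on a gamma time change."""
    dt = T / n
    r = np.empty(n + 1)
    lam = np.empty(n + 1)
    r[0], lam[0] = r0, lam0
    for k in range(n):
        dgr = rng.gamma(g_r * dt, 1.0 / c_r)        # increment of g_r
        dtau = rng.gamma(g_tau * dt, 1.0 / c_tau)   # business-time increment g_tau
        dgl = rng.gamma(g_l * dtau, 1.0 / c_l)      # g_lambda over the time change
        r[k + 1] = r[k] + theta_r * (mu_r - r[k]) * dt + dgr
        lam[k + 1] = lam[k] + theta_l * (mu_l - lam[k]) * dt + dgl + rho * dgr
    return r, lam

r_path, lam_path = simulate_rates()
```

Since the jumps are nonnegative and the drift is linear, both simulated paths stay strictly positive for these step sizes.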
639 |
+
In general, given a gamma process g with parameters c and γ and conditioning on gτ(t), the
|
640 |
+
characteristic function of the process ˜g = g ◦ gτ is
|
641 |
+
E
|
642 |
+
�
|
643 |
+
eiθ˜g(t)�
|
644 |
+
= E
|
645 |
+
�
|
646 |
+
ψg(θ)gτ(t)�
|
647 |
+
= E
|
648 |
+
�
|
649 |
+
egτ(t) log(ψg(θ))�
|
650 |
+
= exp
|
651 |
+
�
|
652 |
+
γτt
|
653 |
+
� ∞
|
654 |
+
0
|
655 |
+
�
|
656 |
+
ex log(ψg(θ)) − 1
|
657 |
+
�
|
658 |
+
ϕτ(x)dx
|
659 |
+
�
|
660 |
+
= exp
|
661 |
+
�
|
662 |
+
γτt
|
663 |
+
� ∞
|
664 |
+
0
|
665 |
+
(ψg(θ)x − 1) ϕτ(x)dx
|
666 |
+
�
|
667 |
+
= exp
|
668 |
+
�
|
669 |
+
γτt
|
670 |
+
� ∞
|
671 |
+
0
|
672 |
+
�� ∞
|
673 |
+
0
|
674 |
+
eiθypx(y)dy − 1
|
675 |
+
�
|
676 |
+
ϕτ(x)dx
|
677 |
+
�
|
678 |
+
= exp
|
679 |
+
�
|
680 |
+
γτt
|
681 |
+
� ∞
|
682 |
+
0
|
683 |
+
� ∞
|
684 |
+
0
|
685 |
+
�
|
686 |
+
eiθy − 1
|
687 |
+
�
|
688 |
+
px(y)ϕτ(x)dydx
|
689 |
+
�
|
690 |
+
= exp
|
691 |
+
�
|
692 |
+
γτt
|
693 |
+
� ∞
|
694 |
+
0
|
695 |
+
�
|
696 |
+
eiθy − 1
|
697 |
+
� �� ∞
|
698 |
+
0
|
699 |
+
px(y)ϕτ(x)dx
|
700 |
+
�
|
701 |
+
dy
|
702 |
+
�
|
703 |
+
.
|
704 |
+
where ψg(θ) := E
|
705 |
+
�
|
706 |
+
eiθg(1)�
|
707 |
+
and where px is the density of a gamma distribution with parameters c
|
708 |
+
and ��x. Therefore, the Levy density of the process ˜g is the weighted gamma Levy density:
|
709 |
+
ϕ˜g(y) =
|
710 |
+
� ∞
|
711 |
+
0
|
712 |
+
px(y)ϕτ(x)dx = γτ
|
713 |
+
e−cτy
|
714 |
+
y
|
715 |
+
� ∞
|
716 |
+
0
|
717 |
+
(cy)γx
|
718 |
+
Γ(γx)
|
719 |
+
e−cx
|
720 |
+
x dx, y > 0,
|
721 |
+
(4.4)
|
722 |
+
where Γ is the gamma function. Note that the use of Fubini-Tonelli in the above derivation follows
|
723 |
+
from the fact that the Levy density ϕ˜g is well defined, which follows from Stirling approximation
|
724 |
+
and an integration by parts. Finally, from 4.4, ˜g has infinite arrival rate, and finite variation.
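The mixture integral in (4.4) has no closed form, but it is straightforward to evaluate by quadrature if the integrand is assembled in log-space (to avoid overflow in $\Gamma(\gamma x)$ for large $x$). The sketch below is ours, not the paper's; the grid bounds, grid size, and the evaluation point are arbitrary choices, and the parameter values are taken from the calibration reported later.

```python
import numpy as np
from math import lgamma, log

def levy_density_subordinated(y, c, gam, c_tau, gam_tau, x_max=60.0, n=40000):
    """Levy density of g(g_tau) via the mixture integral in (4.4), computed in
    log-space: integrand = exp(gam*x*log(c*y) - lgamma(gam*x) - c_tau*x) / x."""
    x = np.linspace(1e-7, x_max, n)
    log_terms = gam * x * log(c * y) - np.array([lgamma(gam * xi) for xi in x]) - c_tau * x
    integral = np.trapz(np.exp(log_terms) / x, x)
    return gam_tau * np.exp(-c * y) / y * integral

phi = levy_density_subordinated(0.5, c=4.3178, gam=6.0617, c_tau=3.5298, gam_tau=190.0)
print(phi)
```

As $x\to 0$ the integrand tends to the finite limit $\gamma$ (since $\Gamma(\gamma x)\sim 1/(\gamma x)$), so the left endpoint needs no special treatment beyond starting slightly above zero.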
4.1. Characteristic Exponent, Zero Coupon Bond Prices and Valuation PIDE. In this section we compute the joint characteristic exponent of $X$ and its integrated process. To simplify notation, we henceforth drop the tilde and thus assume that the process $g_\lambda$ is subordinated to $g_\tau$.

$^8$Although not reported here, we experimented with real market data and found that model 4.1 is not rich enough to fit, in particular, the prices of out-of-the-money options. Note also that other modeling choices for $\lambda$ that are not investigated here are possible; e.g., one could consider an integrated truncated bilateral gamma process.

A PURE JUMP MODEL FOR THE VALUATION OF OPTIONS ON A CREDIT INDEX
Note that the integrated Ornstein--Uhlenbeck ("OU") process $Y_r(t)$ associated to $r(t)$ is given by
\begin{align*}
Y_r(t)&=\int_0^t r(s)\,ds=\int_0^t\Big(r(0)e^{-\theta_r s}+\mu_r\big(1-e^{-\theta_r s}\big)\Big)ds+\int_0^t\int_0^s e^{-\theta_r(s-u)}\,dg_r(u)\,ds\\
&=r(0)\,\frac{1-e^{-\theta_r t}}{\theta_r}+\mu_r\Big(t-\frac{1-e^{-\theta_r t}}{\theta_r}\Big)+\int_0^t\int_u^t e^{-\theta_r(s-u)}\,ds\,dg_r(u)\\
&=\mu_r t+(r(0)-\mu_r)\Big(\frac{1-e^{-\theta_r t}}{\theta_r}\Big)+\int_0^t\frac{1-e^{-\theta_r(t-u)}}{\theta_r}\,dg_r(u).
\end{align*}
Similarly, the integrated OU process $Y_\lambda(t)$ associated to $\lambda(t)$ is given by
\[
Y_\lambda(t)=\mu_\lambda t+(\lambda(0)-\mu_\lambda)\Big(\frac{1-e^{-\theta_\lambda t}}{\theta_\lambda}\Big)+\int_0^t\frac{1-e^{-\theta_\lambda(t-u)}}{\theta_\lambda}\,d(\rho g_r+g_\lambda)(u).
\]
Therefore, for every $\alpha_1,\alpha_2,\alpha_3,\alpha_4\in\mathbb R$, we have
\begin{align*}
&\alpha_1 Y_r(t)+\alpha_2 Y_\lambda(t)+\alpha_3 r(t)+\alpha_4\lambda(t)\\
&\quad=\alpha_1\Big(\mu_r t+(r(0)-\mu_r)\Big(\frac{1-e^{-\theta_r t}}{\theta_r}\Big)\Big)+\alpha_2\Big(\mu_\lambda t+(\lambda(0)-\mu_\lambda)\Big(\frac{1-e^{-\theta_\lambda t}}{\theta_\lambda}\Big)\Big)\\
&\qquad+\alpha_3\Big(r(0)e^{-\theta_r t}+\mu_r(1-e^{-\theta_r t})\Big)+\alpha_4\Big(\lambda(0)e^{-\theta_\lambda t}+\mu_\lambda(1-e^{-\theta_\lambda t})\Big)\\
&\qquad+\int_0^t\Big(\alpha_1\frac{1-e^{-\theta_r(t-u)}}{\theta_r}+\alpha_2\rho\,\frac{1-e^{-\theta_\lambda(t-u)}}{\theta_\lambda}+\alpha_3 e^{-\theta_r(t-u)}+\alpha_4\rho\,e^{-\theta_\lambda(t-u)}\Big)dg_r(u)\\
&\qquad+\int_0^t\Big(\alpha_2\frac{1-e^{-\theta_\lambda(t-u)}}{\theta_\lambda}+\alpha_4 e^{-\theta_\lambda(t-u)}\Big)dg_\lambda(u).
\end{align*}
Next, set
\begin{align*}
\xi_r(t,r,\alpha_1,\alpha_3)&=\alpha_1\Big(\mu_r t+(r-\mu_r)\Big(\frac{1-e^{-\theta_r t}}{\theta_r}\Big)\Big)+\alpha_3\Big(re^{-\theta_r t}+\mu_r(1-e^{-\theta_r t})\Big),\\
\xi_\lambda(t,\lambda,\alpha_2,\alpha_4)&=\alpha_2\Big(\mu_\lambda t+(\lambda-\mu_\lambda)\Big(\frac{1-e^{-\theta_\lambda t}}{\theta_\lambda}\Big)\Big)+\alpha_4\Big(\lambda e^{-\theta_\lambda t}+\mu_\lambda(1-e^{-\theta_\lambda t})\Big),\\
\psi_r(t,u,\alpha_1,\alpha_2,\alpha_3,\alpha_4)&=\alpha_1\frac{1-e^{-\theta_r(t-u)}}{\theta_r}+\alpha_2\rho\,\frac{1-e^{-\theta_\lambda(t-u)}}{\theta_\lambda}+\alpha_3 e^{-\theta_r(t-u)}+\alpha_4\rho\,e^{-\theta_\lambda(t-u)},\\
\psi_\lambda(t,u,\alpha_2,\alpha_4)&=\alpha_2\frac{1-e^{-\theta_\lambda(t-u)}}{\theta_\lambda}+\alpha_4 e^{-\theta_\lambda(t-u)}.
\end{align*}
Since $|\psi_r(t,u,\alpha_1,\alpha_2,\alpha_3,\alpha_4)|\le\frac{|\alpha_1|}{\theta_r}+\frac{\rho|\alpha_2|}{\theta_\lambda}+|\alpha_3|+\rho|\alpha_4|$ is bounded, we have
\[
E^{\mathbb Q}_0\Big[\exp\Big(\int_0^t i\psi_r(t,u,\alpha_1,\alpha_2,\alpha_3,\alpha_4)\,dg_r(u)\Big)\Big]
=\lim_{n\to\infty}E^{\mathbb Q}_0\Big[\exp\Big(\sum_{k=0}^{n-1}i\psi_r(t,kt/n,\alpha_1,\alpha_2,\alpha_3,\alpha_4)\big(g_r((k+1)t/n)-g_r(kt/n)\big)\Big)\Big],
\]
where $\mathcal F_t$ is the filtration generated by $X$ and $E^{\mathbb Q}_0[\cdot]$ denotes expectation conditional on $\mathcal F_0$. By the stationarity and independence of the increments of the process $g_r$, we have
\[
\lim_{n\to\infty}E^{\mathbb Q}_0\Big[\exp\Big(\sum_{k=0}^{n-1}i\psi_r(t,kt/n,\alpha_1,\dots,\alpha_4)\big(g_r((k+1)t/n)-g_r(kt/n)\big)\Big)\Big]
=\lim_{n\to\infty}\prod_{k=0}^{n-1}E^{\mathbb Q}_0\big[\exp\big(i\psi_r(t,kt/n,\alpha_1,\dots,\alpha_4)\,g_r(t/n)\big)\big].
\]
By the properties of the L\'evy density $\varphi_r$ of the process $g_r$, we have, for every $\psi\in\mathbb C$,
\[
\frac{1}{t}\log\Big(E^{\mathbb Q}_0\big[e^{i\psi g(t)}\big]\Big)=\int_{\mathbb R}\big(e^{i\psi y}-1\big)\varphi_r(y)\,dy=-\gamma_r\log\Big(1-\frac{i\psi}{c_r}\Big),
\]
whence
\begin{align*}
\lim_{n\to\infty}\prod_{k=0}^{n-1}E^{\mathbb Q}_0\big[\exp\big(i\psi_r(t,kt/n,\alpha_1,\dots,\alpha_4)\,g_r(t/n)\big)\big]
&=\lim_{n\to\infty}\prod_{k=0}^{n-1}\exp\Big(\frac{t}{n}\int_0^\infty\big(e^{i\psi_r(t,kt/n,\alpha_1,\dots,\alpha_4)y}-1\big)\varphi_r(y)\,dy\Big)\\
&=\lim_{n\to\infty}\exp\Big(\sum_{k=0}^{n-1}\frac{t}{n}\int_0^\infty\big(e^{i\psi_r(t,kt/n,\alpha_1,\dots,\alpha_4)y}-1\big)\varphi_r(y)\,dy\Big)\\
&=\exp\Big(\int_0^t\int_0^\infty\big(e^{i\psi_r(t,u,\alpha_1,\dots,\alpha_4)y}-1\big)\varphi_r(y)\,dy\,du\Big)\\
&=\exp\Big(-\gamma_r\int_0^t\log\Big(1-\frac{i\psi_r(t,u,\alpha_1,\alpha_2,\alpha_3,\alpha_4)}{c_r}\Big)du\Big).
\end{align*}
Similar calculations can be done with respect to $\psi_\lambda(t,u,\alpha_2,\alpha_4)$, yielding
\begin{align*}
E^{\mathbb Q}_0\Big[\exp\Big(\int_0^t i\psi_\lambda(t,u,\alpha_2,\alpha_4)\,dg_\lambda(u)\Big)\Big]
&=\exp\Big(\int_0^t\int_0^\infty\big(e^{i\psi_\lambda(t,u,\alpha_2,\alpha_4)y}-1\big)\varphi_\lambda(y)\,dy\,du\Big)\\
&=\exp\Big(-\gamma_\tau\int_0^t\log\Big(1+\frac{\gamma_\lambda}{c_\tau}\log\Big(1-\frac{i}{c_\lambda}\psi_\lambda(t,u,\alpha_2,\alpha_4)\Big)\Big)du\Big).
\end{align*}
Therefore, the characteristic exponent of the vector process $(Y_r(t),Y_\lambda(t),r(t),\lambda(t))$ is given by
\begin{align}
&\log\Big(E^{\mathbb Q}\big[e^{i\alpha_1 Y_r(T)+i\alpha_2 Y_\lambda(T)+i\alpha_3 r(T)+i\alpha_4\lambda(T)}\,\big|\,\mathcal F_t\big]\Big)\notag\\
&\quad=\int_t^T\int_0^\infty\big(e^{i\psi_r(T,u,\alpha_1,\alpha_2,\alpha_3,\alpha_4)y}-1\big)\varphi_r(y)\,dy\,du
+\int_t^T\int_0^\infty\big(e^{i\psi_\lambda(T,u,\alpha_2,\alpha_4)y}-1\big)\varphi_\lambda(y)\,dy\,du\notag\\
&\qquad+i\,\xi_r(T-t,r(t),\alpha_1,\alpha_3)+i\,\xi_\lambda(T-t,\lambda(t),\alpha_2,\alpha_4).\tag{4.5}
\end{align}
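The $r$-marginal of (4.5) can be sanity-checked numerically: $(1/i)\,\partial_{\alpha_3}$ of the exponent at $\alpha_3=0$ must reproduce the mean $r(0)e^{-\theta_r t}+\frac{\gamma_r}{c_r}\frac{1-e^{-\theta_r t}}{\theta_r}$ derived in the summary-statistics section below. The following sketch is ours; the quadrature grid and finite-difference step are arbitrary choices, and parameters are from the later calibration.

```python
import numpy as np

theta_r, c_r, gam_r, r0, t = 0.55, 400.0, 3.9475, 0.0146, 1.0

def exponent_r(a3):
    """r-marginal of the characteristic exponent (4.5): with alpha1=alpha2=alpha4=0,
    psi_r(t,u,0,0,a3,0) = a3*exp(-theta_r*(t-u)), and the y-integral against the
    gamma Levy density phi_r gives -gam_r*log(1 - i*psi/c_r)."""
    u = np.linspace(0.0, t, 4001)
    psi = a3 * np.exp(-theta_r * (t - u))
    integrand = -gam_r * np.log(1 - 1j * psi / c_r)
    return 1j * a3 * r0 * np.exp(-theta_r * t) + np.trapz(integrand, u)

h = 1e-4  # central complex-argument finite difference in alpha_3
mean_numeric = ((exponent_r(h) - exponent_r(-h)) / (2 * h) / 1j).real
mean_closed = r0 * np.exp(-theta_r * t) + gam_r / c_r * (1 - np.exp(-theta_r * t)) / theta_r
print(mean_numeric, mean_closed)
```

The central difference has $O(h^2)$ bias and the integrand is smooth, so the two values agree to high precision.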
This implies immediately that the risk-neutral price $P(t,T)$ at time $t$ of a zero coupon bond with maturity $T$ and no default risk is given by
\[
P(t,T)=\exp\Big(-\mu_r(T-t)-(r(t)-\mu_r)\Big(\frac{1-e^{-\theta_r(T-t)}}{\theta_r}\Big)+\int_t^T\int_0^\infty\Big(e^{-\frac{1-e^{-\theta_r(T-u)}}{\theta_r}\,y}-1\Big)\varphi_r(y)\,dy\,du\Big).
\]
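Using the gamma L\'evy density $\varphi_r(y)=\gamma_r e^{-c_r y}/y$, the inner $y$-integral above is available in closed form, $\int_0^\infty(e^{-ay}-1)\varphi_r(y)\,dy=-\gamma_r\log(1+a/c_r)$ (this is the same identity used for the characteristic exponent, with $\psi=ia$), so the bond price reduces to a single quadrature in $u$. A minimal sketch, with parameters from the later calibration and $\mu_r=0$ assumed:

```python
import numpy as np

theta_r, c_r, gam_r, mu_r, r0 = 0.55, 400.0, 3.9475, 0.0, 0.0146

def zcb_price(t, T):
    """P(t,T): the y-integral against phi_r(y) = gam_r*exp(-c_r*y)/y equals
    -gam_r*log(1 + a(u)/c_r) with a(u) = (1 - exp(-theta_r*(T-u)))/theta_r."""
    u = np.linspace(t, T, 2001)
    a = (1.0 - np.exp(-theta_r * (T - u))) / theta_r
    jump_part = -gam_r * np.trapz(np.log(1.0 + a / c_r), u)
    drift_part = -mu_r * (T - t) - (r0 - mu_r) * (1 - np.exp(-theta_r * (T - t))) / theta_r
    return np.exp(drift_part + jump_part)

print(zcb_price(0.0, 1.0), zcb_price(0.0, 2.0))
```

Here `r0` plays the role of $r(t)$; for these parameters the discount curve is close to, but below, one, and decreasing in maturity as expected.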
4.1.1. Density, Summary Statistics and Stationary Distribution. To gain further insight into the dynamics of the vector $(r(t),\lambda(t))$, we now derive its density $f_{r,\lambda}$ conditional on $(r(0),\lambda(0))$. Denoting by $\phi$ the (joint) Fourier transform, and assuming $\mu_r=\mu_\lambda=0$, we have, for every $\alpha=(\alpha_1,\alpha_2)\in\mathbb R^2$,
\begin{align*}
\log(\phi(\alpha_1,\alpha_2))&=\log\Big(\int_{\mathbb R^2}e^{-2\pi i(\alpha_1 r+\alpha_2\lambda)}f_{r,\lambda}(r,\lambda)\,dr\,d\lambda\Big)=\log\Big(E^{\mathbb Q}_0\big[e^{-2\pi i(\alpha_1 r(t)+\alpha_2\lambda(t))}\big]\Big)\\
&=-2\pi i\alpha_1 r(0)e^{-\theta_r t}-2\pi i\alpha_2\lambda(0)e^{-\theta_\lambda t}
-\gamma_r\int_0^t\log\Big(1+\frac{2\pi i}{c_r}\big(\alpha_1 e^{-\theta_r(t-u)}+\rho\alpha_2 e^{-\theta_\lambda(t-u)}\big)\Big)du\\
&\quad-\gamma_\tau\int_0^t\log\Big(1+\frac{\gamma_\lambda}{c_\tau}\log\Big(1+\frac{2\pi i}{c_\lambda}\alpha_2 e^{-\theta_\lambda(t-u)}\Big)\Big)du.
\end{align*}
By Fourier inversion and a change of variable, the joint density of $(r(t),\lambda(t))$ is then given by
\begin{equation}
f_{r,\lambda}(r,\lambda)=\frac{1}{4\pi^2}\int_{\mathbb R^2}e^{i(\alpha_1 r+\alpha_2\lambda)}\,\phi\Big(\frac{\alpha_1}{2\pi},\frac{\alpha_2}{2\pi}\Big)\,d\alpha_1\,d\alpha_2.\tag{4.6}
\end{equation}
As shown in Hurd \& Zhou (2010), this double integral can be computed using a two-dimensional fast Fourier transform. Specifically, we set $N=2^{13}$, $B=10^6$, $\eta=\frac{2B}{N}$, $\lambda^*=\frac{2\pi}{N\eta}=\frac{\pi}{B}$ (written $\lambda^*$ to avoid a clash with the intensity $\lambda$), $b=\frac{N\lambda^*}{2}=\frac{\pi}{\eta}$, and approximate (4.6) by a double sum over the grid in the frequency domain
\[
F=\big\{\alpha_k=(\alpha_{k_1},\alpha_{k_2}):k=(k_1,k_2)\in\{0,1,\dots,N-1\}^2\big\},\qquad\alpha_{k_i}=-B+k_i\eta,\ i=1,2,
\]
with corresponding grid in the space domain given by
\[
S=\big\{x_\ell=(x_{\ell_1},x_{\ell_2}):\ell=(\ell_1,\ell_2)\in\{0,1,\dots,N-1\}^2\big\},\qquad x_{\ell_i}=-b+\ell_i\lambda^*,\ i=1,2.
\]
Thus, we have the approximation
\[
f_{r,\lambda}(x_{\ell_1},x_{\ell_2})\approx\frac{\eta^2}{4\pi^2}\sum_{k_1,k_2=0}^{N-1}e^{i\alpha_k x_\ell'}\,\phi\Big(\frac{\alpha_{k_1}}{2\pi},\frac{\alpha_{k_2}}{2\pi}\Big)
=(-1)^{\ell_1+\ell_2}\Big(\frac{\eta N}{2\pi}\Big)^2\frac{1}{N^2}\sum_{k_1,k_2=0}^{N-1}e^{2\pi i k\ell'/N}(-1)^{k_1+k_2}\phi\Big(\frac{\alpha_{k_1}}{2\pi},\frac{\alpha_{k_2}}{2\pi}\Big),
\]
where the last double sum can be computed, for instance, in Matlab with the command ifft2. Figures 1 and 2 show the bivariate density and the marginals of the vector $(r(t),\lambda(t))$ for $t=1$ year, and for $r(0)=0.0146$, $\theta_r=0.5500$, $c_r=400.0005$, $\gamma_r=3.9475$, $\rho=0.1548$, $\lambda(0)=0$, $\theta_\lambda=3.3533$, $c_\lambda=4.3178$, $\gamma_\lambda=6.0617$, $c_\tau=3.5298$, $\gamma_\tau=190.0001$.
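The same inversion can be reproduced with NumPy's ifft2. The sketch below is ours: the grid is scaled down ($N=256$, $B=20$), and the test characteristic function, a standard bivariate normal, is chosen only because its density is known; for the paper's $e^{-2\pi i}$ convention one would pass $\Phi(\alpha)=\phi(\alpha/2\pi)$ (the sign convention is immaterial here since the test $\Phi$ is symmetric).

```python
import numpy as np

def density_via_ifft2(Phi, N=256, B=20.0):
    """Invert a 2-D characteristic function on the Hurd-Zhou grid:
    alpha_k = -B + k*eta (frequency), x_l = -b + l*lam (space)."""
    eta = 2.0 * B / N
    lam = 2.0 * np.pi / (N * eta)
    b = N * lam / 2.0
    k = np.arange(N)
    alpha = -B + k * eta
    signs = (-1.0) ** (k[:, None] + k[None, :])        # the (-1)^(k1+k2) factor
    F = signs * Phi(alpha[:, None], alpha[None, :])
    # np.fft.ifft2 supplies the (1/N^2) * sum_k exp(2*pi*i*k.l/N) part of the sum;
    # the outer (-1)^(l1+l2) and (eta*N/2/pi)^2 factors complete the approximation.
    f = np.real(signs * np.fft.ifft2(F)) * (eta * N / (2.0 * np.pi)) ** 2
    x = -b + k * lam
    return x, f

# test case: standard bivariate normal, Phi(a1,a2) = exp(-(a1^2+a2^2)/2)
x, f = density_via_ifft2(lambda a1, a2: np.exp(-0.5 * (a1**2 + a2**2)))
print(f[len(x) // 2, len(x) // 2])   # peak, should be close to 1/(2*pi)
```

Note that $N$ divisible by 4 makes the constant phase $e^{iBb}=e^{iN\pi/2}$ equal to one, which is what the alternating-sign trick relies on.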
The characteristic function allows one to compute the basic summary statistics for $r(t)$ and $\lambda(t)$:
\begin{align*}
E^{\mathbb Q}[r(t)]&=r(0)e^{-\theta_r t}+\frac{1}{i}\frac{\partial}{\partial\alpha}\Big(-\gamma_r\int_0^t\log\Big(1-\frac{i\alpha e^{-\theta_r(t-u)}}{c_r}\Big)du\Big)\Big|_{\alpha=0}
=r(0)e^{-\theta_r t}+\frac{\gamma_r}{c_r}\,\frac{1-e^{-\theta_r t}}{\theta_r},\\
V^{\mathbb Q}[r(t)]&=-\frac{\partial^2}{\partial\alpha^2}\Big(-\gamma_r\int_0^t\log\Big(1-\frac{i\alpha e^{-\theta_r(t-u)}}{c_r}\Big)du\Big)\Big|_{\alpha=0}
=\frac{\gamma_r}{c_r^2}\,\frac{1-e^{-2\theta_r t}}{2\theta_r},\\
E^{\mathbb Q}_0[\lambda(t)]&=\lambda(0)e^{-\theta_\lambda t}+\Big(\frac{\rho\gamma_r}{c_r}+\frac{\gamma_\tau}{c_\tau}\frac{\gamma_\lambda}{c_\lambda}\Big)\frac{1-e^{-\theta_\lambda t}}{\theta_\lambda},\\
V^{\mathbb Q}_0[\lambda(t)]&=\Big(\frac{\rho^2\gamma_r}{c_r^2}+\frac{\gamma_\tau\gamma_\lambda}{c_\tau^2 c_\lambda^2}(\gamma_\lambda+c_\tau)\Big)\frac{1-e^{-2\theta_\lambda t}}{2\theta_\lambda}.
\end{align*}
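The first two moments of $r(t)$ can be cross-checked against a direct simulation of the stochastic integral in (4.3); the discretization grid, sample size, and tolerances below are our choices.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_r, c_r, gam_r, r0, t = 0.55, 400.0, 3.9475, 0.0146, 1.0
n_steps, n_paths = 250, 20000
du = t / n_steps
u = (np.arange(n_steps) + 0.5) * du          # midpoints of the time grid
w = np.exp(-theta_r * (t - u))               # kernel exp(-theta_r*(t-u)) of (4.3)
dG = rng.gamma(gam_r * du, 1.0 / c_r, size=(n_paths, n_steps))  # gamma increments
r_t = r0 * np.exp(-theta_r * t) + dG @ w     # samples of r(t)

mean_closed = r0 * np.exp(-theta_r * t) + gam_r / c_r * (1 - np.exp(-theta_r * t)) / theta_r
var_closed = gam_r / c_r**2 * (1 - np.exp(-2 * theta_r * t)) / (2 * theta_r)
print(r_t.mean(), mean_closed, r_t.var(), var_closed)
```

With 20{,}000 paths the Monte Carlo mean agrees with the closed form to well within one percent; the variance estimate is noisier but of the right magnitude.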
Figure 1. (a) Bivariate density of the random vector $(r(t),\lambda(t))$ for $t=1$ year. (b) Bivariate density contour of the random vector $(r(t),\lambda(t))$ for $t=1$ year.

Figure 2. (a) Marginal densities of the short rate for $\theta_r\in\{0.16,1,5\}$ and $t=1$ year. (b) Marginal densities of the default intensity for $\theta_\lambda\in\{0.5,2.06,4\}$ and $t=1$ year.

Figure 3. (a) Stationary/limiting bivariate density of the random vector $(r(t),\lambda(t))$. (b) Stationary/limiting bivariate density contour of the random vector $(r(t),\lambda(t))$.
These calculations show that higher values of the parameters $\theta_r$ and $\theta_\lambda$ imply a smaller expected short rate and default intensity, whereas the smaller $\theta_r$ and $\theta_\lambda$ are, the higher the variance and the expected value of $r(t)$ and $\lambda(t)$, respectively (see also Figure 2).
Finally, we observe that the change of variable $v=-(t-u)$ gives
\begin{align*}
r(t)&=r(0)e^{-\theta_r t}+\int_0^t e^{-\theta_r(t-u)}\,dg_r(u)=r(0)e^{-\theta_r t}+\int_{-t}^0 e^{\theta_r v}\,dg_r(v)\ \longrightarrow\ \int_{-\infty}^0 e^{\theta_r v}\,dg_r(v),\\
\lambda(t)&=\lambda(0)e^{-\theta_\lambda t}+\int_0^t e^{-\theta_\lambda(t-u)}\big(dg_\lambda(u)+\rho\,dg_r(u)\big)\\
&=\lambda(0)e^{-\theta_\lambda t}+\int_{-t}^0 e^{\theta_\lambda v}\big(dg_\lambda(v)+\rho\,dg_r(v)\big)\ \longrightarrow\ \int_{-\infty}^0 e^{\theta_\lambda v}\big(dg_\lambda(v)+\rho\,dg_r(v)\big),
\end{align*}
and also that if the initial condition satisfies
\[
(r(0),\lambda(0))=\Big(\int_{-\infty}^0 e^{\theta_r v}\,dg_r(v),\ \int_{-\infty}^0 e^{\theta_\lambda v}\big(dg_\lambda(v)+\rho\,dg_r(v)\big)\Big),
\]
then
\[
r(t)=\int_{-\infty}^t e^{-\theta_r(t-u)}\,dg_r(u)\overset{d}{=}\int_{-\infty}^0 e^{\theta_r v}\,dg_r(v),\qquad
\lambda(t)=\int_{-\infty}^t e^{-\theta_\lambda(t-u)}\big(dg_\lambda(u)+\rho\,dg_r(u)\big)\overset{d}{=}\int_{-\infty}^0 e^{\theta_\lambda v}\big(dg_\lambda(v)+\rho\,dg_r(v)\big).
\]
Thus, the limiting/stationary distribution of the process $(r(t),\lambda(t))$ is that of the random vector
\[
\Big(\int_{-\infty}^0 e^{\theta_r v}\,dg_r(v),\ \int_{-\infty}^0 e^{\theta_\lambda v}\big(dg_\lambda(v)+\rho\,dg_r(v)\big)\Big).
\]
Similar calculations as above give the Fourier transform of the stationary bivariate density:
\begin{equation}
\phi(\alpha_1,\alpha_2)=\exp\Big(-\gamma_r\int_{-\infty}^0\log\Big(1+\frac{2\pi i}{c_r}\big(\alpha_1 e^{\theta_r v}+\rho\alpha_2 e^{\theta_\lambda v}\big)\Big)dv
-\gamma_\tau\int_{-\infty}^0\log\Big(1+\frac{\gamma_\lambda}{c_\tau}\log\Big(1+\frac{2\pi i}{c_\lambda}\alpha_2 e^{\theta_\lambda v}\Big)\Big)dv\Big).\tag{4.7}
\end{equation}
Figure 3 shows the limiting density of the bivariate process $(r(t),\lambda(t))$, obtained by Fourier inversion and approximating the improper integrals in (4.7) with their proper version on $[-100,0]$.
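The stationary $r$-marginal can also be sampled directly from $\int_{-\infty}^0 e^{\theta_r v}\,dg_r(v)$ by truncating the integral; its mean is $\gamma_r/(c_r\theta_r)$, since $E[dg_r(v)]=(\gamma_r/c_r)\,dv$ and $\int_{-\infty}^0 e^{\theta_r v}\,dv=1/\theta_r$. The truncation level, grid, and sample size below are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_r, c_r, gam_r = 0.55, 400.0, 3.9475
V, m, n_paths = 30.0, 1200, 5000
dv = V / m
v = -V + (np.arange(m) + 0.5) * dv            # midpoint grid on [-V, 0]
w = np.exp(theta_r * v)                       # kernel exp(theta_r*v)
dG = rng.gamma(gam_r * dv, 1.0 / c_r, size=(n_paths, m))  # gamma increments of g_r
r_stat = dG @ w                               # samples of the stationary short rate

mean_closed = gam_r / (c_r * theta_r)
print(r_stat.mean(), mean_closed)
```

The truncation error is of order $e^{-\theta_r V}$, which is negligible at $V=30$ for these parameters.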
4.1.2. Linear PIDE for CDX Swaption Prices. By the first fundamental theorem of asset pricing, we know that the price $u(t,r,\lambda)$ of a CDX swaption is given by
\[
u(t,r(t),\lambda(t))=E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,\pi(r(T_0),\lambda(T_0))\,\Big|\,\mathcal F_t\Big].
\]
Therefore, setting for every $t\in[0,T]$
\[
M(t)=e^{-\int_0^t r(u)\,du}\,u(t,r(t),\lambda(t)),
\]
we have
\[
M(t)=E^{\mathbb Q}\big[M(T)\,\big|\,\mathcal F_t\big],
\]
i.e. $M(t)$ is a martingale. Furthermore, by It\^o's lemma for semimartingales, we have
\begin{align*}
M(t)&=M(0)-\int_0^t e^{-\int_0^s r(u)\,du}\,r(s)\,u(s,r(s),\lambda(s))\,ds\\
&\quad+\int_0^t e^{-\int_0^s r(u)\,du}\big[u_t+u_r\,\theta_r(\mu_r-r(s))+u_\lambda\,\theta_\lambda(\mu_\lambda-\lambda(s))\big]ds\\
&\quad+\int_{(0,t]\times\mathbb R^2_+\setminus\{0\}}e^{-\int_0^s r(u)\,du}\,D^{t,r,\lambda}u(y)\,N(ds,dy),
\end{align*}
where $N$ is the Poisson random measure associated to the process $(g_r(t),g_\lambda(t)+\rho g_r(t))$.$^9$ Therefore, if we denote by $\varphi$ the L\'evy density of $(g_r(t),g_\lambda(t)+\rho g_r(t))$, and we add and subtract the compensator to $M(t)$, we obtain, after reversing time, the following PIDE for $u(t,r(t),\lambda(t))$:
\begin{equation}
\begin{cases}
u_t+ru-\nabla u\cdot\alpha(r(t),\lambda(t))-\displaystyle\int_{\mathbb R^2_+\setminus\{0\}}D^{t,r,\lambda}u(y)\,\varphi(y)\,dy=0,\\
u(0,r,\lambda)=\pi(T_0,r,\lambda),\\
\text{+ boundary conditions,}
\end{cases}\tag{4.8}
\end{equation}
where $\nabla u$ is the partial gradient of $u$ with respect to $r$ and $\lambda$ (a two-dimensional row vector) and $\alpha(r,\lambda):=(\theta_r(\mu_r-r),\theta_\lambda(\mu_\lambda-\lambda))^T$. Finally, it is easy to see that, for every $(u_r,u_\lambda)\in\mathbb R^2$,
\[
E^{\mathbb Q}\big[e^{iu_r g_r(t)+iu_\lambda(\rho g_r(t)+g_\lambda(t))}\big]
=E^{\mathbb Q}\big[e^{i(u_r+\rho u_\lambda)g_r(t)}\big]\,E^{\mathbb Q}\big[e^{iu_\lambda g_\lambda(t)}\big]
=e^{t\int_{\{y_\lambda=\rho y_r\}}\left(e^{iu_r y_r+iu_\lambda y_\lambda}-1\right)\varphi_r(y_r)\,dy_r}\;e^{t\int_0^\infty\left(e^{iu_\lambda y_\lambda}-1\right)\varphi_\lambda(y_\lambda)\,dy_\lambda},
\]
which implies
\[
\int_{\mathbb R^2_+\setminus\{0\}}D^{t,r,\lambda}u(y)\,\varphi(y)\,dy=\int_0^\infty D^{t,r,\lambda}u(y_r,\rho y_r)\,\varphi_r(y_r)\,dy_r+\int_0^\infty D^{t,r,\lambda}u(0,y_\lambda)\,\varphi_\lambda(y_\lambda)\,dy_\lambda.
\]
5. Numerical Results

We implemented a finite difference scheme for the valuation PIDE, whose construction is reported in the appendix. The scheme was then tested taking as final condition the payoff of a forward start CDX swap, whose current value admits an integral representation. The proof is based on

$^9$It is easy to see that jumps in $r$ and $\lambda$ correspond to jumps in $g_r$ and $g_\lambda+\rho g_r$, and their magnitude is also the same. In fact,
\begin{align*}
\lim_{s\to 0}\,r(t+s)-r(t)&=\lim_{s\to 0}\int_t^{t+s}e^{-\theta_r(t-u)}\,dg_r(u)=\lim_{s\to 0}\int_0^s e^{\theta_r u}\,dg_r(u+t)\\
&=\lim_{s\to 0}\lim_{n\to\infty}\sum_{k=0}^{n}e^{\theta_r ks/n}\big[g_r(t+(k+1)s/n)-g_r(t+ks/n)\big]\\
&\le\lim_{s\to 0}\lim_{n\to\infty}\sum_{k=0}^{n}e^{\theta_r s}\big[g_r(t+(k+1)s/n)-g_r(t+ks/n)\big]\\
&=\lim_{s\to 0}e^{\theta_r s}\big[g_r(t+s)-g_r(t)\big]=\lim_{s\to 0}\big[g_r(t+s)-g_r(t)\big].
\end{align*}
On the other hand, since $\theta_r>0$,
\[
\lim_{s\to 0}\big[g_r(t+s)-g_r(t)\big]=\lim_{s\to 0}\int_0^s dg_r(u+t)\le\lim_{s\to 0}\int_0^s e^{\theta_r u}\,dg_r(u+t)=\lim_{s\to 0}\,r(t+s)-r(t).
\]
Similar considerations hold for jumps in the default intensity. Therefore, the Poisson random measure associated to $(r(t),\lambda(t))$ must be the same as the one associated to $(g_r(t),g_\lambda(t)+\rho g_r(t))$.
calculations that are similar to those performed in section 4.1.1. In particular, we obtain that, for every $t\in[0,T_0]$, $\ell=1,\dots,M$,
\begin{align*}
&E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,E^{\mathbb Q}\Big[e^{-\int_{T_0}^{T_\ell}r(u)+\lambda(u)\,du}\,\Big|\,\mathcal F_{T_0}\Big]\,\Big|\,\mathcal F_t\Big]\\
&\quad=E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,e^{\xi_r(T_\ell-T_0,r(T_0),-1,0)+\xi_\lambda(T_\ell-T_0,\lambda(T_0),-1,0)}\,\Big|\,\mathcal F_t\Big]\\
&\qquad\times e^{\int_{T_0}^{T_\ell}\int_0^\infty(e^{\psi_r(T_\ell,u,-1,-1,0,0)y}-1)\varphi_r(y)\,dy\,du+\int_{T_0}^{T_\ell}\int_0^\infty(e^{\psi_\lambda(T_\ell,u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\quad=e^{\xi_r(T_0-t,r(t),-1,b_3)+\xi_\lambda(T_0-t,\lambda(t),0,b_4)}\\
&\qquad\times e^{\int_t^{T_0}\int_0^\infty(e^{\psi_r(T_0,u,-1,0,b_3,b_4)y}-1)\varphi_r(y)\,dy\,du+\int_t^{T_0}\int_0^\infty(e^{\psi_\lambda(T_0,u,0,b_4)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\qquad\times e^{\int_{T_0}^{T_\ell}\int_0^\infty(e^{\psi_r(T_\ell,u,-1,-1,0,0)y}-1)\varphi_r(y)\,dy\,du+\int_{T_0}^{T_\ell}\int_0^\infty(e^{\psi_\lambda(T_\ell,u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du},
\end{align*}
and
\begin{align*}
&E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,E^{\mathbb Q}\Big[e^{-\int_{T_0}^{T_{\ell-1}}r(u)+\lambda(u)\,du}\,P(T_{\ell-1},T_\ell)\,\Big|\,\mathcal F_{T_0}\Big]\,\Big|\,\mathcal F_t\Big]\\
&\quad=E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,E^{\mathbb Q}\Big[e^{-\int_{T_0}^{T_{\ell-1}}r(u)+\lambda(u)\,du}\,e^{r(T_{\ell-1})\alpha_3}\,\Big|\,\mathcal F_{T_0}\Big]\,\Big|\,\mathcal F_t\Big]
\times e^{\int_{T_{\ell-1}}^{T_\ell}\int_0^\infty\big(e^{-\frac{1-e^{-\theta_r(T_\ell-u)}}{\theta_r}y}-1\big)\varphi_r(y)\,dy\,du}\\
&\quad=E^{\mathbb Q}\Big[e^{-\int_t^{T_0}r(u)\,du}\,e^{\xi_r(T_{\ell-1}-T_0,r(T_0),-1,\alpha_3)+\xi_\lambda(T_{\ell-1}-T_0,\lambda(T_0),-1,0)}\,\Big|\,\mathcal F_t\Big]\\
&\qquad\times e^{\int_{T_0}^{T_{\ell-1}}\int_0^\infty(e^{\psi_r(T_{\ell-1},u,-1,-1,\alpha_3,0)y}-1)\varphi_r(y)\,dy\,du+\int_{T_0}^{T_{\ell-1}}\int_0^\infty(e^{\psi_\lambda(T_{\ell-1},u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\qquad\times e^{\int_{T_{\ell-1}}^{T_\ell}\int_0^\infty\big(e^{-\frac{1-e^{-\theta_r(T_\ell-u)}}{\theta_r}y}-1\big)\varphi_r(y)\,dy\,du}\\
&\quad=e^{\xi_r(T_0-t,r(t),-1,a_3)+\xi_\lambda(T_0-t,\lambda(t),0,a_4)}\\
&\qquad\times e^{\int_t^{T_0}\int_0^\infty(e^{\psi_r(T_0,u,-1,0,a_3,a_4)y}-1)\varphi_r(y)\,dy\,du+\int_t^{T_0}\int_0^\infty(e^{\psi_\lambda(T_0,u,0,a_4)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\qquad\times e^{\int_{T_0}^{T_{\ell-1}}\int_0^\infty(e^{\psi_r(T_{\ell-1},u,-1,-1,\alpha_3,0)y}-1)\varphi_r(y)\,dy\,du+\int_{T_0}^{T_{\ell-1}}\int_0^\infty(e^{\psi_\lambda(T_{\ell-1},u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\qquad\times e^{\int_{T_{\ell-1}}^{T_\ell}\int_0^\infty\big(e^{-\frac{1-e^{-\theta_r(T_\ell-u)}}{\theta_r}y}-1\big)\varphi_r(y)\,dy\,du},
\end{align*}
where
\[
\alpha_3:=-\frac{1-e^{-\theta_r(T_\ell-T_{\ell-1})}}{\theta_r},\qquad
a_3:=-\frac{1-e^{-\theta_r(T_{\ell-1}-T_0)}}{\theta_r}+\alpha_3 e^{-\theta_r(T_{\ell-1}-T_0)},\qquad
a_4:=-\frac{1-e^{-\theta_\lambda(T_{\ell-1}-T_0)}}{\theta_\lambda},
\]
\[
b_3:=-\frac{1-e^{-\theta_r(T_\ell-T_0)}}{\theta_r},\qquad
b_4:=-\frac{1-e^{-\theta_\lambda(T_\ell-T_0)}}{\theta_\lambda}.
\]
We considered as before the following set of parameters,
\[
r(0)=0.0146,\ \theta_r=0.5500,\ c_r=400.0005,\ \gamma_r=3.9475,\ \rho=0.1548;
\]
\[
\lambda(0)=0,\ \theta_\lambda=3.3533,\ c_\lambda=4.3178,\ \gamma_\lambda=6.0617,\ c_\tau=3.5298,\ \gamma_\tau=190.0001,
\]
and we also assumed that the forward contract matures in 15 days, while the underlying asset is a 5-year receiver swap with recovery rate of 0.4, strike $\kappa=60$ bps and semiannual payments. Figure 4 shows the price surface for the forward-start credit index swap generated by solving (4.8) assuming $N=50$ (left) and $N=100$ (right), and with initial condition given by the payoff of the swap at maturity of the forward contract. The $\ell^\infty$ absolute error is plotted in Figure 5(a) for strikes $\kappa=50:10:100$ bps. Note that even to compute the analytical solutions certain integrations were performed numerically. Figure 5(b) shows the $\ell^\infty$ absolute error of the solution computed via Monte Carlo simulation. For both the PIDE and Monte Carlo cases the error is relatively high, and although one could reduce the error, for instance by more accurately computing the gamma time-changed gamma L\'evy density (4.4) (at the price of higher computational costs), we observe that the error is less than or at least comparable with the bid-ask spread observed in the option market, which, in the period considered, is at least 2 bps.
In the case of a forward contract, the actual price of the contract, including the front end protection, can be analytically computed. In particular, we have
\begin{align*}
&E^{\mathbb Q}\Big[e^{-\int_t^{T_\ell}r(u)\,du}\,1\!\!1_{\{\tau^i>T_{\ell-1}\}}\,\Big|\,\mathcal F_t\vee\mathcal H_t\Big]
=1\!\!1_{\{\tau^i>t\}}\,E^{\mathbb Q}\Big[e^{-\int_t^{T_{\ell-1}}r(u)+\lambda(u)\,du}\,P(T_{\ell-1},T_\ell)\,\Big|\,\mathcal F_t\Big]\\
&\quad=1\!\!1_{\{\tau^i>t\}}\,E^{\mathbb Q}\Big[e^{-\int_t^{T_{\ell-1}}r(u)+\lambda(u)\,du}\,e^{r(T_{\ell-1})\alpha_3}\,\Big|\,\mathcal F_t\Big]\,
e^{\int_{T_{\ell-1}}^{T_\ell}\int_0^\infty\big(e^{-\frac{1-e^{-\theta_r(T_\ell-u)}}{\theta_r}y}-1\big)\varphi_r(y)\,dy\,du}\\
&\quad=1\!\!1_{\{\tau^i>t\}}\,e^{\xi_r(T_{\ell-1}-t,r(t),-1,\alpha_3)+\xi_\lambda(T_{\ell-1}-t,\lambda(t),-1,0)}\\
&\qquad\times e^{\int_t^{T_{\ell-1}}\int_0^\infty(e^{\psi_r(T_{\ell-1},u,-1,-1,\alpha_3,0)y}-1)\varphi_r(y)\,dy\,du+\int_t^{T_{\ell-1}}\int_0^\infty(e^{\psi_\lambda(T_{\ell-1},u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du}\\
&\qquad\times e^{\int_{T_{\ell-1}}^{T_\ell}\int_0^\infty\big(e^{-\frac{1-e^{-\theta_r(T_\ell-u)}}{\theta_r}y}-1\big)\varphi_r(y)\,dy\,du},
\end{align*}
and
\begin{align*}
E^{\mathbb Q}\Big[e^{-\int_t^{T_\ell}r(u)\,du}\,1\!\!1_{\{\tau^i>T_\ell\}}\,\Big|\,\mathcal F_t\vee\mathcal H_t\Big]
&=1\!\!1_{\{\tau^i>t\}}\,E^{\mathbb Q}\Big[e^{-\int_t^{T_\ell}r(u)+\lambda(u)\,du}\,\Big|\,\mathcal F_t\Big]\\
&=1\!\!1_{\{\tau^i>t\}}\,e^{\xi_r(T_\ell-t,r(t),-1,0)+\xi_\lambda(T_\ell-t,\lambda(t),-1,0)}\\
&\quad\times e^{\int_t^{T_\ell}\int_0^\infty(e^{\psi_r(T_\ell,u,-1,-1,0,0)y}-1)\varphi_r(y)\,dy\,du+\int_t^{T_\ell}\int_0^\infty(e^{\psi_\lambda(T_\ell,u,-1,0)y}-1)\varphi_\lambda(y)\,dy\,du},
\end{align*}
where, as before,
\[
\alpha_3=-\frac{1-e^{-\theta_r(T_\ell-T_{\ell-1})}}{\theta_r}.
\]
Figure 6 shows that the value of the front end protection is relatively small for this set of parameters, although, as noted for instance in Brigo \& Morini (2011), its value can be substantial for higher values of $\lambda(0)$.

We now turn our attention to the option contracts on a CDX index. The numerical price is shown in Figure 7(a), while the $\ell^\infty$ absolute error with respect to the Monte Carlo generated surface for $\kappa=50:10:100$ and for $r(0)=146$ bps is shown in Figure 7(b).

Finally, the question of convergence of the numerical method for the case of an option payoff is addressed. For $r(0)=146$ bps, consider CDX spreads $\kappa$ in the range 60 to 50 bps. The resulting prices for various values of $N$, reported in Table 2, show that convergence up to the second decimal (in bps) is obtained already for $N=50$.
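The convergence pattern in Table 2 can be summarized with a quick fit of $p(N)\approx p_\infty+CN^{-q}$ to the reported $\kappa=60$ bps prices; the fit below is ours and only illustrates that successive refinements shrink at a roughly algebraic rate.

```python
import numpy as np

# kappa = 60 bps prices from Table 2
N = np.array([50, 100, 150, 200, 250], dtype=float)
p = np.array([53.98734, 53.98690, 53.98675, 53.98669, 53.98665])

# successive refinements shrink monotonically, consistent with convergence
d = np.abs(np.diff(p))
print(d)

# crude order estimate from a log-log fit of |p(N) - p(250)| against N,
# using the finest grid as a proxy for the converged value
err = np.abs(p[:-1] - p[-1])
q, logC = np.polyfit(np.log(N[:-1]), np.log(err), 1)
print(-q)   # estimated convergence order
```

Using the finest grid as the reference biases the estimate slightly, but the slope is clearly close to two for these data.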
|
1927 |
+
|
1928 |
+
A PURE JUMP MODEL FOR THE VALUATION OF OPTIONS ON A CREDIT INDEX
|
1929 |
+
17
|
1930 |
+
(a)
|
1931 |
+
(b)
|
1932 |
+
Figure 4. Numerical price surface (in bps) for a forward-start credit index swap, assuming
|
1933 |
+
N = 50 (a) and N = 100 (b), and for M = 100 and Nsim = 100.
|
1934 |
+
5
|
1935 |
+
5.5
|
1936 |
+
6
|
1937 |
+
6.5
|
1938 |
+
7
|
1939 |
+
7.5
|
1940 |
+
8
|
1941 |
+
8.5
|
1942 |
+
9
|
1943 |
+
9.5
|
1944 |
+
10
|
1945 |
+
10-3
|
1946 |
+
1.485
|
1947 |
+
1.49
|
1948 |
+
1.495
|
1949 |
+
1.5
|
1950 |
+
1.505
|
1951 |
+
1.51
|
1952 |
+
1.515
|
1953 |
+
1.52
|
1954 |
+
1.525
|
1955 |
+
1.53
|
1956 |
+
(a)
|
1957 |
+
5
|
1958 |
+
5.5
|
1959 |
+
6
|
1960 |
+
6.5
|
1961 |
+
7
|
1962 |
+
7.5
|
1963 |
+
8
|
1964 |
+
8.5
|
1965 |
+
9
|
1966 |
+
9.5
|
1967 |
+
10
|
1968 |
+
10-3
|
1969 |
+
4.62
|
1970 |
+
4.64
|
1971 |
+
4.66
|
1972 |
+
4.68
|
1973 |
+
4.7
|
1974 |
+
4.72
|
1975 |
+
4.74
|
1976 |
+
4.76
|
1977 |
+
4.78
|
1978 |
+
4.8
|
1979 |
+
(b)
|
1980 |
+
Figure 5. ℓ∞ absolute error (a) and ℓ∞ difference (in bps) with Montecarlo generated
|
1981 |
+
price surface (b) for strikes κ = 50 : 10 : 100 bps.
|
1982 |
+
(a)
|
1983 |
+
(b)
|
1984 |
+
Figure 6. Price surface (in bps) of a forward-start CDX including FEP on a 50 × 50 grid
|
1985 |
+
(a) and comparison with the numerical solution (excluding FEP) of 9.1 for strike κ = 100
|
1986 |
+
bps.
|
A PURE JUMP MODEL FOR THE VALUATION OF OPTIONS ON A CREDIT INDEX

[Figure 7: surface plots omitted.]
Figure 7. Receiver CDX option price surface for κ = 60 bps and N = 50 (left) and ℓ∞ absolute difference with the Monte Carlo generated price surface for κ = 50 : 10 : 100.
N/κ    60 bps     50 bps     CPU time
50     53.98734   12.05898   4.656107e+00
100    53.98690   12.05998   1.225736e+01
150    53.98675   12.06023   2.448375e+01
200    53.98669   12.06029   4.205160e+01
250    53.98665   12.06030   6.590890e+01

Table 2. CDXO price (in bps) and CPU time for different values of N and strike κ (in bps), for M = Nsim = 100.
6. Comparison with Market Data

We calibrated the model to the Treasury yield curve (from www.treasury.gov) and to CDX option prices (provided by Morgan Stanley) as of 2 January 2020, across traded strikes and for each traded maturity. Strike prices are expressed in terms of CDX spreads, and they range from 42.5 bps up to 120 bps. Traded maturities are 13, 43, 76, 104, 139 and 167 business days. The spot CDX spread as of 2 January 2020 was 44 bps. We considered strikes that are up to 30% out of the money (OTM) for receiver and payer contracts for each available maturity.10 Calibrated parameters for the short rate are the same as those considered above, while those for the default intensity are reported in table 3. Figure 8 compares the corresponding OTM model and market prices.
Term (years)   θλ       ρ        cλ        γλ       cτ         γτ
0.04           0.1562   0.7869   20.3292   4.1223   604.0000   3.3192
0.13           3.3533   0.1548   4.3178    6.0617   190.0001   3.5298
0.21           2.6789   0.1115   6.1313    2.6983   101.2590   3.6123
0.29           0.0026   0.1280   18.7756   5.1836   312.5091   2.5903
0.39           0.0010   0.1000   10.0981   4.4205   818.1465   4.9855
0.46           0.0010   0.1000   82.2892   1.0241   45.8397    8.4584

Table 3

10 A receiver option, i.e. an option to sell protection, is OTM if the spot spread is higher than the strike spread, while a payer option is OTM if the spot spread is lower than the strike spread.
[Figure 8: plot omitted.]
Figure 8. OTM mid price (asterisk) and model price (circle) for maturities and strikes traded on 2 January 2020. Each color corresponds to one of the following strikes: 42.5, 45, 47.5, 50, 52.5, 55, 57.5 (in bps). Maturities are as reported in table 3 and model prices are computed using the corresponding parameters also reported in table 3.
It is also possible to compare market and model implied summary statistics (variance, skewness, kurtosis) of credit spreads for a specific maturity. In particular, as shown above, the payoff of a receiver CDX option maturing at time T0 with strike spread c is given by

π(T0) = ( c E^Q[A(T0)|F_{T0}] − δ E^Q[Φ(T0)|F_{T0}] )^+,

while the spot credit spread c(T0) satisfies

c(T0) E^Q[A(T0)|F_{T0}] = δ E^Q[Φ(T0)|F_{T0}].

Therefore, we have

π(T0) = E^Q[A(T0)|F_{T0}] (c − c(T0))^+.

Taking the index annuity as numeraire,11 and letting QA denote the associated probability measure, the prices ur(t, c) and up(t, c) of a receiver/payer option at time t are given by

ur(t, c) = E^Q[A(t)|F_t] E^{QA}[(c − c(T0))^+ | F_t],   up(t, c) = E^Q[A(t)|F_t] E^{QA}[(c(T0) − c)^+ | F_t].

We can then use a result due to Madan and Carr (see Carr & Madan (2001)), according to which a twice continuously differentiable payoff function H(c) ∈ C^2(R) can be written as

H(c) = H(ĉ) + (c − ĉ) H′(ĉ) + ∫_ĉ^∞ H′′(K)(c − K)^+ dK + ∫_0^ĉ H′′(K)(K − c)^+ dK,   (6.1)

where ĉ ≥ 0 is arbitrary. This allows one to compute model-free summary statistics of the spot credit spread c(T0) under QA.
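As a sanity check, the spanning identity 6.1 can be verified numerically for a smooth payoff. The sketch below is illustrative only: the payoff, the expansion point and the strike grid are our own choices, not taken from the paper. It applies 6.1 to the cubic contract H(c) = (c − cf)^3 with ĉ = cf.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Carr-Madan spanning (6.1):
# H(c) = H(chat) + (c - chat) H'(chat)
#        + int_chat^inf H''(K) (c - K)^+ dK + int_0^chat H''(K) (K - c)^+ dK
cf = 0.005                       # hypothetical forward spread (50 bps)
chat = cf                        # expansion point, as in the text
H = lambda c: (c - cf) ** 3      # the cubic contract
H1 = lambda c: 3.0 * (c - cf) ** 2
H2 = lambda c: 6.0 * (c - cf)

c = 0.0074                       # spread value at which the identity is tested
K = np.linspace(0.0, 0.05, 200001)   # strike grid; 0.05 proxies infinity

rhs = (H(chat) + (c - chat) * H1(chat)
       + trapz(H2(K) * np.maximum(c - K, 0.0) * (K >= chat), K)
       + trapz(H2(K) * np.maximum(K - c, 0.0) * (K < chat), K))

print(abs(rhs - H(c)))           # quadrature error only
```

The discrepancy printed is pure quadrature error; the identity itself is exact.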
To do so, define the volatility, cubic and quartic contracts as (c − cf)^2, (c − cf)^3, (c − cf)^4, where cf = E^{QA}[c(T0)] is the forward credit spread. Setting ĉ = cf,

11 Technically, the index annuity may be null on a set of positive measure. This happens in the case of an armageddon event, i.e. all the entities in the index default prior to the option expiration. Our assumption that such an event has approximately zero probability (which is more likely the shorter the maturity of the option) ensures that the approximation error in taking the index annuity as numeraire is small enough.
6.1 implies

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^2] = 2 E^Q_0[A(0)] E^{QA}_0[ ∫_{cf}^∞ (c(T0) − c)^+ dc + ∫_0^{cf} (c − c(T0))^+ dc ],

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^3] = 6 E^Q_0[A(0)] E^{QA}_0[ ∫_{cf}^∞ (c − cf)(c(T0) − c)^+ dc + ∫_0^{cf} (c − cf)(c − c(T0))^+ dc ],

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^4] = 12 E^Q_0[A(0)] E^{QA}_0[ ∫_{cf}^∞ (c − cf)^2 (c(T0) − c)^+ dc + ∫_0^{cf} (c − cf)^2 (c − c(T0))^+ dc ],
which imply, under reasonable assumptions on c(T0),

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^2] = 2 ∫_{cf}^∞ up(0, c) dc + 2 ∫_0^{cf} ur(0, c) dc,   (6.2)

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^3] = 6 ∫_{cf}^∞ (c − cf) up(0, c) dc + 6 ∫_0^{cf} (c − cf) ur(0, c) dc,   (6.3)

E^Q_0[A(0)] E^{QA}_0[(c(T0) − cf)^4] = 12 ∫_{cf}^∞ (c − cf)^2 up(0, c) dc + 12 ∫_0^{cf} (c − cf)^2 ur(0, c) dc.   (6.4)
Assuming that non-traded deep OTM options have zero price, one can think of 6.2, 6.3 and 6.4 as reasonable approximations of the three central moments of c(T0), multiplied by the current value of the index annuity. Note also that calculation of cf is straightforward since, following the standard put-call parity argument, the price fp(t, c) at time t of a payer forward with spread c is

fp(T0, cf) = ( δ E^Q[Φ(T0)|F_{T0}] − cf E^Q[A(T0)|F_{T0}] )^+ − ( cf E^Q[A(T0)|F_{T0}] − δ E^Q[Φ(T0)|F_{T0}] )^+ = up(T0, cf) − ur(T0, cf),

and so fp(0, cf) = up(0, cf) − ur(0, cf). Since fp(0, 0) = E^Q_0[A(0)] E^{QA}_0[c(T0)] = E^Q_0[A(0)] cf, the no-arbitrage model-free value of the annuity is

E^Q_0[A(0)] = fp(0, 0)/cf ≈ up(0, 0)/cf.   (6.5)
Then, the market implied spread's variance µ2, skewness µ3 and kurtosis µ4 under QA are:

µ2 := E^{QA}[(c(T0) − cf)^2] ≈ (2cf / fp(0, 0)) [ ∫_{cf}^∞ up(0, c) dc + ∫_0^{cf} ur(0, c) dc ],

µ3 := E^{QA}[(c(T0) − cf)^3] / E^{QA}[(c(T0) − cf)^2]^{3/2} ≈ (6cf / (µ2^{3/2} fp(0, 0))) [ ∫_{cf}^∞ (c − cf) up(0, c) dc + ∫_0^{cf} (c − cf) ur(0, c) dc ],

µ4 := E^{QA}[(c(T0) − cf)^4] / E^{QA}[(c(T0) − cf)^2]^2 ≈ (12cf / (µ2^2 fp(0, 0))) [ ∫_{cf}^∞ (c − cf)^2 up(0, c) dc + ∫_0^{cf} (c − cf)^2 ur(0, c) dc ].
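The recovery of µ2 and µ3 from option prices can be exercised on synthetic data. In the sketch below (a self-contained illustration, not the paper's model) c(T0) is taken lognormal under QA with a hypothetical forward cf and log-volatility σ, the annuity is normalized so that fp(0, 0) = cf, and the strike integrals of 6.2–6.3 are compared with the exact lognormal central moments.

```python
import numpy as np
from math import log, sqrt, exp, erf

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal cdf
cf, sigma = 0.005, 0.4       # hypothetical forward spread and log-volatility

def up(K):
    # payer price E[(c(T0) - K)^+] for lognormal c(T0) with mean cf
    if K <= 0.0:
        return cf
    d1 = (log(cf / K) + 0.5 * sigma ** 2) / sigma
    return cf * Phi(d1) - K * Phi(d1 - sigma)

def ur(K):
    # receiver price E[(K - c(T0))^+] via put-call parity
    return up(K) - (cf - K)

Ks = np.linspace(1e-6, 0.05, 20001)
upv = np.array([up(K) for K in Ks])
urv = np.array([ur(K) for K in Ks])
hi, lo = Ks >= cf, Ks < cf

m2 = 2.0 * (trapz(upv * hi, Ks) + trapz(urv * lo, Ks))                          # 6.2
m3 = 6.0 * (trapz((Ks - cf) * upv * hi, Ks) + trapz((Ks - cf) * urv * lo, Ks))  # 6.3

w = exp(sigma ** 2)          # exact lognormal central moments for comparison
print(m2, cf ** 2 * (w - 1.0))
print(m3, cf ** 3 * (w - 1.0) ** 2 * (w + 2.0))
```

With this strike grid the recovered moments agree with the exact ones up to truncation and quadrature error.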
As shown in table 4, model and market implied spread statistics under QA are relatively close, evidencing the accuracy of model 4.2 in explaining short rate and default intensity dynamics. Note in particular that model 4.2 is able to capture the positive skewness and leptokurtic feature of CDX spreads under the measure QA.
Variance
Term   Market Implied   Model Implied
0.04   1.054514e+01     7.459700e+00
0.13   5.865457e+01     6.210797e+01
0.21   1.100104e+02     2.152011e+02
0.29   1.812481e+02     2.623242e+02
0.39   2.722237e+02     5.018296e+02
0.46   3.496845e+02     2.280879e+02

Skewness
Term   Market Implied   Model Implied
0.04   8.599558e-01     2.520823e-01
0.13   2.463146e+00     2.974330e+00
0.21   2.660094e+00     3.507166e+00
0.29   3.221129e+00     3.992535e+00
0.39   3.144701e+00     3.809120e+00
0.46   2.896963e+00     2.457448e+00

Kurtosis
Term   Market Implied   Model Implied
0.04   2.308373e+00     2.732897e+00
0.13   9.452965e+00     1.140996e+01
0.21   1.066158e+01     1.230041e+01
0.29   1.566200e+01     1.788540e+01
0.39   1.458586e+01     1.487553e+01
0.46   1.212721e+01     1.028359e+01

Table 4. Market and model implied CDX spread statistics for the maturities traded on 2 January 2020.
It is also worth noting that, under 4.2,

E^{QA}_0[H(c(T0))] = E^Q_0[ e^{−∫_0^{T0} r(u)du} Σ_{ℓ=1}^M (Tℓ − Tℓ−1) E^Q[ e^{−∫_{T0}^{Tℓ} (r(u)+λ(u))du} | F_{T0} ] H(c(T0)) ] / Σ_{ℓ=1}^M (Tℓ − Tℓ−1) E^Q_0[ e^{−∫_0^{Tℓ} (r(u)+λ(u))du} ],   (6.6)
which can be computed via Monte Carlo simulation. We performed such computation, but the resulting model implied variance, skewness and kurtosis are not in line with those computed above, ultimately because the statistics in table 4 assume that the prices of deep OTM options are zero.
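Formula 6.6 is a ratio of plain Q-expectations, so a Monte Carlo estimate only needs, per path, the discount factor to T0, the conditional coupon-date discount factors and the spread. The sketch below shows the ratio-estimator structure with arbitrary illustrative inputs standing in for the paper's simulated gamma dynamics; every distribution and parameter here is a placeholder of our own, not the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, M = 200000, 4
dt = np.full(M, 0.25)                       # accrual fractions T_l - T_{l-1}

# Placeholder simulated quantities (NOT the paper's model):
# D0[p]   stands in for e^{-int_0^{T0} r du} on path p
# Dl[p,l] stands in for E^Q[e^{-int_{T0}^{T_l} (r+lambda) du} | F_{T0}]
D0 = np.exp(-0.005 - 0.002 * rng.standard_normal(n_paths))
Dl = np.exp(-0.03 * np.arange(1, M + 1)[None, :]
            - 0.01 * np.abs(rng.standard_normal((n_paths, M))))
c_T0 = 0.005 * np.exp(0.2 * rng.standard_normal(n_paths))   # spread at T0
H = lambda c: (c - 0.005) ** 2              # e.g. the volatility contract

annuity = (dt[None, :] * Dl).sum(axis=1)    # per-path annuity value at T0
num = np.mean(D0 * annuity * H(c_T0))       # numerator of 6.6
den = np.mean(D0 * annuity)                 # denominator of 6.6
print(num / den)                            # estimate of E^{QA}_0[H(c(T0))]
```

Because the placeholder spread is drawn independently of the discount factors, the ratio here simply recovers the unweighted mean of H; in the model the annuity weighting is what distinguishes QA from Q.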
[Figure 9: plots omitted; panels show (a) Variance, (b) Skewness, (c) Kurtosis.]
Figure 9. Daily market implied and model statistics under QA.
Table 5 shows that model and market implied moments are significantly correlated between 2 January 2020 and 5 June 2020. Calibration was performed each day using the Nelder-Mead algorithm, taking the optimal parameters from the previous day as starting point with a maximum of 100 iterations.
Statistics   Correlation Coefficient
Variance     0.8892
Skewness     0.3238
Kurtosis     0.4759

Table 5. Correlation coefficient for the time series of market and model implied spread statistics.

Finally, figure 10 shows the daily realized short rate, default intensity and the parameter ρ.
7. Conclusions

We introduced a pure-jump dynamics for the simultaneous modelling of the short rate and default intensity of a pool of entities with similar credit qualities, with the former being a gamma process and the latter also a gamma process but subordinated to another (independent) gamma process. We tested a simple finite difference scheme for the valuation PIDE for forward and option contracts on
[Figure 10: plots omitted.]
Figure 10. Daily realized short rate, hazard rate and parameter ρ.
derivatives determined by short rate and default intensity, and showed that the numerical solution approximates the exact one or a simulated one within a reasonable margin of error. We calibrated the model to the CDX option price surface. For 2 January 2020, the calibration error is generally low, but it can be substantial, especially as maturity increases. Finally, we derived a market implied formula for variance, skewness and kurtosis of the credit spread under the annuity measure, and reported that market and model implied statistics over the year 2020 are of similar magnitude and positively correlated.
8. Acknowledgement

This paper is a revised version of a chapter of the author's doctoral dissertation, conducted under the supervision of Prof. Dilip B. Madan at the University of Maryland, College Park.
9. Appendix: The Finite Difference Scheme for the Valuation PIDE

In our finite difference approximation, we treat the integral term fully explicitly. Consider the following mesh on the region [0, T] × [0, rmax] × [0, λmax]:

D = { tj = j∆t, ∆t = T/M, j = 0, 1, ..., M;  ri = i∆r, ∆r = rmax/N, i = 0, 1, ..., N;  λk = k∆λ, ∆λ = λmax/L, k = 0, 1, ..., L }.
We pick λmax = ρrmax and L = N. We denote by (tj, ri, λk) ∈ R^3_+ the grid points in D, and let u^j_{i,k} = u(tj, ri, λk). Assuming that the (N + 1)^2 values u^j_{i,k} are known for fixed tj, we need to construct the difference equation for each point (tj+1, ri, λk). Space and time derivatives are approximated using central and forward differences respectively, i.e.

ur(tj+1, ri, λk) = (u^{j+1}_{i+1,k} − u^{j+1}_{i−1,k}) / (2∆r) + O(∆r^2),   uλ(tj+1, ri, λk) = (u^{j+1}_{i,k+1} − u^{j+1}_{i,k−1}) / (2∆λ) + O(∆λ^2),

ut(tj+1, ri, λk) = (u^{j+1}_{i,k} − u^j_{i,k}) / ∆t + O(∆t).
We then obtain the following equation for the point (tj+1, ri, λk):

(u^{j+1}_{i,k} − u^j_{i,k}) / ∆t + ri u^{j+1}_{i,k} − α1 (u^{j+1}_{i+1,k} − u^{j+1}_{i−1,k}) / (2∆r) − α2 (u^{j+1}_{i,k+1} − u^{j+1}_{i,k−1}) / (2∆λ) ≈ ∫_0^∞ D^{tj,ri,λk}_u(yr, ρyr) ϕr(yr) dyr + ∫_0^∞ D^{tj,ri,λk}_u(0, yλ) ϕλ(yλ) dyλ.

Equivalently, we have

u^{j+1}_{i,k}(1 + ∆t ri) − (∆t α1 / (2∆r)) (u^{j+1}_{i+1,k} − u^{j+1}_{i−1,k}) − (∆t α2 / (2∆λ)) (u^{j+1}_{i,k+1} − u^{j+1}_{i,k−1}) ≈ u^j_{i,k} + ∆t [ ∫_0^∞ D^{tj,ri,λk}_u(yr, ρyr) ϕr(yr) dyr + ∫_0^∞ D^{tj,ri,λk}_u(0, yλ) ϕλ(yλ) dyλ ].   (9.1)
The integral terms in 9.1 can be treated easily via Monte Carlo simulation. Specifically, at the grid point (tj, ri, λk), having generated Nsim exponentially distributed random variables {Y^r_s}_{s=1,...,Nsim} with parameter cr, we have

∫_0^∞ D^{tj,ri,λk}_u(yr, ρyr) ϕr(yr) dyr ≈ (1/Nsim) Σ_{s=1}^{Nsim} D^{tj,ri,λk}_u(Y^r_s, ρY^r_s) γr / (cr Y^r_s),
and similarly for the second integral. For every s = 1, ..., Nsim, we compute D^{tj,ri,λk}_u(Y^r_s, ρY^r_s) and D^{tj,ri,λk}_u(0, Y^λ_s) by linearly interpolating u^j, and obtain the following difference equation

−S_{i,k} u^{j+1}_{i,k−1} − W_{i,k} u^{j+1}_{i−1,k} + C_{i,k} u^{j+1}_{i,k} − E_{i,k} u^{j+1}_{i+1,k} − N_{i,k} u^{j+1}_{i,k+1} = u^j_{i,k} + ∆t R^j_{i,k},   (9.2)
where

S_{i,k} = −∆t α2 / (2∆λ),  W_{i,k} = −∆t α1 / (2∆r),  E_{i,k} = ∆t α1 / (2∆r),  C_{i,k} = 1 + ∆t ri,  N_{i,k} = ∆t α2 / (2∆λ),

R^j_{i,k} = (1/Nsim) Σ_{s=1}^{Nsim} D^{tj,ri,λk}_u(Y^r_s, ρY^r_s) γr / (cr Y^r_s) + (1/Nsim) Σ_{s=1}^{Nsim} D^{tj,ri,λk}_u(0, Y^λ_s) γλ / (cλ Y^λ_s).
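The weight γr/(cr Y^r_s) is an importance-sampling correction: for Y ~ Exp(cr) with density cr e^{−cr y}, one has E[f(Y) γr/(cr Y)] = ∫_0^∞ f(y) γr e^{−cr y}/y dy, i.e. the integral of f against the gamma Lévy density. The sketch below checks this on f(y) = y², whose integral is γr/cr² in closed form (the parameters are illustrative, not calibrated values).

```python
import numpy as np

rng = np.random.default_rng(42)
gamma_r, c_r = 4.0, 20.0           # illustrative Levy parameters
Nsim = 1_000_000

Y = rng.exponential(scale=1.0 / c_r, size=Nsim)   # Y ~ Exp(c_r)
f = lambda y: y ** 2                              # stand-in for D_u(y, rho*y)
estimate = np.mean(f(Y) * gamma_r / (c_r * Y))    # importance-sampled integral
print(estimate, gamma_r / c_r ** 2)               # both close to 0.01
```

The 1/y singularity of the Lévy density is exactly cancelled by the f(y) factors arising from the interpolated difference operator, which is what keeps the weighted estimator finite.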
Implementation of Boundary Conditions. We impose homogeneous Neumann boundary conditions for each time t ∈ [0, T]:

uλλ(t, r, λL) = 0,  uλλ(t, r, 0) = 0,  urr(t, 0, λ) = 0,  urr(t, rN, λ) = 0.   (9.3)

We thus solve

(9.4)  ut + ru − ∇u · α = ∫_0^∞ D^{t,r,λ}_u(yr, ρyr) ϕr(yr) dyr + ∫_0^∞ D^{t,r,λ}_u(0, yλ) ϕλ(yλ) dyλ,
       uλλ(t, r, λL) = 0,  uλλ(t, r, 0) = 0,  urr(t, 0, λ) = 0,  urr(t, rN, λ) = 0.

We implement 9.3 by linear extrapolation at the boundaries, i.e. we set

u^{j+1}_{0,k} = 2u^{j+1}_{1,k} − u^{j+1}_{2,k},   u^{j+1}_{N,k} = 2u^{j+1}_{N−1,k} − u^{j+1}_{N−2,k},   u^{j+1}_{i,L+1} = 2u^{j+1}_{i,L} − u^{j+1}_{i,L−1},   u^{j+1}_{i,0} = 2u^{j+1}_{i,1} − u^{j+1}_{i,2}.
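The ghost values above are linear extrapolations, which is precisely what makes the discrete second difference vanish at the boundary-adjacent node. A one-line check with arbitrary grid values:

```python
import numpy as np

u = np.sin(np.linspace(0.3, 1.0, 8))    # arbitrary smooth grid values
u[0] = 2.0 * u[1] - u[2]                # ghost value, as in the implementation of 9.3
second_diff = u[0] - 2.0 * u[1] + u[2]  # centered u_rr (times dr^2) at node 1
print(second_diff)                      # zero up to machine precision
```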
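One backward time step of 9.2 amounts to solving a sparse linear system for u^{j+1}. The sketch below assembles and solves that system on a deliberately small grid, with constant drifts α1, α2, with the jump term R set to zero, and with index clamping at the boundary rows instead of the extrapolation 9.3; every size and parameter is illustrative only.

```python
import numpy as np

N = L = 20                      # r- and lambda-grid (i, k = 0..N)
M_steps, T, rmax, lmax = 10, 0.5, 0.05, 0.1
dt, dr, dl = T / M_steps, rmax / N, lmax / L
r = np.arange(N + 1) * dr
a1, a2 = 0.01, 0.02             # placeholder drifts alpha_1, alpha_2
E = dt * a1 / (2 * dr)          # E_{i,k} = -W_{i,k}
Nc = dt * a2 / (2 * dl)         # N_{i,k} = -S_{i,k}

def step(u_old):
    """Solve the linear system of 9.2 for u^{j+1} given u^j, with R = 0."""
    n = (N + 1) * (L + 1)
    idx = lambda i, k: i * (L + 1) + k
    A = np.zeros((n, n))
    for i in range(N + 1):
        for k in range(L + 1):
            row = idx(i, k)
            A[row, row] += 1.0 + dt * r[i]            # C_{i,k}
            A[row, idx(min(i + 1, N), k)] -= E        # -E_{i,k}
            A[row, idx(max(i - 1, 0), k)] += E        # -W_{i,k}
            A[row, idx(i, min(k + 1, L))] -= Nc       # -N_{i,k}
            A[row, idx(i, max(k - 1, 0))] += Nc       # -S_{i,k}
    return np.linalg.solve(A, u_old.reshape(n)).reshape(N + 1, L + 1)

u = np.ones((N + 1, L + 1))     # terminal condition u(T, ., .) = 1
for _ in range(M_steps):
    u = step(u)
print(u[0, 0])                  # discount-like value at r = lambda = 0
```

With these drifts the value at r = 0 should sit just below 1 (the continuous-time counterpart is exp(−a1 T²/2)); a production implementation would use a sparse solver and add the Monte Carlo term R^j_{i,k} to the right-hand side at each step.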
References

Armstrong, A., & Rutkowski, M. 2009. Valuation of Credit Default Index Swaps and Swaptions. International Journal of Theoretical and Applied Finance, 12(7), 1027–1053.
Barndorff-Nielsen, O. E. 1998. Processes of Normal Inverse Gaussian Type. Finance and Stochastics, 2, 41–68.
Bielecki, T. R., & Rutkowski, M. 2002. Credit Risk: Modeling, Valuation and Hedging. Springer Finance.
Brigo, D., & Morini, M. 2011. No-armageddon measure for arbitrage-free pricing of index options in a credit crisis. Mathematical Finance, 21, 583–593.
Carr, P., & Madan, D. 2001. Optimal positioning in derivative securities. Quantitative Finance, 1, 19–37.
Doctor, S., & Goulden, J. 2007. An introduction to credit index options and credit volatility. J.P. Morgan Credit Derivatives Research, 2007.
Duffie, D., & Kan, R. 1996. A yield-factor model of interest rates. Mathematical Finance, 6(4), 379–406.
Duffie, D., & Garleanu, N. 2001. Risk and Valuation of Collateralized Debt Obligations. Financial Analysts Journal, 57(1), 41–59.
Duffie, D., & Singleton, K. J. 1999. Modeling term structures of defaultable bonds. Review of Financial Studies, 12(4), 687–720.
Duffie, D., Pan, J., & Singleton, K. J. 2000. Transform Analysis and Asset Pricing for Affine Jump-Diffusions. Econometrica, 68(6), 1343–1376.
Eberlein, E., Madan, D., Pistorius, M., & Yor, M. 2013. A Simple Stochastic Rate Model for Rate Equity Hybrid Products. Applied Mathematical Finance, 20(5), 461–488.
Elliott, R., Jeanblanc, M., & Yor, M. 2000. On models of default risk. Mathematical Finance, 10, 179–195.
Hurd, T., & Zhou, Z. 2010. A Fourier transform method for spread option pricing. SIAM Journal on Financial Mathematics, 1, 142–157.
Jarrow, R., Lando, D., & Turnbull, S. 1997. A Markov model for the term structure of credit risk spreads. Review of Financial Studies, 10(2), 481–523.
Kusuoka, S. 1999. A remark on default risk models. Advances in Mathematical Economics, 1, 69–81.
Lando, D. 1998. On Cox processes and credit risky securities. Review of Derivatives Research, 2(2-3), 99–120.
Madan, D., Pistorius, M., & Stadje, M. 2017. On dynamic spectral risk measures, a limit theorem and optimal portfolio allocation. Finance and Stochastics, 21, 1073–1102.
Madan, D., Schoutens, W., & Wang, K. 2020. Bilateral Multiple Gamma Returns: Their Risks and Rewards. International Journal of Financial Engineering, 7(1), 1–27.
Madan, D., & Unal, H. 1998. Pricing the risks of default. Review of Derivatives Research, 2, 121–160.
McNeil, A., Frey, R., & Embrechts, P. 2005. Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press.
Pedersen, C. 2003. Valuation of Portfolio Credit Default Swaptions. Lehman Brothers Quantitative Credit Research, 2003.