LeaFTL: A Learning-based Flash Translation Layer for Solid-State Drives

Jinghan Sun (UIUC, js39@illinois.edu), Shaobo Li (UIUC, shaobol2@illinois.edu), Yunxin Sun* (ETH Zurich, yunsun@student.ethz.ch), Chao Sun (Western Digital Research, chao.sun@wdc.com), Dejan Vucinic (Western Digital Research, dejan.vucinic@wdc.com), Jian Huang (UIUC, jianh@illinois.edu)

*Work done while visiting the Systems Platform Research Group at UIUC as a research intern.

ABSTRACT

In modern solid-state drives (SSDs), the indexing of flash pages is a critical component in their storage controllers. It not only affects the data access performance, but also determines the efficiency of the precious in-device DRAM resource. A variety of address mapping schemes and optimizations have been proposed. However, most of them were developed with human-driven heuristics. In this paper, we present a learning-based flash translation layer (FTL), named LeaFTL, which learns the address mapping to tolerate dynamic data access patterns via linear regression at runtime. By grouping a large set of mapping entries into a learned segment, it significantly reduces the memory footprint of the address mapping table, which further benefits the data caching in SSD controllers.
LeaFTL also employs various optimization techniques, including out-of-band metadata verification to tolerate mispredictions, optimized flash allocation, and dynamic compaction of learned index segments. We implement LeaFTL with both a validated SSD simulator and a real open-channel SSD board. Our evaluation with various storage workloads demonstrates that LeaFTL saves the memory consumption of the mapping table by 2.9× and improves the storage performance by 1.4× on average, in comparison with state-of-the-art FTL schemes.

CCS CONCEPTS

• Hardware → External storage; • Computer systems organization → Architectures; • Computing methodologies → Learning linear models.

KEYWORDS

Learning-Based Storage, Flash Translation Layer, Solid-State Drive

1 INTRODUCTION

Flash-based SSDs have become an indispensable part of modern storage systems, as they outperform conventional hard-disk drives (HDDs) by orders of magnitude, and their cost is approaching that of HDDs [22, 30, 51, 62]. SSD capacity continues to grow as vendors increase the number of flash channels and chips with rapidly shrinking process and manufacturing technology [22, 25, 41, 46].

The flash translation layer (FTL) is the core component for managing flash memory in SSDs, covering address translation, garbage collection (GC), and wear leveling [20, 66]. The FTL maintains metadata structures for different functions such as address translation and valid page tracking, and caches them in the in-device DRAM (SSD DRAM) for improved performance [7, 12, 25].
Among these data structures, the address mapping table has the largest memory footprint. In general, the address mapping table can be categorized into three types: page-level mapping, block-level mapping, and hybrid mapping. Modern SSDs usually use page-level mapping, as it offers the best performance for flash page lookup and incurs minimal GC overhead, in comparison with the other two mapping schemes [20, 66]. However, the page-level mapping table is large, as it stores an entry for the LPA-to-PPA address translation of every flash page.

The address mapping table significantly affects the performance of SSDs, as it not only determines the efficiency of indexing flash pages, but also affects the utilization of the SSD DRAM. Moreover, due to the cost and power budget limitations of SSD controllers, it is challenging for SSD vendors to scale the in-device DRAM capacity [12, 41]. This challenge becomes even worse with the increasing flash memory capacity of an SSD, as larger capacity usually requires a larger address mapping table for indexing.

To improve the address mapping and translation for SSDs, various optimization schemes have been developed [9, 25, 29, 38, 39, 66]. However, most of them were developed based on human-driven heuristics [25], and cannot capture dynamic data access patterns at runtime. Employing more semantic knowledge in the FTL, such as GraphSSD [44], can improve data indexing and address translation; however, it is application specific and complicates the management of address mappings [7], which does not scale for the development of generic SSDs. In this work, we do not assume that application semantics can be obtained from the host or in the SSD controller.
Instead, we focus on utilizing simple yet effective machine learning (ML) techniques to automate the management of the address mapping table in SSDs, with the capability of learning diverse and dynamic data access patterns.

To this end, we propose a learning-based FTL, named LeaFTL, which utilizes the piecewise linear regression technique to learn the LPA-PPA mappings and automatically exploits the data locality of various data access patterns at runtime. Unlike the state-of-the-art page-level mapping, the key idea of LeaFTL is that it can learn the correlation between a set of LPAs and their mapped PPAs, based on which it builds a space-efficient index segment, as presented in segment A of Figure 1.

Figure 1: An illustrative example of learning LPA-PPA mappings using piecewise linear regression in LeaFTL. It can learn various patterns of LPA-PPA mappings with a guaranteed error bound. Each learned index segment can be represented with (S, L, K, I), where [S, S + L] denotes the interval of LPAs, K is the slope, and I is the intercept of the index segment. (Segment A: LPAs 30-34 mapped to PPAs 155-159; segment B: LPAs 60, 62, 64, 66, 68 mapped to PPAs 200, 201, 203, 204, 205; segment C: LPAs 80, 82, 83, 84, 87 mapped to PPAs 304-308.)

Since a learned index segment can be simply represented with (S, L, K, I), where [S, S + L] denotes the interval of LPAs, K is the slope of the segment, and I is the intercept of the segment (see the last diagram in Figure 1), each segment takes only 8 bytes (1 byte each for S and L, 2 bytes for K, and 4 bytes for I) with our optimizations (see the details in §3). Compared to the on-demand page-level mapping [20], the learned segment reduces the mapping table size by a factor of m × avg(L)/8, where m is the size (8 bytes) of each entry in the on-demand page-level mapping table, and avg(L) is the average number of LPA-PPA mappings that can be represented in a learned index segment; avg(L) is 20.3 according to our study of various storage workloads.
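To make this reduction factor concrete: plugging in m = 8 bytes per page-level entry and avg(L) = 20.3 mappings per learned segment gives 8 × 20.3 / 8 ≈ 20.3, i.e., in the ideal case the learned representation is roughly 20× smaller than the page-level table for these workloads.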
Beyond learning contiguous LPA-PPA mappings, LeaFTL also learns different correlation patterns, such as the regular and irregular strided data accesses shown in B and C of Figure 1, respectively. Unlike existing indexing optimizations based on human-driven heuristics, LeaFTL can learn more irregular patterns of LPA-PPA mappings with a guaranteed error bound, as shown in C. This enables LeaFTL to further condense the address mapping table. Therefore, given the limited DRAM capacity in the SSD controller, LeaFTL can maximally utilize the DRAM for caching and improve the storage performance. For the worst case, such as random I/O accesses, LeaFTL falls back to single-point linear segments (L = 0, K = 0, and I = PPA in Figure 1), and its memory consumption will be no more than that of the page-level mapping.

With the learned index segments, LeaFTL may occasionally return an inaccurate PPA (i.e., an address misprediction), which would incur additional flash accesses until the correct PPA is identified. To overcome this challenge, we develop an error-tolerant mechanism in LeaFTL. For each flash page access, we use the reverse mapping stored in the out-of-band (OOB) metadata of each flash page to verify the correctness of the data access. Since the OOB usually has 64-256 bytes [20, 23], we use it to store the accurate LPAs mapped to the neighboring PPAs. Thus, upon an address misprediction, we use the stored reverse mappings to find the correct PPA, avoiding additional flash accesses.
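To make this verification flow concrete, here is a minimal, self-contained sketch of the idea (our illustration, not LeaFTL's implementation); the flash layout, slope, and intercept below are chosen only for the example, roughly following the strided pattern B of Figure 1:

from math import ceil

# Toy flash state: PPA -> (LPA recorded in the page's OOB area, payload).
flash = {
    200: (60, "a"), 201: (62, "b"), 203: (64, "c"), 204: (66, "d"), 205: (68, "e"),
}

def read(lpa, k, i, gamma):
    predicted = ceil(k * lpa + i)                # PPA predicted by the learned segment
    if predicted in flash and flash[predicted][0] == lpa:
        return flash[predicted][1]               # OOB reverse mapping confirms the prediction
    # Misprediction: the OOB area of the fetched page also records the LPAs
    # mapped to its neighboring PPAs, so the correct PPA can be located
    # directly instead of probing one page after another.
    for ppa in range(predicted - gamma, predicted + gamma + 1):
        if ppa in flash and flash[ppa][0] == lpa:
            return flash[ppa][1]
    raise KeyError(lpa)

print(read(62, k=0.625, i=162.5, gamma=4))       # predicts PPA 202, corrected to 201 -> "b"

In the device, the neighbor LPAs come from the OOB area of the page that was just read, so the correct PPA is located without scanning further flash pages.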
LeaFTL leverages this intrinsic OOB structure to handle address mispredictions, making SSDs well-suited for practical learned indexing.

Due to the intrinsic out-of-place write property of SSDs (see §2), the learned index segments will be disrupted by writes and GC, and the segments need to be relearned with new LPA-PPA mappings. To tolerate these disruptions, the learned segments are organized in multiple levels that maintain their temporal order in a log-structured manner: the topmost level holds the most recent segments, and the lower levels store older segments. The segments at the same level are sorted and do not overlap. If a new segment conflicts with an existing segment, the old segment is moved to a lower level. Therefore, LeaFTL can always identify the latest version of the corresponding LPA-PPA mapping in a top level of learned index segments. LeaFTL compacts the learned segments periodically to reduce its memory footprint.

To further maximize the efficiency of LeaFTL, we coordinate its learning procedure with flash block allocation in the SSD. As flash block allocation decides the distribution of mapped PPAs, LeaFTL allocates consecutive PPAs to contiguous LPAs at its best effort, which increases the possibility of learning a space-efficient index segment. Similar to the existing page-level mapping [20, 23], LeaFTL stores the learned index segments in flash blocks for recovery. Overall, we make the following contributions:

• We present a learning-based FTL that can learn various data access patterns and turn them into index segments, reducing the storage cost of the mapping table.
• We develop an error-tolerant address translation mechanism to handle address mispredictions caused by the learned indexes, with minimal extra flash accesses.
• We preserve the core FTL functions, and coordinate the learning procedure of the address mapping table with flash block allocation and GC to maximize the efficiency of the learned FTL.
• We manage the learned segments in an optimized log-structured manner, and enable compaction to further improve the space efficiency of the address mapping.
• We implement LeaFTL with a validated SSD simulator, WiscSim [27], and evaluate its efficiency with a variety of popular storage workloads. We also develop a system prototype with a real 1TB open-channel SSD to verify the functions of LeaFTL and validate its efficiency with real data-intensive applications, such as a key-value store and a transactional database. Our evaluation with the real SSD shows benefits similar to those obtained with the SSD simulator. We demonstrate that LeaFTL reduces the storage cost of the address mapping in the FTL by 2.9× on average. The saved memory space benefits the utilization of the precious SSD DRAM, and further improves the storage performance by 1.4× on average. We also show that LeaFTL does not affect the SSD lifetime, and its learning procedure introduces negligible performance overhead to the storage processor in the SSD controller.

The codebase of LeaFTL is available at https://github.com/platformxlab/LeaFTL.

Figure 2: The internal system architecture of SSDs (PCIe interface, embedded processor, DRAM and flash controllers, and flash packages).
2 BACKGROUND AND MOTIVATION

Flash-Based Solid-State Drive. An SSD has three major parts (see Figure 2): a set of flash memory packages, an SSD controller with embedded processors, and a set of flash controllers. Due to the nature of NAND flash, once a free page is written, it cannot be written again until it is erased, and the erase operation is performed only at block granularity. As the erase operation is expensive, writes are issued to free flash pages that were erased in advance (i.e., out-of-place writes). GC is performed to clean the stale data. As each flash block has limited endurance, it is important that blocks age uniformly (i.e., wear leveling). SSDs maintain a logical-to-physical address mapping table to index flash pages. All these functions are managed by the FTL in the SSD firmware.

Modern SSD controllers have general-purpose embedded processors (e.g., ARM processors). The processors help with issuing I/O requests, translating LPAs to PPAs, and handling GC and wear leveling. SSDs also have limited DRAM capacity to cache the mapping table and application data.
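The out-of-place update behavior described above can be summarized with a small toy model (our illustration, not the paper's code): an overwrite is redirected to a fresh pre-erased page, and the old page becomes stale until GC erases its block.

class ToyFlash:
    """Toy out-of-place write model: overwrites go to new pages; old pages turn stale."""
    def __init__(self):
        self.mapping = {}        # LPA -> PPA
        self.stale = set()       # physical pages waiting for a block erase
        self.next_free = 0       # next pre-erased free page

    def write(self, lpa):
        if lpa in self.mapping:
            self.stale.add(self.mapping[lpa])   # old copy is reclaimed only by erasing its block
        self.mapping[lpa] = self.next_free      # program a fresh page instead of rewriting in place
        self.next_free += 1

flash = ToyFlash()
flash.write(7)
flash.write(7)                                   # overwrite: LPA 7 now maps to PPA 1, PPA 0 is stale
print(flash.mapping[7], flash.stale)             # -> 1 {0}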
Address Mapping Table in the FTL. The address mapping table in the FTL generally has three types: page-level mapping, block-level mapping, and hybrid mapping. The page-level mapping enables direct LPA-PPA mapping for fast lookup. However, each entry usually takes 8 bytes (4 bytes for the LPA and 4 bytes for the PPA), so the entire mapping table requires a large storage space. The block-level mapping significantly reduces the mapping table size, but it introduces additional overhead for the page lookup within a flash block. The hybrid mapping takes advantage of both page-level and block-level mapping: it uses log blocks to store new writes and indexes them with the page-level mapping, and the log blocks are later moved into data blocks that are indexed with the block-level mapping, which incurs significant GC overhead. Therefore, modern SSDs commonly use the page-level mapping scheme.

Metadata Structures for Flash Management. The FTL usually employs four metadata structures (see Figure 3): (1) the address mapping cache (AMC) for caching the address mapping table in the SSD DRAM; (2) the global mapping directory (GMD) for tracking the locations of the address mapping table pages in the SSD; (3) the block validity counter (BVC) for tracking the number of valid pages in each flash block to assist GC; and (4) the page validity table (PVT), which uses bitmaps to track the valid pages in each flash block. During GC, the FTL checks the BVC to select candidate flash blocks, and migrates their valid pages to free flash blocks. After that, it erases the selected flash blocks and marks them as free blocks.

Figure 3: The common data structures in the FTL of SSDs (the address mapping cache, global mapping directory, block validity counter, and page validity table in DRAM, and the data, address mapping, and validity blocks in flash memory).

Limited DRAM Capacity in SSD Controllers. It is hard to provision large DRAM inside SSD controllers, due to their hardware constraints and limited budgets for power and hardware cost [12, 41, 60]. Thus, SSD controllers often use on-demand caching to maintain the recently accessed metadata and data in the SSD DRAM.
Among all the metadata structures, the address mapping table has the largest memory footprint. As discussed, the AMC caches the recently accessed mapping table entries. If a mapping entry is not cached, the FTL locates the corresponding address mapping table page stored in the flash blocks, and places the mapping entry in the AMC. As we scale the SSD capacity, the DRAM challenge becomes even worse. To overcome this challenge, various optimizations of the mapping table have been proposed [9, 25, 29, 31, 38, 39] to improve the utilization of the SSD DRAM. However, most of them cannot automatically capture diverse data access patterns at runtime, leaving large room for improvement.

3 DESIGN AND IMPLEMENTATION

To develop LeaFTL in the SSD controller, we have to overcome the following research challenges.

• LeaFTL should be able to automatically capture diverse data access patterns, and generate memory-efficient address mappings (§3.1, §3.2, §3.3, and §3.4).
• LeaFTL may incur address mispredictions, which could incur additional flash accesses. LeaFTL should be tolerant of errors and have a low misprediction penalty (§3.5).
• LeaFTL should work in coordination with other core FTL functions, including GC and wear leveling (§3.6).
• LeaFTL should be lightweight and not incur much extra overhead to storage operations (§3.7, §3.8, and §3.9).

Figure 4: Visualization of learned index segments: (a) precise linear approximation; (b) inaccurate linear approximation.

Figure 5: Aggregated distribution of the length of learned segments across workloads (percentage of segments vs. segment length; γ=0: 5540 segments, γ=4: 4267 segments, γ=8: 3718 segments).

3.1 Key Ideas of LeaFTL

Instead of using the space-consuming one-to-one page-level mapping, the key idea of LeaFTL is to exploit learning techniques to identify various LPA-PPA mapping patterns and build efficient learned address mapping entries. Modern SSD controllers usually have a data buffer for grouping writes and writing large data chunks at once to exploit the internal flash parallelism. LeaFTL utilizes this data buffer to collect LPA-to-PPA mappings and learn index segments for free, without introducing extra data collection overhead (see the details in §3.3).

As shown in Figure 4 (a), the PPA of an LPA can be obtained with the expression PPA = f(LPA) = ⌈K × LPA + I⌉, LPA ∈ [S_LPA, S_LPA + L], where [S_LPA, S_LPA + L] denotes the interval (L) of LPAs, K is the slope, and I is the intercept. As discussed in §1, each learned index segment can be represented in 8 bytes: 1 byte each for S_LPA and L, 2 bytes for K, and 4 bytes for I. The size of S_LPA is reduced from 4 bytes to 1 byte with our optimizations on segment management (see §3.4).
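As a minimal sketch of this representation (our illustration, not LeaFTL's code; the field and method names are ours), a learned segment and its translation function can be written as follows:

from dataclasses import dataclass
from math import ceil

@dataclass
class Segment:
    s_lpa: int      # starting LPA of the covered interval (1 byte after grouping, see Section 3.2)
    length: int     # L: the interval is [s_lpa, s_lpa + length] (1 byte)
    k: float        # slope K (stored as a 16-bit float)
    i: float        # intercept I (4 bytes)

    def lookup(self, lpa):
        """PPA = ceil(K * LPA + I); exact for accurate segments, and within
        [-gamma, +gamma] of the true PPA for approximate segments."""
        assert self.s_lpa <= lpa <= self.s_lpa + self.length
        if self.length == 0:                 # single-point segment: K = 0, I = PPA
            return int(self.i)
        return ceil(self.k * lpa + self.i)

The single-point case (L = 0, K = 0, I = PPA) degenerates to a plain page-level entry, which is why LeaFTL never needs more memory than the page-level mapping.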
We can relax the linear regression to capture more flash access patterns, which further reduces the size of the learned address mapping table. As shown in Figure 4 (b), the linear regression can learn a pattern with a guaranteed error bound [−γ, γ]. As we increase γ, we can cover more flash access patterns. We applied the relaxed linear regression with different γ values to a variety of storage workloads (see §4.1); our experimental results demonstrate that the number of learned index segments gradually decreases as we increase γ. Figure 5 shows that 98.2-99.2% of the learned index segments cover up to 128 LPA-PPA mapping entries, demonstrating the potential advantages of the learning-based approach. As for random access patterns, LeaFTL transforms the learned segments into single-point segments, and these linear segments do not require more storage space than the page-level mapping.
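The following is a simplified, self-contained sketch of how such error-bounded segments could be learned (a greedy least-squares variant we wrote for illustration; LeaFTL uses the piecewise linear regression algorithm of [64], which is streaming and more efficient, so its segment boundaries may differ):

from math import ceil

def fit_line(points):
    """Least-squares fit of (LPA, PPA) points; returns (slope K, intercept I)."""
    n = len(points)
    if n == 1:
        return 0.0, float(points[0][1])                 # single-point segment: K = 0, I = PPA
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    k = sxy / sxx
    return k, my - k * mx

def learn_segments(mappings, gamma):
    """Greedily group sorted (LPA, PPA) pairs into segments whose prediction
    error, |ceil(K * LPA + I) - PPA|, never exceeds gamma."""
    segments, current = [], []
    for lpa, ppa in sorted(mappings):
        candidate = current + [(lpa, ppa)]
        k, i = fit_line(candidate)
        if all(abs(ceil(k * x + i) - y) <= gamma for x, y in candidate):
            current = candidate                          # still within the error bound
        else:                                            # close the segment and start a new one
            k, i = fit_line(current)
            segments.append((current[0][0], current[-1][0] - current[0][0], k, i))
            current = [(lpa, ppa)]
    if current:
        k, i = fit_line(current)
        segments.append((current[0][0], current[-1][0] - current[0][0], k, i))
    return segments                                      # list of (S_LPA, L, K, I)

# The contiguous pattern A of Figure 1 (LPAs 30-34 -> PPAs 155-159) collapses
# into a single accurate segment even with gamma = 0.
print(learn_segments([(30, 155), (31, 156), (32, 157), (33, 158), (34, 159)], gamma=0))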
3.2 Learned Index Segment

Types of Learned Index Segment. The mapping table of LeaFTL is built with learned index segments. It has two types of segments: accurate and approximate segments, as shown in Figure 6. Both of them are learned with the piecewise linear regression technique [64].

Figure 6: Types of learned segments in LeaFTL. Each segment is encoded as S_LPA (1 byte), L (1 byte), K (2 bytes), and I (4 bytes).
  Type         LPAs           PPAs               Index segment (S_LPA, L, K, I)
  Accurate     [0, 1, 2, 3]   [32, 33, 34, 35]   (0, 3, 1.00, 32)
  Approximate  [0, 1, 4, 5]   [64, 65, 66, 67]   (0, 5, 0.56, 64)

As for the accurate index segments, given an LPA, we can precisely get the corresponding PPA with f(LPA) = ⌈K × LPA + I⌉. For example, when the LPA is 2 in Figure 6, we directly get the PPA value of 34 with ⌈1.00 × 2 + 32⌉. In this example, the learned segment has L = 3 and it indexes 4 LPA-PPA mappings. If L = 0, the learned segment becomes a single-point segment with slope K = 0, and we get its PPA with PPA = I.

As for approximate index segments, we use the same formula f(LPA) = ⌈K × LPA + I⌉ to calculate the PPA. However, the returned PPA may not be the exact corresponding PPA. It has an error bound [−γ, γ] guaranteed by the linear regression, and γ is configurable. For example, given LPA = 4 in Figure 6, the value of the PPA is 67, according to the calculation ⌈4 × 0.56 + 64⌉. However, the real PPA is 66. We define this as an address misprediction. We will discuss how we handle address mispredictions with a reduced miss penalty in §3.5.
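Using the Segment sketch from §3.1 above, the two examples in Figure 6 work out as follows (the misprediction on the approximate segment is exactly the case handled by the mechanism of §3.5):

# The two segments of Figure 6, expressed with the Segment sketch above.
accurate = Segment(s_lpa=0, length=3, k=1.00, i=32)      # LPAs [0, 1, 2, 3] -> PPAs [32, 33, 34, 35]
approximate = Segment(s_lpa=0, length=5, k=0.56, i=64)   # LPAs [0, 1, 4, 5] -> PPAs [64, 65, 66, 67]

print(accurate.lookup(2))       # 34: ceil(1.00 * 2 + 32), the exact PPA
print(approximate.lookup(4))    # 67: ceil(0.56 * 4 + 64), but the true PPA is 66 (misprediction)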
Size of Learned Index Segment. As discussed in §3.1, each segment can be expressed as (S_LPA, L, K, I). The starting LPA would normally take 4 bytes. We can reduce this size by partitioning the range of LPAs into small groups, where each LPA group represents a certain number of contiguous LPAs; we can then index an LPA by its offset in the corresponding group. In LeaFTL, each group represents 256 contiguous LPAs. Thus, S_LPA can be indexed by its offset (2^8 = 256) within the group, which takes only 1 byte. We use 256 as the group size because the length of the learned segments is usually less than 256 (see Figure 5). Given an LPA, we can get its offset in the group with (LPA mod 256). In LeaFTL, we set L to 1 byte; thus, each segment can index up to 256 LPA-PPA mappings. We use a 16-bit floating point number to store the value of the slope K, and the intercept I of a segment is represented in 4 bytes. Therefore, in combination with S_LPA, both accurate and approximate segments can be encoded in 8 bytes (see Figure 6), which is memory aligned.
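A plausible 8-byte encoding of this layout (our sketch; LeaFTL's exact bit layout may differ) can be written with Python's struct module, using the half-precision 'e' format for the 16-bit slope:

import struct

def encode_segment(s_lpa, length, k, intercept, approximate):
    """Pack a segment into 8 bytes: 1B group offset, 1B length, 2B slope, 4B intercept.
    The least significant bit of the 16-bit slope flags the segment type
    (0 = accurate, 1 = approximate), as described below."""
    raw = bytearray(struct.pack("<BBeI", s_lpa % 256, length, k, intercept))
    if approximate:
        raw[2] |= 0x01           # set the slope's LSB (low byte in little-endian order)
    else:
        raw[2] &= 0xFE
    return bytes(raw)

def decode_segment(blob):
    offset, length, k, intercept = struct.unpack("<BBeI", blob)
    approximate = bool(blob[2] & 0x01)
    return offset, length, k, intercept, approximate

blob = encode_segment(s_lpa=0, length=5, k=0.56, intercept=64, approximate=True)
print(len(blob), decode_segment(blob))     # 8 bytes; the slope comes back as roughly 0.56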
LeaFTL uses the least significant bit of K to indicate the segment type (0 for accurate segments, 1 for approximate segments). This has a negligible impact on the address translation accuracy because K ∈ [0, 1], so it only affects the tenth digit after the decimal point.

Figure 7: An example of reducing the number of learned segments by exploiting flash block allocation: (a) unoptimized learned segments; (b) optimized learned segments with sorting. Buffered pages with LPAs 78, 32, 33, 76, 115, 34, 38 are flushed to a flash block either in arrival order or sorted by LPA.

3.3 Improve the Learning Efficiency

To further reduce the number of learned segments, LeaFTL performs optimizations that improve its learning efficiency for address mappings by exploiting the flash block allocation in SSD controllers, as shown in Figure 7. Flash pages are usually buffered in the SSD controller and written to flash chips at flash block granularity, to utilize the internal bandwidth and avoid the open-block problem [6, 22, 37, 48]. This allows LeaFTL to learn more space-efficient index segments (i.e., index segments that cover more LPA-PPA mappings) by reordering the flash pages by their LPAs in the data buffer, as sketched below. As shown in Figure 7 (a), LeaFTL learns 5 index segments, (78), (32, 33), (76), (115), and (34, 38), with γ = 4. After sorting the pages in the data buffer, as shown in Figure 7 (b), LeaFTL generates 3 index segments: (32, 33, 34, 38), (76, 78), and (115).
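A minimal sketch of this reordering (ours; buffer management details are simplified) is shown below; in the paper's Figure 7 example, this sorting reduces the learned segments from five to three:

def flush_buffer_sorted(buffered_pages, next_free_ppa):
    """Sort buffered pages by LPA before flushing, so that consecutive PPAs are
    assigned in ascending LPA order and the LPA-to-PPA mapping is monotonic."""
    mappings = []
    for i, (lpa, _data) in enumerate(sorted(buffered_pages, key=lambda page: page[0])):
        mappings.append((lpa, next_free_ppa + i))      # page i is programmed at this PPA
    return mappings

# Pages buffered in arrival order, with the LPAs of Figure 7 and dummy payloads.
buffer = [(lpa, b"") for lpa in (78, 32, 33, 76, 115, 34, 38)]
print(flush_buffer_sorted(buffer, next_free_ppa=1000))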
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To develop the optimized learned segments, LeaFTL sorts the flash pages in ascending order of their LPAs in the data buffer (8MB by default).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' When pages in the data buffer is flushed to the flash chips, their PPAs are in ascending order.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This ensures a mono- tonic address mapping between LPAs and PPAs, which reduces the number of index segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='4 Manage Learned Index Segments Upon new data updates or GC in the SSD, the learned index seg- ments need to be updated, due to the intrinsic property (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=', out-of- place update) of SSDs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Unfortunately, the direct updates to learned index segments are expensive, since we have to relearn the in- dex segments with new PPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This relearning procedure not only consumes extra compute cycles, but also involves additional flash accesses, since we have to access the corresponding flash pages to obtain accurate PPAs for some of the LPAs in the index segment being updated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For instance, for in-place update to an approximate Level 0 Level 1 0 63 100 200 230 255 16 127 206 240 non-overlapping at each level segments can overlap across levels Figure 8: The learned index segments are managed in a log- structured manner in LeaFTL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' segment, it can incur 21 flash accesses on average when relearn- ing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In-place update also breaks the existing LPA-to-PPA mapping patterns, which results in 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='2× additional segments and memory footprint, according to our experiments with various workloads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To address this challenge, we manage the learned index segments in a log-structured manner, as shown in Figure 8.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Therefore, the newly learned index segments will be appended to the log structure (level 0 in Figure 8) and used to index the updated LPA-PPA map- pings, while the existing learned segments (level 1 and lower levels in Figure 8) can still serve address translations for LPAs whose map- pings have not been updated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Such a structure supports concurrent lookups as enabled in the traditional log-structured merge tree.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As we insert the newly learned index segments at the top level of the log-structured tree, this minimizes the impact on other segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Log-Structured Mapping Table.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The log-structured mapping ta- ble has multiple levels to maintain the temporal order of index seg- ments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As discussed, the topmost level has the most recent learned index segments, and the lower level stores the older segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For the segments on the same level, LeaFTL ensures that they are sorted and do not have overlapped LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is for fast location of the corresponding learned index segments in each level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For the seg- ments across the levels, they may have overlapped LPAs, due to the nature of the log-structured organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' And the segments with overlapped LPA-PPA mappings will be compacted periodically for space reclamation (see its detailed procedure in §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='7).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Manage Two Types of Index Segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL manages the ac- curate and approximate index segments in the same log-structured mapping table, as they can be encoded in the same format.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For each accurate segment, we can directly infer its indexed LPAs with the 𝑆𝐿𝑃𝐴, 𝐾, and 𝐿, since it has a regular pattern.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' However, for approx- imate index segments, we only have the knowledge of the starting LPA and the end LPA with 𝑆𝐿𝑃𝐴 + 𝐿.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Its encoded LPAs cannot be directly inferred from their metadata (𝑆𝐿𝑃𝐴, 𝐿, 𝐾, 𝐼), since they are learned from irregular access patterns and may have mispredictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If two approximate segments have overlapping LPA ranges, we could obtain inaccurate PPAs from the learned index segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As shown in Figure 9 (a), given an LPA with the value 105, we will check the segment at Level 0 and may get an inaccurate PPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This will also affect the efficiency of the segment compaction, with which we eliminate duplicated entries between segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To address this challenge, LeaFTL uses a Conflict Resolution Buffer (CRB) for each LPA group to store the LPAs indexed by each approximate segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The main purpose of CRB is to help LeaFTL check whether a given LPA belongs to one approximate segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The CRB is a nearly-sorted list [10] by the starting LPAs of its ap- proximate segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To be specific, the CRB ensures the following Jinghan Sun, Shaobo Li, Yunxin Sun, Chao Sun, Dejan Vucinic, and Jian Huang 100 6 K1 I1 [100, 101, 103, 104, 106] 102 6 K2 I2 [102, 105, 107, 108] L0 L1 LPAs Lookup (LPA = 105) (a) Approximate index segments that index overlapped LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Conflict Resolution Buffer 100 101 103 104 106 null 102 105 107 108 null .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Lookup (LPA = 105) 102 6 K2 I2 (b) Resolve the conflict between approximate segments with CRB Figure 9: A case study of conflict resolution buffer for ap- proximate learned index segments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' properties: (1) the LPAs belong to the same approximate segment are stored contiguously;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' (2) different approximate segments are sorted by their starting LPA, and CRB uses a 𝑛𝑢𝑙𝑙 byte to separate these segments;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' (3) it does not have redundant LPAs, which means an LPA will appear at most once in the CRB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is achieved by removing existing same LPAs when we insert new approximate segments into the CRB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' However, if the 𝑆𝐿𝑃𝐴 of a new approximate segment is the same as any starting LPAs that have been stored in the CRB, LeaFTL will update the 𝑆𝐿𝑃𝐴 of the old segment with the adjacent LPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Take Figure 9 (b) as an example, upon a new approximate segment with 𝑆𝐿𝑃𝐴 = 100, we will update the 𝑆𝐿𝑃𝐴 of the existing segment to 101, and then insert the new segment into the CRB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In this case, LeaFTL will ensure each approximate segment will have its unique 𝑆𝐿𝑃𝐴.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This will facilitate the approximate LPA-PPA address translation with high accuracy confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Since CRB is nearly sorted, its insertion, deletion, and lookup operations are fast.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The CRB is also space efficient, as each LPA (the offset in its corresponding LPA group) will take only one byte, and it guarantees that there are no redundant LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Therefore, the CRB will maximally store 256 LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Our experiments with a variety of storage workloads show that the CRB will take 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='9 bytes on average, as shown in Figure 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Given an LPA, in order to identify which approximate index segment it belongs to, LeaFTL will check the CRB with binary search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Once the LPA is found, LeaFTL will search to its left until identifying the 𝑆𝐿𝑃𝐴, and this 𝑆𝐿𝑃𝐴 will be the starting LPA of the corresponding approximate segment, as shown in Figure 9 (b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Therefore, CRB can assist LeaFTL to resolve the LPA lookups.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='5 Handle Address Misprediction As discussed in §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='2, the mapping table entries encoded with ap- proximate segments may occasionally incur mispredictions and return an approximated PPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' These approximate segments have a guaranteed error bound [−𝛾,𝛾], where 𝛾 is a constant value that can be specified in the linear regression algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To verify the correctness of the address translation, a simple method is to access MSR-hm MSR-src2 MSR-prxy MSR-prn MSR-usr FIU-home FIU-mail 0 100 200 300 CRB Size (in Bytes) Average 99 Percentile Figure 10: The distribution of CRB sizes for different storage workloads, when we set 𝛾 = 4 in LeaFTL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' PPA1 PPA2 PPA3 PPA4 PPA5 Data Blocks Data OOB Flash Page LPA2 LPA4 LPA Reverse Mapping Figure 11: The out-of-band (OOB) metadata organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' It stores the reverse mapping for its neighbor PPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' the flash page with the predicted PPA, and use the reverse mapping (its corresponding LPA) stored in the OOB metadata of the flash page to check whether the LPA matches or not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In this case, upon a PPA misprediction, we need log(𝛾) flash accesses on average to identify the correct PPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To avoid extra flash accesses for address mispredictions, LeaFTL leverages the OOB of the flash page to store the reverse mappings of its neighbor PPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is developed based on the insight that: with a 𝑃𝑃𝐴𝑙𝑒𝑎𝑟𝑛𝑒𝑑 obtained from an approximate segment, its er- ror bound [−𝛾,𝛾] guarantees that the correct PPA is in the range of [𝑃𝑃𝐴𝑙𝑒𝑎𝑟𝑛𝑒𝑑 − 𝛾, 𝑃𝑃𝐴𝑙𝑒𝑎𝑟𝑛𝑒𝑑 + 𝛾], as discussed in Figure 4 (b).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Thus, upon a misprediction, LeaFTL will read the flash page with 𝑃𝑃𝐴𝑙𝑒𝑎𝑟𝑛𝑒𝑑, and use its OOB to find the correct PPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In this case, LeaFTL ensures that it will incur only one extra flash access for address mispredictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is a feasible approach, as the OOB size is usually 128–256 bytes in modern SSDs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As each LPA takes 4 bytes, we can store 32–64 reverse mapping entries in the OOB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We show the OOB organization of LeaFTL in Figure 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For the flash page 𝑃𝑃𝐴𝑋 , the first 2𝛾 + 1 entries in its OOB correspond to the LPAs for the flash pages [𝑃𝑃𝐴𝑋 − 𝛾, 𝑃𝑃𝐴𝑋 + 𝛾].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' For the flash pages at the beginning and end of a flash block, we may not be able to obtain the reverse mapping of their neighbor PPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We place the 𝑛𝑢𝑙𝑙 bytes in the corresponding entry of the OOB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='6 Preserve Other Core FTL Functions LeaFTL preserves the core functions such as GC and wear leveling in an FTL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' It follows the same GC and wear leveling policies in modern SSDs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' When the number of free blocks in an SSD is below a threshold (usually 15-40% of the total flash blocks), the SSD con- troller will trigger the GC execution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL employs the greedy algorithm [5] to select the candidate blocks which have the minimal LeaFTL: A Learning-based Flash-Translation Layer for Solid-State Drives ALGORITHM 1: LeaFTL operations Input: 𝑔𝑟𝑜𝑢𝑝𝑠 ← 𝐿𝑒𝑎𝐹𝑇𝐿 𝑔𝑟𝑜𝑢𝑝 𝑝𝑎𝑟𝑡𝑖𝑡𝑖𝑜𝑛𝑠 // Insert/Update Segment in the LeaFTL 1 Function 𝑠𝑒𝑔_𝑢𝑝𝑑𝑎𝑡𝑒(𝑠𝑒𝑔𝑚𝑒𝑛𝑡,𝑙𝑒𝑣𝑒𝑙): 2 𝑠𝑒𝑔_𝑝𝑜𝑠 = 𝑏𝑖𝑛𝑎𝑟𝑦_𝑠𝑒𝑎𝑟𝑐ℎ(𝑙𝑒𝑣𝑒𝑙,𝑠𝑒𝑔𝑚𝑒𝑛𝑡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴) 3 𝑙𝑒𝑣𝑒𝑙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑖𝑛𝑠𝑒𝑟𝑡 (𝑠𝑒𝑔𝑚𝑒𝑛𝑡,𝑠𝑒𝑔_𝑝𝑜𝑠) 4 if 𝑛𝑜𝑡 𝑠𝑒𝑔𝑚𝑒𝑛𝑡.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒 then 5 Insert LPAs into CRB and remove redundant LPAs 6 if 𝑠𝑒𝑔𝑚𝑒𝑛𝑡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴 exists in CRB then 7 Update the 𝑆𝐿𝑃𝐴 of the old segment 8 𝑣𝑖𝑐𝑡𝑖𝑚_𝑠𝑒𝑔𝑚𝑒𝑛𝑡𝑠 ← All segments that overlap the 𝑠𝑒𝑔𝑚𝑒𝑛𝑡 starting with 𝑠𝑒𝑔_𝑝𝑜𝑠 9 foreach 𝑣𝑖𝑐𝑡𝑖𝑚 ∈ 𝑣𝑖𝑐𝑡𝑖𝑚_𝑠𝑒𝑔𝑚𝑒𝑛𝑡𝑠 do 10 𝑠𝑒𝑔_𝑚𝑒𝑟𝑔𝑒 (𝑠𝑒𝑔𝑚𝑒𝑛𝑡, 𝑣𝑖𝑐𝑡𝑖𝑚) // if marked as removable by seg_merge() 11 if 𝑣𝑖𝑐𝑡𝑖𝑚.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿 = −1 then 12 𝑙𝑒𝑣𝑒𝑙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑟𝑒𝑚𝑜𝑣𝑒 (𝑣𝑖𝑐𝑡𝑖𝑚) 13 if 𝑠𝑒𝑔𝑚𝑒𝑛𝑡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑜𝑣𝑒𝑟𝑙𝑎𝑝𝑠 (𝑣𝑖𝑐𝑡𝑖𝑚) then 14 Pop 𝑣𝑖𝑐𝑡𝑖𝑚 to the next level 15 if 𝑣𝑖𝑐𝑡𝑖𝑚 has overlaps in the next level then 16 Create level for 𝑣𝑖𝑐𝑡𝑖𝑚 to avoid recursion // Lookup LPA in the LeaFTL 17 Function 𝑙𝑜𝑜𝑘𝑢𝑝(𝑙𝑝𝑎): 18 foreach 𝑙𝑒𝑣𝑒𝑙 ∈ 𝑔𝑟𝑜𝑢𝑝𝑠 [𝑙𝑝𝑎 𝑚𝑜𝑑 256] do 19 𝑠𝑒𝑔_𝑝𝑜𝑠 = 𝑏𝑖𝑛𝑎𝑟𝑦_𝑠𝑒𝑎𝑟𝑐ℎ(𝑙𝑒𝑣𝑒𝑙,𝑙𝑝𝑎) 20 𝑠𝑒𝑔𝑚𝑒𝑛𝑡 = 𝑙𝑒𝑣𝑒𝑙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑔𝑒𝑡_𝑠𝑒𝑔𝑚𝑒𝑛𝑡 (𝑠𝑒𝑔_𝑝𝑜𝑠) 21 if ℎ𝑎𝑠_𝑙𝑝𝑎(𝑠𝑒𝑔𝑚𝑒𝑛𝑡, 𝑙𝑝𝑎) then 22 return 𝑠𝑒𝑔𝑚𝑒𝑛𝑡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑡𝑟𝑎𝑛𝑠𝑙𝑎𝑡𝑒𝑃𝑃𝐴(𝑙𝑝𝑎) // LeaFTL Compaction 23 Function 𝑠𝑒𝑔_𝑐𝑜𝑚𝑝𝑎𝑐𝑡(): 24 foreach 𝑔𝑟𝑜𝑢𝑝 ∈ 𝑔𝑟𝑜𝑢𝑝𝑠 do 25 foreach 𝑢𝑝𝑝𝑒𝑟_𝑙𝑒𝑣𝑒𝑙,𝑙𝑜𝑤𝑒𝑟_𝑙𝑒𝑣𝑒𝑙 ∈ 𝑔𝑟𝑜𝑢𝑝 do 26 foreach 𝑠𝑒𝑔𝑚𝑒𝑛𝑡 ∈ 𝑢𝑝𝑝𝑒𝑟_𝑙𝑒𝑣𝑒𝑙 do 27 𝑠𝑒𝑔_𝑢𝑝𝑑𝑎𝑡𝑒 (𝑠𝑒𝑔𝑚𝑒𝑛𝑡,𝑙𝑜𝑤𝑒𝑟_𝑙𝑒𝑣𝑒𝑙) 28 if 𝑢𝑝𝑝𝑒𝑟_𝑙𝑒𝑣𝑒𝑙 is empty then 29 𝑔𝑟𝑜𝑢𝑝.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑟𝑒𝑚𝑜𝑣𝑒 (𝑢𝑝𝑝𝑒𝑟_𝑙𝑒𝑣𝑒𝑙) number of valid pages, for reducing the data movement overhead at GC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As the GC move the valid pages from the candidate blocks to the free blocks, LeaFTL places these valid pages into the DRAM buffer, sort them by their LPAs, and learn a new index segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The learning procedure is the same as we build index segments for new flash writes/updates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Thus, the address mapping of the valid pages is updated after the GC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL also ensures all the flash blocks age at the same rate (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=', wear leveling).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' It uses the throttling and swapping mechanism developed in existing GC, in which the cold data blocks (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=', blocks not frequently accessed) will be migrated to hot blocks (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=', blocks that experience more wear).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL will learn new indexes for these swapped blocks and insert them into the mapping table to update their address mappings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='7 LeaFTL Operations Now we describe the LeaFTL operations, including segment cre- ation, insert/update, LPA lookup, and compaction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We discuss their procedures, and use examples to illustrate each of them, respec- tively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We present their detailed procedures in Algorithm 1 and 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' ALGORITHM 2: Segment Merge // Check if Segment Contains LPA 1 Function ℎ𝑎𝑠_𝑙𝑝𝑎(𝑠𝑒𝑔, 𝑙𝑝𝑎): 2 𝑎𝑐𝑐 ← 𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒 3 if 𝑙𝑝𝑎 ∉ [𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴,𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴 + 𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿] 𝑜𝑟 (𝑛𝑜𝑡 𝑎𝑐𝑐 & 𝑐ℎ𝑒𝑐𝑘 (𝐶𝑅𝐵) 𝑓 𝑎𝑖𝑙𝑒𝑑) 𝑜𝑟 (𝑎𝑐𝑐 & (𝑙𝑝𝑎 − 𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴) 𝑚𝑜𝑑 ⌈ 1 𝑠𝑒𝑔.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐾 ⌉ ≠ 0) then 4 𝑟𝑒𝑡𝑢𝑟𝑛 𝐹𝑎𝑙𝑠𝑒 5 𝑟𝑒𝑡𝑢𝑟𝑛 𝑇𝑟𝑢𝑒 // Convert Segment into a Temporary Bitmap 6 Function 𝑔𝑒𝑡_𝑏𝑖𝑡𝑚𝑎𝑝(𝑠𝑒𝑔, 𝑠𝑡𝑎𝑟𝑡, 𝑒𝑛𝑑): 7 𝑏𝑚 ← 𝑏𝑖𝑡𝑚𝑎𝑝 𝑜𝑓 𝑙𝑒𝑛𝑔𝑡ℎ (𝑒𝑛𝑑 − 𝑠𝑡𝑎𝑟𝑡 + 1) 8 foreach 𝑙𝑝𝑎 ∈ [𝑠𝑡𝑎𝑟𝑡,𝑒𝑛𝑑] do 9 if ℎ𝑎𝑠_𝑙𝑝𝑎(𝑠𝑒𝑔, 𝑙𝑝𝑎) then 10 𝑏𝑚[𝑙𝑝𝑎 − 𝑠𝑡𝑎𝑟𝑡 ] = 1 11 else 12 𝑏𝑚[𝑙𝑝𝑎 − 𝑠𝑡𝑎𝑟𝑡 ] = 0 13 return 𝑏𝑚 // Merge a New Segment with an Old Segment 14 Function 𝑠𝑒𝑔_𝑚𝑒𝑟𝑔𝑒(𝑛𝑒𝑤, 𝑜𝑙𝑑): 15 𝑠𝑡𝑎𝑟𝑡 ← 𝑚𝑖𝑛(𝑛𝑒𝑤.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴, 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴) 16 𝑒𝑛𝑑 ← 𝑚𝑎𝑥 (𝑛𝑒𝑤.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴 + 𝑛𝑒𝑤.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿, 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴 + 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿) 17 𝑏𝑚𝑛𝑒𝑤 ← 𝑔𝑒𝑡_𝑏𝑖𝑡𝑚𝑎𝑝 (𝑛𝑒𝑤, 𝑠𝑡𝑎𝑟𝑡, 𝑒𝑛𝑑) 18 𝑏𝑚𝑜𝑙𝑑 ← 𝑔𝑒𝑡_𝑏𝑖𝑡𝑚𝑎𝑝 (𝑜𝑙𝑑, 𝑠𝑡𝑎𝑟𝑡, 𝑒𝑛𝑑) 19 𝑏𝑚𝑜𝑙𝑑 ← 𝑏𝑚𝑜𝑙𝑑 & ¬𝑏𝑚𝑛𝑒𝑤 20 𝑓 𝑖𝑟𝑠𝑡, 𝑙𝑎𝑠𝑡 ← the first and last valid bit of 𝑏𝑚𝑜𝑙𝑑 21 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑆𝐿𝑃𝐴, 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿 ← 𝑓 𝑖𝑟𝑠𝑡 + 𝑠𝑡𝑎𝑟𝑡, 𝑙𝑎𝑠𝑡 − 𝑓 𝑖𝑟𝑠𝑡 22 if no valid bits in 𝑜𝑙𝑑 then 23 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝐿 ← −1 // mark it as removable 24 if 𝑛𝑜𝑡 𝑜𝑙𝑑.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='𝑎𝑐𝑐𝑢𝑟𝑎𝑡𝑒 then 25 Remove outdated LPAs in CRB Creation of Learned Segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Once the data buffer of the SSD controller is filled, LeaFTL takes the LPAs and PPAs of the flash pages in the buffer as the input.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' It sorts the LPA-PPA mappings by reordering the flash pages with their LPAs (see §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='3), and uses greedy piecewise linear regression [64] to learn the index segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Insert/Update of Learned Segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' When we insert or update a new learned index segment, we will place it in the topmost level of the log-structured mapping table.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Since each level of the map- ping table is sorted, we can quickly identify its insert location via a binary search (line 2 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If the new segment is ap- proximate, LeaFTL will update the CRB for future lookups (line 4-7 in Algorithm 1).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' After that, LeaFTL will check whether the new segment overlaps with existing segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If yes, LeaFTL will identify the overlapped LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The overlap detection is performed by the comparison between the LPA range of the new segment and [𝑆𝐿𝑃𝐴,𝑆𝐿𝑃𝐴 +𝐿] of the adjacent segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We group these overlap- ping segments as a list of victim segments (line 8 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL will merge segments to remove outdated LPAs (line 10 in Algorithm 1 and line 14-25 in Algorithm 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To fulfill the segment merge, LeaFTL will use the 𝑆𝐿𝑃𝐴, 𝐿, and 𝐾 to reconstruct the list of the encoded LPAs in the victim segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' And it will create a bitmap to index these encoded LPAs (line 6-13 in Algorithm 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Given an accurate segment with 𝑆𝐿𝑃𝐴 = 100, 𝐾 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='5, 𝐿 = 6, we can infer that its encoded LPAs are [100, 102, 104, 106].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We can transfer the LPA list to the bitmap [1010101].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If the victim Jinghan Sun, Shaobo Li, Yunxin Sun, Chao Sun, Dejan Vucinic, and Jian Huang MSR-hm MSR-src2 MSR-prxy MSR-prn MSR-usr FIU-home FIU-mail 0 5 10 15 20 # of Levels in Each Group Average 99 Percentile Figure 12: A study of the number of levels in the log- structured mapping table for different storage workloads.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' L0 0 63 T0 Initial Snapshot T1 Update LPAs 200 - 255 L0 0 63 200 255 T2 Update LPAs 16 - 31 L0 16 31 200 255 L1 0 63 T4 Update [72,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 73,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 80] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='31 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='255 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='T6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Lookup LPA 78 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='T8 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Compaction ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Timeline ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Segments ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='CRB ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='T7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Update LPAs 32 - 90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='82 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='31 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='255 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='T5 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Lookup LPA 50 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='31 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='255 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='80 ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='31 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='255 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='32 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='L0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='31 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='255 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='15 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='32 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Start ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='End ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Accurate Segment ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Start ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='End ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Approximate Segment ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 73 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='/ 75 78 82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 73 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='/ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 78 82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='72 73 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='/ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 78 82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='75 78 82 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='T3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='Update [75,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 78,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 82] L0 16 31 200 255 L1 0 63 75 82 75 78 82 Figure 13: Examples that involve update/insert,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' lookup,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' and compaction operations in LeaFTL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' segment is an approximate segment, LeaFTL will leverage the 𝑆𝐿𝑃𝐴, 𝐿, and the LPAs stored in the CRB to reconstruct the encoded LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Afterwards, LeaFTL will conduct a comparison between the bitmaps to identify the overlapped LPAs (line 15-19 in Algorithm 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' During the segment merge, LeaFTL will update the 𝑆𝐿𝑃𝐴 and 𝐿 of the old segments accordingly, remove the outdated LPAs from CRB for approximate segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Note that we do not update the 𝐾 and 𝐼 for the victim segments during the merge.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' After the merge, (1) if the victim segment does not contain any valid LPA (𝐿 is negative), it will be removed from the mapping table (line 11-12 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' (2) If the victim segment has valid LPAs but their range still overlaps with the new segment, the victim segment will be moved to the next level in the log- structured mapping table (line 13-16 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To avoid recursive updates across the levels, we create a new level for the victim segment if it also overlaps with segments in the next level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' According to our study of diverse workloads, this will not create many levels in the mapping table (see Figure 12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' (3) If the victim segment has valid LPAs and they do not overlap with the new segment, we do not need to perform further operations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is because the victim segment is updated with new 𝑆𝐿𝑃𝐴 and 𝐿 during segment merge (line 20-25 in Algorithm 2), and the new segment insertion keeps each level sorted (line 3 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' To facilitate our discussion, we present a few examples in Fig- ure 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At the initial stage, the mapping table has one segment that indexes the LPA range [0, 63].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At 𝑇1, the new segment [200, 255] is directly inserted into the topmost level, as it does not overlap with existing segments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At 𝑇2, we insert a new segment [16, 31] that has overlaps with the old segment [0, 63], LeaFTL conducts the segment merge procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' After that, the old segment still has valid LPAs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Thus, it moves to level 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At 𝑇3 and 𝑇4, we insert two approximate segments [75, 82] and [72, 80], LeaFTL will also insert their encoded LPAs into the CRB.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The segment [75, 82] will be moved to the next level as it overlaps with the new segment [72, 80].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LPA Lookup.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL conducts an LPA lookup from the top- most level of the mapping table with binary searches (line 19 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We will check whether the LPA is represented by the matched segment (line 21 in Algorithm 1, line 1-5 in Algorithm 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If the 𝐿𝑃𝐴 ∈ [𝑆𝐿𝑃𝐴,𝑆𝐿𝑃𝐴 + 𝐿] of the segment, LeaFTL will check the least bit of its 𝐾.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If the least bit of 𝐾 is 0, it is an accurate segment, and LeaFTL will use 𝑓 (𝐿𝑃𝐴) = ⌈𝐾 ∗ 𝐿𝑃𝐴 + 𝐼⌉ to get the accurate PPA (see §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Otherwise, it is an approximate segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL will check the CRB to identify the 𝑆𝐿𝑃𝐴 of the segment, following the approach described in Figure 9 and §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL will use the same 𝑓 (𝐿𝑃𝐴) formula to obtain the PPA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' If the LPA is not found in the top level of the mapping table, LeaFTL will search the lower levels until a segment is identified.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We use Figure 13 to illustrate the lookup procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At 𝑇5, we conduct the address translation for 𝐿𝑃𝐴 = 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' However, none of the segments in the level 0 covers this LPA, LeaFTL will continue the search in the level 1 and find the accurate segment [0, 63].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' At 𝑇6, we do the address translation for 𝐿𝑃𝐴 = 78.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL finds that the LPA 78 is in the LPA range of the segment [72, 80].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Since this is an approximate segment, LeaFTL checks the CRB and finds this LPA is actually indexed by the segment [75, 82].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' With the PPA, LeaFTL will read the corresponding flash page and use the reversed mapping (its corresponding LPA) in its OOB to ver- ify the correctness of the address translation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Upon mispredictions, we will use the approach discussed in §3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='5 to handle it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Segment Compaction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The purpose of the compaction is to merge segments with overlapped LPAs across different levels, which further saves memory space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' LeaFTL will iteratively move the upper- level segments into the lower level, until the mapping table is fully compacted (line 27 in Algorithm 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' When an approximate segment is removed, its corresponding CRB entries will also be deleted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' As shown in 𝑇7 of Figure 13, we insert a new segment [32, 90] which fully covers the LPA range of the segment [72, 80].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' After merge, LeaFTL removes the old segment [72, 80].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' However, some segments LeaFTL: A Learning-based Flash-Translation Layer for Solid-State Drives Conflict Resolution Buffer (CRB) Key Data Structures in LeaFTL 6 Log-Structured Mapping Table 5 L0 L1 L2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Group 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' CRB .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='..' 
However, some segments in level 0 still overlap with the segments in level 1. After T8, LeaFTL removes the outdated segments and LPAs. LeaFTL performs segment compaction after every 1 million writes by default. According to our experiments with various storage workloads, the segment compaction of the entire mapping table takes 4.1 milliseconds (the time of 20-40 flash writes) on average. Considering the low frequency (i.e., once per 1 million writes), the compaction incurs trivial performance overhead on storage operations.
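The compaction pass can be sketched as follows, assuming segment objects with the slpa and length fields from the lookup sketch above and levels ordered from newest (level 0) downward. The full-coverage test, the promotion of non-conflicting upper-level segments, and the cleanup of CRB entries follow the description above; the helper names (including the crb.drop hook) are hypothetical.

```python
def overlaps(a, b):
    """LPA ranges [slpa, slpa + length] of the two segments intersect."""
    return a.slpa <= b.slpa + b.length and b.slpa <= a.slpa + a.length

def covers(new, old):
    """The newer segment fully covers the LPA range of the older one."""
    return new.slpa <= old.slpa and old.slpa + old.length <= new.slpa + new.length

def compact(levels, crb):
    """Merge overlapping segments across levels; the newest level is levels[0]."""
    for upper in range(len(levels) - 1):
        lower = levels[upper + 1]
        for seg in list(levels[upper]):
            # Outdated lower-level segments fully covered by `seg` are dropped,
            # together with the CRB entries of removed approximate segments.
            for old in [o for o in lower if covers(seg, o)]:
                lower.remove(old)
                crb.drop(old)            # hypothetical CRB cleanup hook
            if not any(overlaps(seg, old) for old in lower):
                # No remaining conflict: push the segment down one level.
                levels[upper].remove(seg)
                lower.append(seg)
                lower.sort(key=lambda s: s.slpa)
    levels[:] = [lvl for lvl in levels if lvl]   # discard emptied levels
```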
Figure 14: Key data structures used in LeaFTL.

3.8 Put It All Together
LeaFTL is compatible with existing FTL implementations. As shown in Figure 14, it uses the log-structured mapping table (5) to replace the address mapping cache (1 in Figure 3), and employs the CRB (6) to assist the address translation of approximate segments. The CRB requires trivial storage space in the SSD DRAM (see Figure 10).

Read Operation. For a read request, LeaFTL first checks the data cache. For a cache hit, LeaFTL serves the read request with the cached flash page. Otherwise, LeaFTL performs address translation with the learned mapping table (5, see §3.7). If there is a misprediction of the PPA, LeaFTL checks the OOB of the mispredicted flash page, reads the correct page (§3.5), and updates the data cache with the page.
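The read path can be summarized with the sketch below; the data cache, flash, and OOB interfaces are placeholders for whatever the surrounding FTL provides, and locate_correct_ppa simply stands in for the OOB-guided misprediction handling of §3.5.

```python
def serve_read(lpa, data_cache, mapping, flash):
    """Serve a read: data cache first, then the learned mapping, then OOB check."""
    page = data_cache.get(lpa)
    if page is not None:
        return page                          # cache hit
    ppa = mapping.lookup(lpa)                # learned (possibly approximate) PPA
    page = flash.read(ppa)
    if flash.oob_lpa(ppa) != lpa:            # reverse mapping in the OOB disagrees
        ppa = flash.locate_correct_ppa(lpa, ppa)   # placeholder for §3.5 handling
        page = flash.read(ppa)
    data_cache.put(lpa, page)
    return page
```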
Write Operation. For a write request, LeaFTL buffers it in the data cache. Once the buffered writes reach the size of a flash block, LeaFTL allocates a free block. It sorts the writes in the buffer by their LPAs, and learns new index segments with the PPAs of the allocated flash block. This enables LeaFTL to group more LPA-PPA mappings into the same index segment. After that, LeaFTL inserts the new index segments into the mapping table and flushes the buffered data to the flash blocks. For those writes, LeaFTL also checks whether their LPAs already exist in the mapping table. If yes, LeaFTL updates their corresponding entries in the BVC (3) and PVT (4) to indicate that the old pages become invalid and can be garbage collected in the future. Otherwise, the newly learned segments provide the LPA-PPA mappings for future address translations.
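A corresponding sketch of this write path is shown below: writes are staged until a block's worth accumulates, sorted by LPA, learned against the PPAs of the newly allocated block, and the stale copies of overwritten LPAs are invalidated. The state object, its allocator, the BVC/PVT hooks, and learn_segments (a possible version of which is sketched under §3.9 below) are illustrative, not the actual LeaFTL interfaces.

```python
def serve_write(lpa, data, state):
    """Buffer a write; flush and learn a new segment batch once a block fills up."""
    state.buffer[lpa] = data
    if len(state.buffer) < state.pages_per_block:
        return
    block = state.allocator.allocate_free_block()
    batch = sorted(state.buffer.items())                # sort staged writes by LPA
    lpas = [l for l, _ in batch]
    ppas = [block.first_ppa + off for off in range(len(batch))]
    state.mapping.insert(learn_segments(lpas, ppas, gamma=state.gamma))
    for l, old_ppa in state.mapping.stale_entries(lpas):
        state.bvc.invalidate(old_ppa)                   # old copy becomes garbage
        state.pvt.invalidate(old_ppa)
    for (l, d), ppa in zip(batch, ppas):
        state.flash.program(ppa, d, oob_lpa=l)          # keep the reverse map in OOB
    state.buffer.clear()
```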
LeaFTL caches the mapping table in the SSD DRAM for fast lookup. The table is also stored in the flash blocks. LeaFTL utilizes the existing GMD (2) to index the translation pages. If a segment is not found in the cached mapping table, LeaFTL fetches it from the translation blocks and places it in the cached mapping table.

Table 1: SSD configurations in our simulator.
  Capacity: 2TB            #Channels: 16
  Page size: 4KB           OOB size: 128B
  DRAM size: 1GB           Pages/block: 256
  Read latency: 20 μs      Write latency: 200 μs
  Erase latency: 1.5 ms    Overprovisioning ratio: 20%

Crash Consistency and Recovery. Upon system crashes or power failures, LeaFTL guarantees the crash consistency of the learned indexes. To ensure the data durability of the DRAM buffer in SSD controllers, modern SSDs have employed battery-backed DRAM and power loss protection mechanisms [1, 2]. With battery-backed DRAM, LeaFTL has sufficient time to persist the up-to-date mapping table to the flash blocks and record their PPAs in the GMD (2 in Figure 3). During data recovery, LeaFTL reads the GMD to locate its mapping table and places it into the DRAM. Without battery-backed DRAM, LeaFTL periodically flushes the learned mapping table and the Block Validity Counter (BVC, 3 in Figure 3) into the flash blocks. When GC is triggered, LeaFTL also flushes the updated mapping table and BVC into the flash blocks. Upon crashes, LeaFTL scans all the flash blocks with channel-level parallelism and reconstructs an up-to-date BVC. LeaFTL can then identify the flash blocks allocated since the last mapping table flush by comparing the up-to-date BVC with the stored BVC in the SSD. Therefore, LeaFTL only needs to relearn the index segments for these recently allocated flash blocks and add them into the mapping table (see §3.4).

3.9 Implementation Details
SSD Simulator. We implement LeaFTL based on a trace-driven simulator, WiscSim [27], which provides an event simulation environment for the end-to-end performance analysis of SSDs. We extend WiscSim by implementing an LRU-based read-write cache. LeaFTL also preserves the functions of the existing FTL, such as GC and wear-leveling. To support the learned indexing, LeaFTL employs a simple linear regression algorithm [65], which incurs negligible computation overhead on modern storage processors (see §4.5). The error bound γ for learned segments is configurable, and we set it to 0 by default in LeaFTL.
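To illustrate what the segment learning step might look like, the sketch below greedily grows a segment over the sorted (LPA, PPA) pairs of a flushed block, accepting a least-squares line only while every covered mapping predicted by ⌈K ∗ LPA + I⌉ stays within the error bound γ. It is a simplified stand-in for the algorithm cited as [65], not the authors' implementation; with γ = 0 every produced segment is exact, matching the default setting above.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y ~= k * x + i."""
    n = len(xs)
    if n == 1:
        return 0.0, float(ys[0])
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom if denom else 0.0
    return k, my - k * mx

def learn_segments(lpas, ppas, gamma=0):
    """Greedy piecewise linear fit of sorted LPA -> PPA mappings."""
    segments, start = [], 0
    while start < len(lpas):
        end, k, i, err = start + 1, 0.0, float(ppas[start]), 0
        while end < len(lpas):
            k_try, i_try = fit_line(lpas[start:end + 1], ppas[start:end + 1])
            err_try = max(abs(math.ceil(k_try * x + i_try) - y)
                          for x, y in zip(lpas[start:end + 1], ppas[start:end + 1]))
            if err_try > gamma:
                break                      # adding this mapping violates the bound
            k, i, err, end = k_try, i_try, err_try, end + 1
        segments.append({"slpa": lpas[start],
                         "length": lpas[end - 1] - lpas[start],
                         "k": k, "i": i,
                         "accurate": err == 0})   # mirrors the least bit of K
        start = end
    return segments
```

On the sorted write buffer from the write-path sketch above, a strictly sequential batch of mappings collapses into a single segment, while scattered mappings fall back to short or single-entry segments.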
SSD Prototype. We also develop a real system prototype with an open-channel SSD to validate the functions and efficiency of LeaFTL. The SSD has 1TB storage capacity with a 16KB flash page size. It has 16 channels, each channel has 16K flash blocks, and each flash block has 256 pages. It enables developers to implement their own FTL in the host by providing basic I/O commands such as read, write, and erase. We implement LeaFTL in 4,016 lines of C code using the SDK library of the device.

4 EVALUATION
Our evaluation shows that: (1) LeaFTL significantly reduces the address mapping table size, and the saved memory brings performance benefits (§4.2); (2) the benefits of LeaFTL are validated on a real SSD device (§4.3); (3) LeaFTL achieves additional memory savings and performance benefits with a larger error tolerance, and it demonstrates generality across different SSD configurations (§4.4); (4) its learning procedure does not introduce much extra overhead to the SSD controller (§4.5); and (5) it has minimal negative impact on the SSD lifetime (§4.6).
Table 2: Real workloads used in our real SSD evaluation.
  OLTP [59]: Transactional benchmark in FileBench.
  CompFlow (CompF) [59]: File accesses in a computation flow.
  TPCC [13]: Online transaction queries in warehouses.
  AuctionMark (AMark) [13]: Activity queries in an auction site.
  SEATS [13]: Airline ticketing system queries.

Figure 15: The reduction on the mapping table size of LeaFTL, in comparison with DFTL and SFTL.

4.1 Experiment Setup
We examine the efficiency of LeaFTL with both the SSD simulator and the real SSD prototype. For the evaluation with the SSD simulator, we configure a 2TB SSD with 4KB flash pages and 1GB DRAM in the SSD controller. We list the core SSD parameters in Table 1. For other parameters, we use the default settings in WiscSim. We use a variety of storage workloads, including block I/O traces from enterprise servers at Microsoft Research Cambridge [45] and workload traces from computers at FIU [16]. For the evaluation with the real SSD prototype (see §3.9), we validate the benefits of LeaFTL using a set of real-world file system benchmarks and data-intensive applications, as shown in Table 2. Before we measure the performance, we run a set of workloads consisting of various real-world and synthetic storage traces to warm up the SSD and ensure that GC will be executed during the experiments. We compare LeaFTL with state-of-the-art page-level mapping schemes described as follows¹:
DFTL (Demand-based FTL) [20]: it uses a page-level mapping scheme and caches the most recently used address translation entries in the SSD DRAM.
SFTL (Spatial-locality-aware FTL) [25]: it is a page-level mapping that exploits the spatial locality and strictly sequential access patterns of workloads to condense mapping table entries.
¹We do not compare LeaFTL with block-level and hybrid-level mappings, as they perform dramatically worse than the page-level mapping [20, 25].

4.2 Memory Saving and Performance
We first evaluate the benefits of LeaFTL on memory saving and storage performance with the SSD simulator. As shown in Figure 15, LeaFTL reduces the mapping table size by 7.5–37.7×, compared to the page-level mapping scheme DFTL. This is because LeaFTL can group a set of page-level mapping entries into an 8-byte segment. In comparison with SFTL, LeaFTL achieves up to 5.3× (2.9× on average) reduction on the address mapping table across different storage workloads, when we set its γ = 0 (i.e., the learned segments are 100% accurate). This is because LeaFTL captures more LPA-PPA mapping patterns.
Figure 16: Performance improvement with LeaFTL. (a) SSD performance when using its DRAM mainly for the address mapping table (lower is better). (b) SSD performance when using its DRAM partially (up to 80%) for the address mapping table (lower is better).

Figure 17: Performance on the real SSD prototype.

Figure 18: The latency distribution of storage accesses when running the OLTP workload on the real SSD prototype.

We now evaluate the performance benefit of LeaFTL from its saved memory space.
We evaluate LeaFTL under two experimental settings: (1) the SSD DRAM is used as much as possible for the mapping table; and (2) the SSD DRAM is only partially used for the mapping table, where we ensure that at least 20% of the DRAM is used for data caching. In the first setting, the DRAM is almost entirely used for the mapping table in DFTL. As shown in Figure 16 (a), LeaFTL reduces the storage access latency by 1.6× on average (up to 2.7×), compared to SFTL. This is because LeaFTL saves more memory from the mapping table than SFTL. SFTL slightly outperforms DFTL, because it reduces the mapping table size by compressing mapping entries for strictly sequential data accesses.

Figure 19: The reduction of the mapping table size of LeaFTL with different γ (lower is better).

Figure 20: The distribution of learned segments.

In the second setting, as shown in Figure 16 (b), LeaFTL obtains 1.4× (up to 3.4×) and 1.6× (up to 4.9×) performance speedups, compared to SFTL and DFTL, respectively.
4.3 Benefits on the Real SSD Prototype
We validate the benefits of LeaFTL on the real SSD prototype with real workloads (see Table 2). They include the file system benchmark suite FileBench [59] and transactional database workloads from BenchBase [13, 61]. All these workloads run on the ext4 file system. With FileBench, we run the OLTP and CompFlow (CompF) workloads to read/write 10GB files. With BenchBase, we run the TPCC, AuctionMark (AMark), and SEATS workloads on MySQL, with database sizes of 10–30GB. These database workloads generate 37–230GB of read traffic and 26–59GB of write traffic to the SSD. We allocate 256MB of DRAM to host the mapping table (for different DRAM sizes, see our sensitivity analysis in §4.4). We present the performance benefit of LeaFTL in Figure 17. Across all workloads, LeaFTL obtains 1.4× performance speedup on average (up to 1.5×), compared to SFTL and DFTL.
Similar to our evaluation with the SSD simulator, the performance benefit of LeaFTL comes from the memory saved on the address mapping table, and LeaFTL demonstrates comparable performance improvements on the real SSD device, in comparison with the SSD simulator results in §4.2. We also show the latency distribution of storage accesses in Figure 18, when running the OLTP workload on the real SSD prototype. In comparison with existing FTL schemes, LeaFTL does not increase the tail latency of storage accesses, and its higher cache hit ratio brings latency reductions for many storage accesses.

4.4 Sensitivity Analysis
Vary the value of γ. As we increase the value of γ from 0 to 16, the size of the learned mapping table is reduced, as shown in Figure 19.

Figure 21: Performance with various γ (lower is better).
Figure 22: SSD performance with different DRAM capacities and flash page sizes (lower is better). (a) Various DRAM sizes. (b) Various flash page sizes.

LeaFTL achieves a 1.3× reduction on average (1.2× on the real SSD) with γ = 16, compared to that of γ = 0. The saved memory with a larger γ comes from learning a wider range of LPAs into approximate segments. To further understand this, we profile the distribution of segments learned by LeaFTL with different values of γ, as shown in Figure 20. When γ = 0, all the segments are accurate. When γ = 16, 26.5% of the learned segments are approximate on average, and LeaFTL delivers a 1.3× improvement in storage performance (1.2× with workloads on the real SSD), in comparison with the case of γ = 0 (see Figure 21).

Vary the SSD DRAM capacity.
We now conduct a sensitivity analysis of the SSD DRAM by varying its capacity from 256MB to 1GB on the real SSD prototype. As shown in Figure 22 (a), LeaFTL always outperforms DFTL and SFTL as we vary the SSD DRAM capacity. Even as we increase the DRAM capacity, the storage workloads are still bottlenecked by the available memory space for data caching. LeaFTL can learn various data access patterns and significantly reduce the address mapping table size, and the saved memory further benefits data caching.

Vary the flash page size. In this experiment, we fix the number of flash pages and vary the flash page size from 4KB to 16KB in the SSD simulator, as SSD vendors usually use larger flash pages for increased SSD capacity. We use the simulator for this study, since the flash page size of the real SSD is fixed. As shown in Figure 22 (b), LeaFTL always performs the best in comparison with DFTL and SFTL. As we increase the flash page size to 16KB, we can cache fewer flash pages with the limited DRAM capacity, so LeaFTL experiences a slight performance drop. As we fix the total SSD capacity and vary the page size, LeaFTL outperforms SFTL by 1.2× and 1.1× for the page sizes of 8KB and 16KB, respectively.
Figure 23: Performance overhead of the LPA lookup. (a) Number of levels searched. (b) LPA lookup overhead (%).

Figure 24: Misprediction ratio of flash page accesses.

4.5 Overhead Source in LeaFTL
We evaluate the overhead sources in LeaFTL in three aspects: (1) the performance overhead of the learning procedure in LeaFTL; (2) the LPA lookup overhead in the learned segments; and (3) the overhead caused by address mispredictions in LeaFTL. We evaluate the performance of segment learning and address lookup on an ARM Cortex-A72 core, which is similar to the storage processors used in modern SSDs. The learning time for a batch of 256 mapping entries is 9.8–10.8 μs (see Table 3). As we learn one batch of index segments for every 256 flash writes, the learning overhead is only 0.02% of their flash write latency.
In LeaFTL, an LPA lookup takes 40.2–67.5 ns, as the binary search over segments is fast and some segments can be cached in the processor cache. The lookup time is slightly higher as we increase γ, due to the additional CRB accesses. We also profile the cumulative distribution function (CDF) of the number of levels searched for each LPA lookup, and present the results in Figure 23 (a). For most of the tested workloads, 90% of the mapping table lookups can be fulfilled at the topmost level, and 99% of the lookups are within 10 levels. Although the MSR-prn workload requires more lookups than the other workloads, it only checks 1.4 levels on average. We also evaluate the performance overhead of the LPA lookup on the real SSD, and show the results in Figure 23 (b). The extra lookup overhead for each flash read is 0.21% on average, and for 99.99% of all the lookups, the additional overhead is less than 1% of the flash access latency.

Table 3: Overhead source of LeaFTL with an ARM core.
  γ = 0: Learning (256 LPAs) 9.8 μs;  Lookup (per LPA) 40.2 ns
  γ = 1: Learning (256 LPAs) 10.8 μs; Lookup (per LPA) 60.5 ns
  γ = 4: Learning (256 LPAs) 10.8 μs; Lookup (per LPA) 67.5 ns
LeaFTL also has low misprediction ratios with approximate segments. This is because LeaFTL can still learn accurate segments even when γ > 0, and not all entries in the approximate segments result in mispredictions. As shown in Figure 24, most of the workloads achieve a misprediction ratio of less than 10% when γ = 16, and we obtain similar misprediction ratios on the real SSD prototype. Note that each misprediction only incurs one flash read access with the help of our proposed OOB verification.

4.6 Impact on SSD Lifetime
The flash blocks of an SSD can only undergo a certain number of writes. In this experiment, we use the write amplification factor (WAF, the ratio between the actual and requested flash writes) to evaluate the impact on SSD lifetime; the SSD ages faster if the WAF is larger. As shown in Figure 25, the WAF of LeaFTL is comparable to that of DFTL and SFTL. DFTL has a larger WAF for most workloads. SFTL and LeaFTL occasionally flush translation pages to the flash blocks, but this cost is negligible.
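Written out, the metric reported in Figure 25 is simply:

\[
\mathrm{WAF} \;=\; \frac{\text{flash writes actually performed by the device}}{\text{flash writes requested by the host}}
\]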
5 DISCUSSION
Why Linear Regression. Unlike deep neural networks, the linear regression used in LeaFTL is simple and lightweight: it takes only a few microseconds to learn an index segment with the embedded ARM processors available in modern SSD controllers. In addition, the linear regression algorithm has been well studied and offers guaranteed error bounds for its learned results. LeaFTL is the first work that uses learning techniques to solve a critical system problem (i.e., address mapping) in SSDs.

Adaptivity of LeaFTL. LeaFTL focuses on page-level address translation, so its design and implementation are not affected by the low-level flash memory organization (i.e., TLC/QLC). As TLC/QLC techniques are used to further increase SSD capacity, the address mapping issue becomes even more critical, since the SSD DRAM capacity does not scale well and becomes the bottleneck for caching address mappings and user data.

Recovery of Learned Index Segments. As discussed in §3.8, using a battery or a large capacitor to preserve and persist the cached segments upon failures or crashes would simplify the recovery procedure significantly. In our real SSD prototype, we do not assume that battery-backed DRAM is available. Thus, we follow the conventional recovery approach in modern SSDs [20, 23] and scan flash blocks in parallel by utilizing the channel-level parallelism. When we run real workloads like TPCC on the SSD prototype, we intentionally reboot the system after running the workload for a period of time (0.5–3 hours).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We find that the system can recover in 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='8 minutes on average whenever the reboot happens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is similar to the time of recovering the conventional page-level mapping table in DFTL [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' This is mostly caused by scanning the blocks in a channel (70MB/s per channel in our SSD prototype), and the time for reconstructing recently learned segments is rela- tively low (101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='3 milliseconds on average).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' We believe the recovery LeaFTL: A Learning-based Flash-Translation Layer for Solid-State Drives MSR-hm MSR-src2 MSR-prxy MSR-prn MSR-usr FIU-home FIU-mail SEATS AMark TPCC OLTP CompF 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='5 Write Amplification DFTL SFTL LeaFTL SSD Simulator Real SSD Figure 25: Write amplification factor of LeaFTL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' time is not much of a concern as the recovery does not happen frequently in reality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' And the recovery can be accelerated as we increase the channel-level bandwidth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In addition, if an SSD can tolerate more data losses, we can still ensure the crash consistency by only loading the stored index segments from flash chips, which requires minimum recovery time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 6 RELATED WORK Address Translation for SSDs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' A variety of FTL optimizations have been proposed [8, 12, 20, 25, 28, 34, 49, 50].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' These works ex- ploited the data locality of flash accesses to improve the cache efficiency of the mapping table.' 
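As a rough sanity check on these numbers, the back-of-envelope sketch below relates the reported 70 MB/s per-channel scan bandwidth and the 101.3 ms segment re-learning time to the observed recovery latency. The per-channel capacity used here is an assumed value for illustration only; it is not reported in this section.

```python
# Back-of-envelope recovery estimate. Only the 70 MB/s scan bandwidth and the
# 101.3 ms re-learning time come from the text above; the per-channel
# capacity is a hypothetical value chosen for illustration.
SCAN_BW_MB_PER_S = 70            # per-channel scan bandwidth of the prototype
ASSUMED_GB_PER_CHANNEL = 64      # hypothetical capacity scanned per channel
RELEARN_SECONDS = 0.1013         # reported average segment re-learning time

scan_seconds = ASSUMED_GB_PER_CHANNEL * 1024 / SCAN_BW_MB_PER_S
print(f"scan: {scan_seconds / 60:.1f} min, re-learning: {RELEARN_SECONDS * 1e3:.1f} ms")
# With these assumptions the channel scan alone takes about 15.6 minutes,
# consistent with the 15.8-minute average above, and re-learning the segments
# is a negligible fraction of the total recovery time.
```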
6 RELATED WORK
Address Translation for SSDs. A variety of FTL optimizations have been proposed [8, 12, 20, 25, 28, 34, 49, 50]. These works exploit the data locality of flash accesses to improve the cache efficiency of the mapping table. However, most of them were developed with human-driven heuristics. An alternative approach is to integrate application semantics into the FTL, such as the content-aware FTL [7]. However, such schemes are application specific and require significant changes to the FTL. LeaFTL is a generic solution and does not require application semantics in its learning. Researchers have also proposed to integrate the FTL mapping table into the host [18, 23, 26, 66]. Typical examples include DFS [26], Nameless Writes [66], FlashMap [23], and FlatFlash [4]. LeaFTL is orthogonal to them and can be applied to further reduce their memory footprint.

Machine Learning for Storage. Recent studies have used learning techniques to build indexes such as B-trees, log-structured merge trees, hashmaps, and Bloom filters [11, 14, 15, 32, 33, 42] for in-memory datasets, identify optimal cache replacement and prefetching policies [40, 53, 56, 57], facilitate efficient storage harvesting [52], and drive the development of software-defined storage [24]. LeaFTL applies learning techniques to optimize the address mapping. However, unlike existing optimizations such as the learned page table for virtual memory [43, 63], which used deep neural networks to learn the patterns, LeaFTL provides a lightweight solution.

SSD Hardware Development. Despite recent SSD innovations [3, 17, 19, 47] like Z-SSD [55], KVSSD [35], and ZNS SSD [21], DRAM capacity and the storage processor are still the main constraints in SSD controllers. As we scale the storage capacity, the address translation challenge only becomes worse. Researchers have recently deployed hardware accelerators inside SSD controllers for near-data computing [36, 41, 54, 58]. As future work, we wish to extend LeaFTL with in-storage accelerators to deploy more powerful learning models.

7 CONCLUSION
We present LeaFTL, a learning-based flash translation layer for SSDs. LeaFTL automatically learns different flash access patterns and builds space-efficient indexes, which reduces the address mapping size and improves the caching efficiency in the SSD controller. Our evaluation shows that LeaFTL improves the SSD performance by 1.4× on average for a variety of storage workloads.

ACKNOWLEDGMENTS
We thank the anonymous reviewers for their helpful comments and feedback. This work is partially supported by NSF CAREER Award 2144796, CCF-1919044, and CNS-1850317.

REFERENCES
[1] 2019. A Closer Look At SSD Power Loss Protection. https://www.kingston.com/en/blog/servers-and-data-centers/ssd-power-loss-protection.
[2] 2020. Harnessing Microcontrollers to Deliver Intelligent SSD Power Management and PLP Capabilities. https://www.atpinc.com/de/about/stories/microcontroller-SSD-power-loss-protection.
[3] 3D NAND – An Overview. 2022. https://www.simms.co.uk/tech-talk/3d-nand-overview/.
[4] Ahmed Abulila, Vikram Sharma Mailthody, Zaid Qureshi, Jian Huang, Nam Sung Kim, Jinjun Xiong, and Wen-mei Hwu. 2019. FlatFlash: Exploiting the Byte-Accessibility of SSDs within A Unified Memory-Storage Hierarchy. In Proceedings of the 24th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'19). Providence, RI.
[5] Nitin Agrawal, Vijayan Prabhakaran, Ted Wobber, John D. Davis, Mark Manasse, and Rina Panigrahy. 2008. Design Tradeoffs for SSD Performance. In Proceedings of the USENIX 2008 Annual Technical Conference (ATC'08). Boston, Massachusetts.
[6] Yu Cai, Saugata Ghose, Erich F. Haratsch, Yixin Luo, and Onur Mutlu. 2017. Error characterization, mitigation, and recovery in flash-memory-based solid-state drives. Proc. IEEE 105, 9 (2017), 1666–1704.
[7] Feng Chen, Tian Luo, and Xiaodong Zhang. 2011. CAFTL: A Content-Aware Flash Translation Layer Enhancing the Lifespan of Flash Memory based Solid State Drives. In Proceedings of the 9th USENIX Conference on File and Storage Technologies (FAST'11). San Jose, CA.
[8] Renhai Chen, Zhiwei Qin, Yi Wang, Duo Liu, Zili Shao, and Yong Guan. 2014. On-demand block-level address mapping in large-scale NAND flash storage systems. IEEE Trans. Comput. 64, 6 (2014), 1729–1741.
[9] Tae-Sun Chung, Dong-Joo Park, and Jongik Kim. 2011. LSTAFF*: An Efficient Flash Translation Layer for Large Block Flash Memory. In Proceedings of the 2011 ACM Symposium on Applied Computing (SAC'11). TaiChung, Taiwan.
[10] Curtis R. Cook and Do Jin Kim. 1980. Best sorting algorithm for nearly sorted lists. Commun. ACM 23, 11 (1980), 620–624.
[11] Yifan Dai, Yien Xu, Aishwarya Ganesan, Ramnatthan Alagappan, Brian Kroth, Andrea Arpaci-Dusseau, and Remzi Arpaci-Dusseau. 2020. From WiscKey to Bourbon: A Learned Index for Log-Structured Merge Trees. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI'20). Virtual Event.
[12] Niv Dayan, Philippe Bonnet, and Stratos Idreos. 2016. GeckoFTL: Scalable Flash Translation Techniques For Very Large Flash Devices. In Proceedings of the International Conference on Management of Data (SIGMOD'16). San Francisco, CA.
[13] Djellel Eddine Difallah, Andrew Pavlo, Carlo Curino, and Philippe Cudré-Mauroux. 2013. OLTP-Bench: An Extensible Testbed for Benchmarking Relational Databases. PVLDB 7, 4 (2013).
[14] Paolo Ferragina, Fabrizio Lillo, and Giorgio Vinciguerra. 2020. Why Are Learned Indexes So Effective? In Proceedings of the 37th International Conference on Machine Learning (ICML'20). Virtual Event.
[15] Paolo Ferragina and Giorgio Vinciguerra. 2020. The PGM-Index: A Fully-Dynamic Compressed Learned Index with Provable Worst-Case Bounds. Proceedings of the VLDB Endowment 13, 8 (April 2020).
[16] FIU. 2009. FIU Server Traces.
[17] Flash Memory. 2022. https://en.wikipedia.org/wiki/Flash_memory.
[18] Fusion-io Directcache: Transparent Storage Accelerator. 2011. http://www.fusionio.com/systems/directcache/.
[19] Gartner. 2017. Forecast Overview: NAND Flash, Worldwide, 2017. https://www.gartner.com/doc/3745121/forecast-overview-nand-flash-worldwide
[20] Aayush Gupta, Youngjae Kim, and Bhuvan Urgaonkar. 2009. DFTL: A Flash Translation Layer Employing Demand-based Selective Caching of Page-level Address Mappings. In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'09). Washington, DC.
[21] Kyuhwa Han, Hyunho Gwak, Dongkun Shin, and Joo-Young Hwang. 2021. ZNS+: Advanced Zoned Namespace Interface for Supporting In-Storage Zone Compaction. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI'21). 147–162.
[22] Jian Huang, Anirudh Badam, Laura Caulfield, Suman Nath, Sudipta Sengupta, Bikash Sharma, and Moinuddin K. Qureshi. 2017. FlashBlox: Achieving Both Performance Isolation and Uniform Lifetime for Virtualized SSDs. In Proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST'17). Santa Clara, CA.
[23] Jian Huang, Anirudh Badam, Moinuddin K. Qureshi, and Karsten Schwan. 2015. Unified Address Translation for Memory-mapped SSDs with FlashMap. In Proceedings of the 42nd Annual International Symposium on Computer Architecture (ISCA'15). Portland, OR.
[24] Jian Huang, Daixuan Li, and Jinghan Sun. 2022. Learning to Drive Software-Defined Storage. Workshop on Machine Learning for Systems at NIPS'22 (2022).
[25] Song Jiang, Lei Zhang, XinHao Yuan, Hao Hu, and Yu Chen. 2011. S-FTL: An Efficient Address Translation for Flash Memory by Exploiting Spatial Locality. In Proceedings of the 2011 IEEE 27th Symposium on Mass Storage Systems and Technologies (MSST'11). IEEE Computer Society.
[26] William K. Josephson, Lars A. Bongo, Kai Li, and David Flynn. 2010. DFS: A File System for Virtualized Flash Storage. ACM Trans. on Storage 6, 3 (2010), 14:1–14:25.
[27] Jun He, Sudarsun Kannan, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 2017. The Unwritten Contract of Solid State Drives. In Proceedings of the Twelfth European Conference on Computer Systems (EuroSys'17). Belgrade, Serbia.
[28] Dawoon Jung, Jeong-Uk Kang, Heeseung Jo, Jin-Soo Kim, and Joonwon Lee. 2010. Superblock FTL: A superblock-based flash translation layer with a hybrid address translation scheme. ACM Transactions on Embedded Computing Systems (TECS) 9, 4 (2010), 1–41.
[29] Jeong-Uk Kang, Heeseung Jo, Jinsoo Kim, and Joonwon Lee. 2006. A Superblock-Based Flash Translation Layer for NAND Flash Memory. In Proceedings of the 6th International Conference on Embedded Software (EMSOFT'06). Seoul, South Korea.
[30] Luyi Kang, Yuqi Xie, Weiwei Jia, Xiaohao Wang, Jongryool Kim, Changhwan Youn, Myeong Joon Kang, Jin Lim, Bruce Jacob, and Jian Huang. 2021. IceClave: A Trusted Execution Environment for In-Storage Computing. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO'21). Virtual Event.
[31] Jesung Kim, Jong Min Kim, S. H. Noh, Sang Lyul Min, and Yookun Cho. 2002. A space-efficient flash translation layer for CompactFlash systems. IEEE Transactions on Consumer Electronics 48, 2 (2002).
[32] Andreas Kipf, Ryan Marcus, Alexander van Renen, Mihail Stoian, Alfons Kemper, Tim Kraska, and Thomas Neumann. 2020. RadixSpline: A Single-Pass Learned Index. In Proceedings of the Third International Workshop on Exploiting Artificial Intelligence Techniques for Data Management (aiDM'20). Portland, Oregon.
[33] Tim Kraska, Alex Beutel, Ed H. Chi, Jeffrey Dean, and Neoklis Polyzotis. 2018. The Case for Learned Index Structures. In Proceedings of the 2018 International Conference on Management of Data (SIGMOD'18). Houston, TX, USA.
[34] Hunki Kwon, Eunsam Kim, Jongmoo Choi, Donghee Lee, and Sam H. Noh. 2010. Janus-FTL: Finding the optimal point on the spectrum between page and block mapping schemes. In Proceedings of the Tenth ACM International Conference on Embedded Software. 169–178.
[35] Samsung Memory Solutions Lab. 2017. Samsung Key Value SSD enables High Performance Scaling. https://www.samsung.com/semiconductor/global.semi.static/Samsung_Key_Value_SSD_enables_High_Performance_Scaling-0.pdf.
[36] Joo Hwan Lee, Hui Zhang, Veronica Lagrange, Praveen Krishnamoorthy, Xiaodong Zhao, and Yang Seok Ki. 2020. SmartSSD: FPGA accelerated near-storage data analytics on SSD. IEEE Computer Architecture Letters 19, 2 (2020), 110–113.
[37] Sungjin Lee, Ming Liu, Sangwoo Jun, Shuotao Xu, Jihong Kim, and Arvind. 2016. Application-managed flash. In Proceedings of the 14th USENIX Conference on File and Storage Technologies (FAST'16). 339–353.
[38] Sungjin Lee, Dongkun Shin, Young-Jin Kim, and Jihong Kim. 2008. LAST: Locality-Aware Sector Translation for NAND Flash Memory-Based Storage Systems. ACM SIGOPS Operating Systems Review (2008).
[39] Sang-Won Lee, Dong-Joo Park, Tae-Sun Chung, Dong-Ho Lee, Sangwon Park, and Ha-Joo Song. 2007. A Log Buffer-Based Flash Translation Layer Using Fully-Associative Sector Translation. ACM Transactions on Embedded Computing Systems 6, 3 (2007), 18:1–18:27.
[40] Evan Liu, Milad Hashemi, Kevin Swersky, Parthasarathy Ranganathan, and Junwhan Ahn. 2020. An imitation learning approach for cache replacement. In International Conference on Machine Learning. PMLR, 6237–6247.
[41] Vikram Sharma Mailthody, Zaid Qureshi, Weixin Liang, Ziyan Feng, Simon Garcia de Gonzalo, Youjie Li, Hubertus Franke, Jinjun Xiong, Jian Huang, and Wen-mei Hwu. 2019. DeepStore: In-Storage Acceleration for Intelligent Queries. In Proceedings of the 52nd IEEE/ACM International Symposium on Microarchitecture (MICRO'19). Columbus, OH.
[42] Ryan Marcus, Emily Zhang, and Tim Kraska. 2020. CDFShop: Exploring and Optimizing Learned Index Structures. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data (SIGMOD'20). Portland, OR, USA. https://doi.org/10.1145/3318464.3384706
[43] Artemiy Margaritov, Dmitri Ustiugov, Edouard Bugnion, and Boris Grot. 2018. Virtual Address Translation via Learned Page Table Indexes. In Proceedings of the Workshop on ML for Systems at NeurIPS. Montreal, Canada.
[44] Kiran Kumar Matam, Gunjae Koo, Haipeng Zha, Hung-Wei Tseng, and Murali Annavaram. 2019. GraphSSD: Graph Semantics Aware SSD. In Proceedings of the 46th International Symposium on Computer Architecture (ISCA'19). Phoenix, Arizona.
[45] Microsoft. 2007. MSR Cambridge Traces.
[46] Jian Ouyang, Shiding Lin, Song Jiang, Yong Wang, Wei Qi, Jason Cong, and Yuanzheng Wang. 2014. SDF: Software-Defined Flash for Web-Scale Internet Storage Systems. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'14). Salt Lake City, UT.
[47] Over 50 years of development history of Flash Memory Technology. 2019. https://www.elinfor.com/knowledge/over-50-years-of-development-history-of-flash-memory-technology-p-11271.
[48] Nikolaos Papandreou, Haralampos Pozidis, Nikolas Ioannou, Thomas Parnell, Roman Pletka, Milos Stanisavljevic, Radu Stoica, Sasa Tomic, Patrick Breen, Gary Tressler, et al. 2020. Open block characterization and read voltage calibration of 3D QLC NAND flash. In 2020 IEEE International Reliability Physics Symposium (IRPS). IEEE, 1–6.
[49] Chanik Park, Wonmoon Cheon, Jeonguk Kang, Kangho Roh, Wonhee Cho, and Jin-Soo Kim. 2008. A reconfigurable FTL (flash translation layer) architecture for NAND flash-based applications. ACM Transactions on Embedded Computing Systems (TECS) 7, 4 (2008), 1–23.
[50] Zhiwei Qin, Yi Wang, Duo Liu, and Zili Shao. 2010. Demand-based block-level address mapping in large-scale NAND flash storage systems. In Proceedings of the Eighth IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis.
[51] Benjamin Reidys, Peng Liu, and Jian Huang. 2022. RSSD: Defend against Ransomware with Hardware-Isolated Network-Storage Codesign and Post-Attack Analysis. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'22). Lausanne, Switzerland.
[52] Benjamin Reidys, Jinghan Sun, Anirudh Badam, Shadi Noghabi, and Jian Huang. 2022. BlockFlex: Enabling Storage Harvesting with Software-Defined Flash in Modern Cloud Platforms. In Proceedings of the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI'22). Carlsbad, CA.
[53] Liana V. Rodriguez, Farzana Yusuf, Steven Lyons, Eysler Paz, Raju Rangaswami, Jason Liu, Ming Zhao, and Giri Narasimhan. 2021. Learning Cache Replacement with CACHEUS. In Proceedings of the 19th USENIX Conference on File and Storage Technologies (FAST'21). 341–354.
[54] Zhenyuan Ruan, Tong He, and Jason Cong. 2019. INSIDER: Designing In-Storage Computing System for Emerging High-Performance Drive. In Proceedings of the 2019 USENIX Annual Technical Conference (USENIX ATC'19). Renton, WA.
[55] Samsung Z-NAND. 2019. https://www.samsung.com/semiconductor/ssd/z-ssd/.
[56] Subhash Sethumurugan, Jieming Yin, and John Sartori. 2021. Designing a Cost-Effective Cache Replacement Policy using Machine Learning. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 291–303.
[57] Zhan Shi, Xiangru Huang, Akanksha Jain, and Calvin Lin. 2019. Applying deep learning to the cache replacement problem. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture. 413–425.
[58] 2018. SmartSSD Computational Storage Drive. https://www.xilinx.com/applications/data-center/computational-storage/smartssd.html.
[59] Vasily Tarasov, Erez Zadok, and Spencer Shepler. 2016. Filebench: A flexible framework for file system benchmarking. The USENIX Magazine 41, 1 (2016).
[60] Usman Saleem. 2022. Advanced SSD Buying Guide - NAND Types, DRAM Cache, HMB Explained. https://appuals.com/ssd-buying-guide/.
[61] Dana Van Aken, Djellel E. Difallah, Andrew Pavlo, Carlo Curino, and Philippe Cudré-Mauroux. 2015. BenchPress: Dynamic Workload Control in the OLTP-Bench Testbed. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD'15).
[62] Xiaohao Wang, Yifan Yuan, You Zhou, Chance C. Coats, and Jian Huang. 2019. Project Almanac: A Time-Traveling Solid-State Drive. In Proceedings of the 14th European Conference on Computer Systems (EuroSys'19). Dresden, Germany.
[63] Nan Wu and Yuan Xie. 2021. A Survey of Machine Learning for Computer Architecture and Systems. CoRR abs/2102.07952 (2021). https://arxiv.org/abs/2102.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content='07952 [64] Qing Xie, Chaoyi Pang, Xiaofang Zhou, Xiangliang Zhang, and Ke Deng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Maximum Error-Bounded Piecewise Linear Representation for Online Stream Approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Proceedings of the VLDB Journal 23, 6 (Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' [65] Qing Xie, Chaoyi Pang, Xiaofang Zhou, Xiangliang Zhang, and Ke Deng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Maximum error-bounded piecewise linear representation for online stream ap- proximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' The VLDB journal 23, 6 (2014), 915–937.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' [66] Yiying Zhang, Leo Prasath Arulraj, Andrea C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Arpaci-Dusseau, and Remzi H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' Arpaci-Dusseau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' De-indirection for Flash-based SSDs with Nameless Writes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' In Proceedings of the 10th USENIX Conference on File and Storage Technologies (FAST’12).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'} +page_content=' San Jose, CA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9AyT4oBgHgl3EQfRvd3/content/2301.00072v1.pdf'}