AdamLucek committed on
Commit 2dd06fc · verified · 1 Parent(s): 50b6d1a

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +17 -23
README.md CHANGED
@@ -1,36 +1,17 @@
 ---
 language:
 - en
-pretty_name: quickb-kb
+pretty_name: "quickb-kb"
 tags:
 - quickb
 - text-chunking
-- unknown
+- 1K<n<10K
 task_categories:
 - text-generation
 - text-retrieval
 task_ids:
-- document-retrieval
+- document-retrieval
 library_name: quickb
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: id
-    dtype: string
-  - name: text
-    dtype: string
-  - name: source
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 1135297
-    num_examples: 2807
-  download_size: 553288
-  dataset_size: 1135297
 ---
 
 # quickb-kb
@@ -39,7 +20,20 @@ Generated using [QuicKB](https://github.com/AdamLucek/quickb), a tool developed
 
 QuicKB optimizes document retrieval by creating fine-tuned knowledge bases through an end-to-end pipeline that handles document chunking, training data generation, and embedding model optimization.
 
-
+### Chunking Configuration
+- **Chunker**: RecursiveTokenChunker
+- **Parameters**:
+  - **chunk_size**: `400`
+  - **chunk_overlap**: `0`
+  - **length_type**: `'character'`
+  - **separators**: `['\n\n', '\n', '.', '?', '!', ' ', '']`
+  - **keep_separator**: `True`
+  - **is_separator_regex**: `False`
+
+### Dataset Statistics
+- Total chunks: 2,807
+- Average chunk size: 50.7 words
+- Source files: 10
 
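The chunking configuration added in this commit describes a recursive character splitter: try the earliest separator in the list that occurs in the text, merge the resulting pieces into chunks of at most `chunk_size` characters, and recurse on any oversize piece with the remaining separators. QuicKB's actual `RecursiveTokenChunker` lives in the QuicKB repository; the following is only a minimal self-contained sketch of that strategy using the parameters listed in the diff (`recursive_chunk` and its internals are illustrative names, not QuicKB's API):

```python
# Illustrative sketch of recursive character chunking with chunk_size=400,
# chunk_overlap=0, and character-based lengths, as in the card's config.
# This is NOT QuicKB's RecursiveTokenChunker, just the same general idea.

SEPARATORS = ["\n\n", "\n", ".", "?", "!", " ", ""]

def recursive_chunk(text: str, chunk_size: int = 400,
                    separators: list[str] = SEPARATORS) -> list[str]:
    """Split `text` into chunks of at most `chunk_size` characters,
    preferring the earliest separator in `separators` that occurs."""
    if len(text) <= chunk_size:
        return [text] if text else []
    # Pick the first separator present in the text ("" always matches
    # and falls back to splitting into single characters).
    sep, rest = separators[-1], []
    for i, s in enumerate(separators):
        if s == "" or s in text:
            sep, rest = s, separators[i + 1:]
            break
    # Split, keeping the separator attached to the preceding piece
    # (mirrors keep_separator=True).
    if sep:
        parts = [p + sep for p in text.split(sep)]
        parts[-1] = parts[-1][:-len(sep)]  # last piece had no trailing sep
    else:
        parts = list(text)
    # Greedily merge pieces up to chunk_size; recurse on oversize pieces
    # with the remaining, finer-grained separators.
    chunks, current = [], ""
    for part in parts:
        if len(current) + len(part) <= chunk_size:
            current += part
        else:
            if current:
                chunks.append(current)
            if len(part) > chunk_size:
                chunks.extend(recursive_chunk(part, chunk_size, rest))
                current = ""
            else:
                current = part
    if current:
        chunks.append(current)
    return chunks
```

With `chunk_overlap=0` and separators kept, the chunks concatenate back to the original text, which is a convenient invariant to check when experimenting with the parameters.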