Dataset Viewer
Column: text (string, 7 classes)
Values:
id,package_name,review,date,star,version_id
|
7bd227d9-afc9-11e6-aba1-c4b301cdf627,com.mantz_it.rfanalyzer,We present QLORA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48 GB GPU while preserving full 16-bit finetuning task performance. QLORA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLORA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes. We use QLORA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations, showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy for accurately evaluating the performance levels of chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.

Introduction

Finetuning large language models (LLMs) is a highly effective way to improve their performance [40,62,43,61,59,37] and to add desirable or remove undesirable behaviors [43,2,4]. However, finetuning very large models is prohibitively expensive; regular 16-bit finetuning of a LLaMA 65B parameter model [57] requires more than 780 GB of GPU memory. While recent quantization methods can reduce the memory footprint of LLMs [14,13,18,66], such techniques only work for inference and break down during training [65].

We demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any performance degradation. Our method, QLORA, uses a novel high-precision technique to quantize a pretrained model to 4-bit, then adds a small set of learnable Low Rank Adapter weights [28]. QLORA reduces the average memory requirements of finetuning a 65B parameter model from >780 GB of GPU memory to <48 GB without degrading the runtime or predictive performance compared to a 16-bit fully finetuned baseline.

Table 1: Elo ratings for a competition between models, averaged for 10,000 random initial orderings. The winner of a match is determined by GPT-4, which declares which response is better for a given prompt of the Vicuna benchmark. 95% confidence intervals are shown (±). After GPT-4, Guanaco 33B and 65B win the most matches, while Guanaco 13B scores better than Bard. [Table columns: Model, Size.]

This marks a significant shift in the accessibility of LLM finetuning: the largest publicly available models to date can now be finetuned on a single GPU. Using QLORA, we train the Guanaco family of models, with the second best model reaching 97.8% of the performance level of ChatGPT on the Vicuna [10] benchmark while being trainable in less than 12 hours on a single consumer GPU; using a single professional GPU over 24 hours we achieve 99.3% with our largest model, essentially closing the gap to ChatGPT on the Vicuna benchmark. When deployed, our smallest Guanaco model (7B parameters) requires just 5 GB of memory and outperforms a 26 GB Alpaca model by more than 20 percentage points on the Vicuna benchmark (Table 6).

QLORA introduces multiple innovations designed to reduce memory use without sacrificing performance: (1) 4-bit NormalFloat, an information theoretically optimal quantization data type for normally distributed data that yields better empirical results
|
id,package_name,review,date,star,version_id
|
7bd227d9-afc9-11e6-aba1-c4b301cdf627,com.mantz_it.rfanalyzer,Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.,October 12 2016,4,1487
|
7bd22905-afc9-11e6-a5dc-c4b301cdf627,com.mantz_it.rfanalyzer,Great It's not fully optimised and has some issues with crashing but still a nice app especially considering the price and it's open source.,August 23 2016,4,1487
|
7bd2299c-afc9-11e6-85d6-c4b301cdf627,com.mantz_it.rfanalyzer,Works on a Nexus 6p I'm still messing around with my hackrf but it works with my Nexus 6p Trond usb-c to usb host adapter. Thanks!,August 04 2016,5,1487
|
7bd22a26-afc9-11e6-9309-c4b301cdf627,com.mantz_it.rfanalyzer,The bandwidth seemed to be limited to maximum 2 MHz or so. I tried to increase the bandwidth but not possible. I purchased this is because one of the pictures in the advertisement showed the 2.4GHz band with around 10MHz or more bandwidth. Is it not possible to increase the bandwidth? If not it is just the same performance as other free APPs.,July 25 2016,3,1487
|
7bd22aba-afc9-11e6-8293-c4b301cdf627,com.mantz_it.rfanalyzer,Works well with my Hackrf Hopefully new updates will arrive for extra functions,July 22 2016,5,1487
|
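The QLoRA text embedded in the first data row above describes the method's main pieces: a frozen base model quantized to 4-bit NormalFloat with Double Quantization, small trainable LoRA adapters, and Paged Optimizers to absorb memory spikes. Below is a minimal sketch of that recipe, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the base model id, LoRA hyperparameters, and optimizer choice are illustrative assumptions, not taken from this dataset.

```python
# Minimal QLoRA-style setup sketch: frozen 4-bit NF4 base model + trainable
# LoRA adapters. Model id and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-7b"  # illustrative base model

# (a) 4-bit NormalFloat quantization with (b) Double Quantization of the
# quantization constants; compute happens in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Freeze the quantized base weights and attach small trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# (c) A paged optimizer can then be selected when training, e.g. via
# transformers.TrainingArguments(optim="paged_adamw_32bit", ...).
```

During finetuning only the adapter parameters receive gradients; the 4-bit base weights stay frozen, which is what keeps the memory footprint within a single GPU.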
No dataset card yet
Downloads last month: 14
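Since there is no dataset card, the schema can only be inferred from the preview above (columns id, package_name, review, date, star, version_id). A minimal sketch of parsing such rows with pandas, assuming the split is distributed as a CSV file; the file name reviews.csv is a placeholder, not the dataset's documented path.

```python
# Sketch: load app-review rows matching the preview's header
# (id, package_name, review, date, star, version_id).
# "reviews.csv" is an assumed local file name, not a documented path.
import pandas as pd

df = pd.read_csv("reviews.csv")

# Dates in the preview look like "October 12 2016"; coerce anything
# that does not match (e.g. the truncated QLoRA row) to NaT.
df["date"] = pd.to_datetime(df["date"], format="%B %d %Y", errors="coerce")

# Example: average star rating per app package.
print(df.groupby("package_name")["star"].mean())
```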