diff --git "a/89A0T4oBgHgl3EQfOv-8/content/tmp_files/load_file.txt" "b/89A0T4oBgHgl3EQfOv-8/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/89A0T4oBgHgl3EQfOv-8/content/tmp_files/load_file.txt" @@ -0,0 +1,509 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf,len=508 +page_content='Identification of lung nodules CT scan using YOLOv5 based on convolution neural network Haytham Al Ewaidat ID 1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content='*,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Youness El Brag ID 2 1Jordan University of Science and Technology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Faculty of Applied Medical Sciences,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Department of Allied Medical Sciences-Radiologic Technology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Irbid,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Jordan,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' 22110 2Abdelmalek Essaˆadi University of Science and Technology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Faculty of Multi-Disciplinary Larache,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Department of Computer Sciences,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' ksar el kebir ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Morocco,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' 92150 Correspondence author: Dr Haytham Al Ewaidat,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Department of Allied Medical Sciences-Radiologic Technology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Faculty of Applied Medical Sciences,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Jordan University of Science and Technology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' PO Box 3030, Irbid 22110, Jordan Tel: (+962)27201000-26939;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' Fax: (+962)27201087;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89A0T4oBgHgl3EQfOv-8/content/2301.02166v1.pdf'} +page_content=' E-mail: haewaidat@just.' 
arXiv:2301.02166v1 [eess.IV] 31 Dec 2022

Abstract

Purpose: Localizing lung nodules in CT scan images is a difficult task because of the arbitrariness of nodule shape, size, and texture. This is a challenge that must be faced when developing solutions to improve detection systems. Deep learning approaches based on convolutional neural networks (CNNs) have shown promising results, especially for image recognition, and CNNs are among the most widely used algorithms in computer vision.

Approach: We use CNN building blocks based on YOLOv5 (You Only Look Once) to learn feature representations for nodule detection labels; in this paper, we introduce a method for detecting and localizing lung cancer. Chest X-rays and low-dose computed tomography are also possible screening methods, and computer-aided diagnostic (CAD) systems based on CNNs have demonstrated their worth in recognizing nodules in radiography. The one-stage detector YOLOv5 was trained on 280 annotated CT scans from the public LIDC-IDRI dataset based on segmented pulmonary nodules.

Results: We analyze the prediction performance for lung nodule locations and demarcate the relevant CT scan regions. For lung nodule localization, accuracy is measured as mean average precision (mAP); mAP takes into account how well the bounding boxes fit the labels as well as how accurate the predicted classes for those bounding boxes are. The accuracy we obtained is 92.27%.

Conclusion: The aim of this study was to identify the nodules that were developing in the lungs of the participants.
It was difficult to find information on lung nodules in the medical literature.

Keywords: computer-aided diagnostic, deep learning, convolutional neural networks, lung nodule.

Address all correspondence to Haytham Al Ewaidat, haewaidat@just.edu.jo

1 Introduction

As far as noninvasive therapy and clinical assessment are concerned, medical image analysis offers a tremendous advantage. X-rays, CT, MRI, and ultrasound are utilized to make precise diagnoses based on the obtained images. In medical imaging, CT captures cross-sectional pictures that can be recorded on film. Lung cancer alone is responsible for 1.61 million fatalities per year. Most of the cases of lung cancer in Indonesia are observed in the MIoT centers. If the tumor is identified early, the survival percentage is better. Finding lung cancer in its early stages is not an easy task: approximately 80% of cancer patients are diagnosed at an intermediate or advanced phase of the disease. Lung cancer is the second most common cancer among men and the tenth most common among women worldwide; after breast and colorectal cancer, lung cancer is the third most common cancer among women. Feature extraction in image processing is one of the simplest and most efficient dimensionality reduction approaches. The non-invasive nature of CT imaging is one of its most notable characteristics.
It also offers surprisingly more viewing angles when compared with other imaging modalities. Computed tomography imaging is the best technique for examining lung disorders. CT scans, on the other hand, have a high probability of false-positive results and are associated with cancer-causing radiation exposure. Compared with standard-dose CT, low-dose CT uses far less radiation exposure, and the findings reveal that the detection sensitivity of low-dose and standard-dose CT images is not significantly different. The well-known National Lung Screening Trial database shows that cancer-related fatalities were considerably decreased in the group that received low-dose CT scans rather than chest radiography. The sensitivity of lung nodule identification may be improved by the use of more detailed anatomical information and better image registration methods. As a result, the datasets have grown enormously: up to 500 segments/slices may be generated from a single scan, depending on how thick the slice is. A single slice is examined by a competent radiologist in 2-3.5 minutes, so a radiologist's workload keeps rising while screening a CT for the presence of a suspicious nodule. The detection sensitivity of nodules is influenced by a variety of factors, including the size, location, form, nearby structures, edges, and density, in addition to the CT slice section thickness. Only 68 percent of lung cancer nodules are properly identified when only one radiologist views the scan, and up to 82% when two radiologists check the scan, according to published study results.
Early diagnosis of malignant lung nodules by radiologists is a tough, time-consuming, and laborious process in and of itself. The radiologist needs a lot of time to carefully screen a large number of images, and this process is prone to mistakes when looking for microscopic nodules. An aid for radiologists is therefore required to speed up readings, catch any missed nodules, and enable improved localization. A primary goal of computer-aided detection systems was to minimize radiologists' labor and boost the detection rate of nodules. Newer CAD systems, moreover, can distinguish between benign and malignant nodules, which is helpful in the screening process. CAD systems frequently beat professional radiologists in nodule identification and localization tasks because of recent breakthroughs in deep learning models, particularly in image processing. CAD systems, on the other hand, have an FP rate of 1-8.2 per scan and a detection range of 38-100%, according to different studies. Because of their likeness to one another, distinguishing benign and malignant nodules remains a difficult challenge. During the screening process, a variety of mistakes might occur; for example, if a scan fails to capture or recognize a specific region of the lesion, or fails to distinguish between benign and malignant lesions in a patient's body, the patient may be at risk of misdiagnosis. Many people die as a result of the misdiagnoses and treatment delays caused by these mistakes. In radiology, over 4% of daily reports include diagnostic mistakes, and about 30% of abnormal radiological findings are missed.
Early-stage lung nodules may be detected and classified more accurately using methodologies such as deep learning. Lung nodule identification using deep learning with a specific methodology is presented in this research. By combining lung CT images, physiological symptoms, and clinical indicators, the suggested approach reduces false-positive findings and eventually prevents invasive procedures. YOLOv5, whose convolutional networks were built to identify and classify objects, is used here for nodule identification. Nodule identification and classification using the publicly accessible LIDC-IDRI dataset surpasses state-of-the-art deep learning techniques. Using a variety of techniques, we were able to reduce the number of false positives in the learning algorithm. Lung nodule computer-aided detection (CAD) systems were originally developed in the late 1980s, but the processing resources required for sophisticated image analysis methods at the time made these efforts unattractive. For computer-based image analysis and decision support systems, the graphics processing unit and convolutional neural networks revolutionized performance. Some of the most important lung nodule identification and classification approaches have been suggested by researchers working on deep learning based medical image analysis models. For lung nodule classification, Yutong Xie et al.1 proposed a method that fuses texture, shape, and deep-model-learned information at the decision level. Nodule heterogeneity may be captured with this algorithm's GLCM-based texture descriptor, Fourier shape descriptor, and a DCNN.
Based on CNNs, Chougrad et al.2 studied the classification of breast cancer using a CAD framework. Transfer learning, in contrast, requires only a small number of medical images to train a system; with the transfer learning approach, the CNNs were trained to their fullest potential. In terms of accuracy, the CNN came out on top with a score of 98.94 percent. Using the wavelet transform and principal component analysis, Heba Mohsen et al.3 developed a DNN classifier for brain tumor classification. A technique of regularized linear discriminant analysis was developed in 2015 by Sharma et al.,4 which used a regularization parameter within a standard cross-validation methodology. An appropriate collection of characteristics is needed to evaluate medical data for illness prediction, and several evolutionary algorithms have been used to find the best possible traits; gravitational search and Elephant Herding optimization have recently been used to choose the best features.5 Another deep learning based model, created by Kuruvilla and Gunavathi in 2014, is an ANN-based cancer classification for CT scans. A statistical model used to classify the data was developed. Compared with feed-forward networks, feed-forward backpropagation networks are more accurate, according to that research, and classifier accuracy may be improved even further by using the skewness feature.6
Lung cancer detection and categorization is becoming more and more popular due to the rapid advancement of pattern recognition and image processing methods. Textural evaluation of thin-section CT images has been used in the literature to help distinguish various obstructive lung disorders. Attenuation distribution statistics, an acquisition-length parameter, and a co-occurrence descriptor are all included in the 13-dimensional vectors of local texture information developed by Chabat et al.,7 and a Bayesian classifier is used for feature segmentation. Five scalar metrics, maximum, entropy, energy, contrast, and homogeneity, were recovered from each co-occurrence matrix to minimize the dimensionality of the feature vector. The textural characteristics of solitary pulmonary nodules discovered by CT have been described and assessed by Yanjie Zhu et al.8 It took 300 generations for 67 characteristics to be retrieved; however, only 25 features were picked, and SVM-based classifiers are used for classification. For interstitial lung disease (ILD), Sang Cheol Park and colleagues9 used a genetic algorithm to identify the best picture attributes. Hiram et al.10 used the frequency domain and an SVM with an RBF kernel to classify lung nodules. Solitary pulmonary nodules may be automatically detected using an algorithm provided by Hong et al.;11 true nodules are identified and labeled on the original images using an SVM classifier. The LIDC-IDRI image database was used by Antonio et al.12 to classify lung nodules; ecological taxonomic diversity and taxonomic distinctness indexes are used for classification with an SVM.13
The results show a 98.11% accuracy rate. The mesh grid region growing approach was used in CT to select and analyze just the pixels that were most likely to be relevant to the diagnosis; the ILD status of all unselected pixels was determined to be negative. To recognize lung cancer cells, Zhi-Hua et al.14 presented Neural Ensemble-based Detection (NED), which makes use of an artificial neural network ensemble. Using this technology, it is possible to accurately identify cancer cells. An algorithm developed by Hui Chen et al.15 uses a neural network ensemble (NNE) to construct the categorization of a lung nodule on a thin-section CT image. A model suggested by Aggarwal, Furquan, and Kalra16 characterizes normal lung architecture, and segmentation is done using the best possible thresholds. Geometric, statistical, and grey-level properties are used to extract features, and classification is done using LDA. The accuracy is 84%, the sensitivity is 97%, and the specificity is 53%. An inference-based approach to identify lung cancer nodules has been developed by Roy, Sirohi, and Patle.17 To improve contrast, this technique employs grey-level transformations, and the image is segmented using an active contour model. The classifier is trained by extracting features such as area, mean, major axis length, and minor axis length. Overall, the system's accuracy rate is 94.12%.
This approach has the disadvantage that it does not distinguish between benign and malignant cancers. Other authors have used wavelet feature descriptors to classify lung nodules.18 One- and two-level decompositions of wavelet transformations are used in that case, and a total of 19 characteristics are derived from each wavelet sub-band. An SVM is used to distinguish between CT images that include malignant nodules and those that do not.

2 Material and Methods

In this section, we introduce our method for lung nodule localization. We use a one-stage method based on YOLOv5 detection; the methodology has been split into the following subsections to explain the whole process.

2.1 Dataset

For this research, the dataset has been collected from LIDC-IDRI. The LIDC-IDRI image collection includes thoracic CT scans with marked-up annotated lesions. It is a worldwide web-accessible resource for the development, training, and assessment of computer-assisted diagnostic (CAD) approaches for the detection and diagnosis of lung cancer. It is an example of a public-private partnership founded on consensus-based decision-making, a collaboration between the National Cancer Institute (NCI), the Foundation for the National Institutes of Health, and the Food and Drug Administration (FDA), spearheaded by the NCI and supported by the FDA. The data collection used here, which includes 239 CT images for training and 41 images for validation, is a subset of the original dataset. Some of the samples are shown in Fig. 1.

Fig. 1: Samples from the LIDC-IDRI lung cancer dataset.
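As a concrete illustration of the split described above (239 training and 41 validation images), the following minimal sketch shows one way such a YOLO-style dataset directory could be assembled; the folder names, file extensions, and label layout are assumptions for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch only: split exported CT slice images and their YOLO-format
# label files into train/val folders, following the 239/41 split reported above.
import random
import shutil
from pathlib import Path

random.seed(0)

src_images = sorted(Path("lidc_slices/images").glob("*.png"))  # hypothetical export location
random.shuffle(src_images)

train_imgs, val_imgs = src_images[:239], src_images[239:280]

for split, images in [("train", train_imgs), ("val", val_imgs)]:
    for sub in ("images", "labels"):
        Path(f"dataset/{sub}/{split}").mkdir(parents=True, exist_ok=True)
    for img in images:
        label = img.with_suffix(".txt").name  # YOLO label: class x_center y_center w h (normalized)
        shutil.copy(img, f"dataset/images/{split}/{img.name}")
        shutil.copy(Path("lidc_slices/labels") / label, f"dataset/labels/{split}/{label}")
```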
2.2 Pre-Processing Data

Real-world data tends to be fragmentary, noisy, and inconsistent, which may lead to low-quality data collection and, in turn, to low-quality models. Data preprocessing offers procedures that properly organize the data for better comprehension in the deep learning process and so address these challenges. The data preprocessing steps used in this research study are given in Fig. 2.

Fig. 2: Preprocessing steps for the images (input image, grayscale conversion, noise removal, and edge detection/filtering before the image is passed on for detection).
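To make the preprocessing chain in Fig. 2 concrete, here is a minimal sketch of how such steps (grayscale conversion, noise removal, and edge filtering) could be applied to an exported CT slice with OpenCV; the specific filter choices (median blur, Canny edges) are assumptions for illustration rather than the exact operations used in this work.

```python
# Illustrative preprocessing sketch (not the authors' exact pipeline):
# grayscale -> noise removal -> edge filtering, mirroring the stages in Fig. 2.
import cv2

def preprocess_slice(path: str):
    img = cv2.imread(path)                        # load exported CT slice
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    denoised = cv2.medianBlur(gray, 5)            # simple noise removal
    edges = cv2.Canny(denoised, 50, 150)          # edge detection / filtering
    return gray, denoised, edges

gray, denoised, edges = preprocess_slice("dataset/images/train/slice_0001.png")
```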
2.3 Model architecture

As discussed in the introduction, the YOLOv5 model is used in this research for feature extraction and detection of lung nodules in CT scans. Let us have a brief discussion of YOLOv5 and its architecture.

2.3.1 YOLOv5 for lung nodules localization

The whole structure of YOLOv4 ("Optimal Speed and Accuracy of Object Detection")19 is shown in Fig. 3, and an illustrative representation of YOLOv5 is shown in Fig. 4. The YOLO family of models consists of the three main components common to every single-stage object detector, and YOLOv5 has its own three main modules.

Fig. 3: Overview of the YOLOv5 building-block model architecture (Backbone: CSPDarknet; Neck: PANet; Head: YOLO layer; the blocks include BottleneckCSP, Conv, SPP, UpSample, and Concat; CSP = Cross Stage Partial Network, Conv = convolutional layer, SPP = spatial pyramid pooling, Concat = concatenation).

(1) Backbone (Fig. 3): it is mostly used to extract the most significant feature elements from the images that are provided. Cross Stage Partial Networks (CSP) form the backbone of YOLOv5's feature extraction and are used to extract an image's most informative details.
(2) Neck (Fig. 3): it is used to create feature pyramids. Feature pyramids help models generalize successfully with respect to object scaling and aid in identifying the same object at various scales and dimensions. Feature pyramids are quite valuable and can help models perform effectively on data that has never been examined. FPN, BiFPN, and PANet are among the structures used in feature pyramid models.

(3) Head (Fig. 3): it has layers that generate predictions from anchor boxes on the features and produce the final output vectors with probabilities, object class scores, and bounding boxes. YOLOv5 uses the following choices for training.20

Fig. 4: Model detection can be considered a regression problem: the input image is divided into S x S grid cells, and bounding boxes with confidence scores are predicted for each grid cell. (The figure shows the backbone extracting informative features, the neck elaborating feature pyramids, and the head producing bounding boxes with confidence scores, a class probability map, and the final localization of the lung nodule.)
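As a rough sketch of how these three modules are exercised in practice, the snippet below loads a YOLOv5 model through the Ultralytics torch.hub interface and reads back the head's outputs (bounding boxes, confidences, and class scores) for one CT slice; the weight file name and image path are hypothetical, and this is not the exact inference code used in this work.

```python
# Minimal inference sketch (assumes the public Ultralytics YOLOv5 hub interface).
# The backbone extracts features, the neck builds feature pyramids, and the head
# returns bounding boxes with confidence and class scores.
import torch

# 'custom' points at fine-tuned nodule weights; 'best.pt' is a hypothetical filename.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25  # confidence threshold for reported boxes

results = model("dataset/images/val/slice_0001.png", size=416)
detections = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, confidence, class]
for *box, conf, cls in detections.tolist():
    print(f"nodule box={box} confidence={conf:.2f}")
```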
2.3.2 Training Model

During the training and validation process, a total of 280 CT scan images are used, of which 239 CT scans are used for training and 41 for validation. For training, Google Colab is used, an online platform for training models that provides a free 16 GB GPU. The batch size was kept at 16 and the number of epochs was kept at 100. The splitting of the data can be seen in Fig. 5.

Fig. 5: Dataset splitting diagram for the CT scan images (280 samples in total: 239 training images and 41 validation images).
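For reference, a fine-tuning run with these settings could be launched from the Ultralytics YOLOv5 repository roughly as sketched below; the dataset YAML contents and file names are assumptions for illustration, the exact command used by the authors is not given in the paper, and the learning rate and optimizer listed in Table 1 are normally set through the repository's hyperparameter file rather than command-line flags.

```python
# Sketch of a fine-tuning run with the reported settings (image size 416, batch 16).
# Assumes a local clone of the Ultralytics YOLOv5 repository in ./yolov5;
# 'nodules.yaml' and the dataset paths are illustrative names.
import subprocess
from pathlib import Path

Path("nodules.yaml").write_text(
    "train: ../dataset/images/train\n"
    "val: ../dataset/images/val\n"
    "nc: 1\n"
    "names: ['Nodules']\n"
)

subprocess.run(
    [
        "python", "train.py",
        "--img", "416",        # input image size (Table 1)
        "--batch", "16",       # batch size (Table 1)
        "--epochs", "100",     # number of epochs reported in the text
        "--data", "../nodules.yaml",
        "--weights", "yolov5s.pt",  # start from pretrained weights
    ],
    cwd="yolov5",              # local clone of github.com/ultralytics/yolov5
    check=True,
)
```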
3 Results

The model had an initial leverage that allowed it to train faster and to predict the locations of lung nodules, demarcating the relevant CT scan regions. Before diving into the analysis of the results, it is necessary to explain the statistical machine learning background behind them; the explanation has been split into the following subsections to cover the whole analysis of the method we use.

3.1 Evaluation Metrics

In this section, we describe the charts of the evaluation metrics obtained from our experiment. In a computer-aided system, the main task is detecting the object inside the image. Common metrics for measuring the performance of detection algorithms such as YOLOv5 that are based on CNNs include recall, precision, F-score, mAP, the PR curve, the F1 curve, IoU,21 overlapping error, and boundary-based evaluation; the evaluation metrics we used are the mean average precision (mAP),22 the precision, and the F1 curve. We briefly explain them in the following part.

According to statistical machine learning theory, precision is a two-category statistical indicator. Precision measures how accurate our predictions are, that is, the percentage of our predictions that are correct, as shown in Fig. 8 and equation (1):

Precision = \frac{TP}{TP + FP}  (1)

Recall measures how many of the true bounding boxes were correctly predicted, as shown in equation (2):

Recall = \frac{TP}{TP + FN}  (2)

Moreover, it is necessary to define TP, FP, and FN for the nodule localization task:

(1) True positive (TP): the number of predicted boxes with IoU > t (in this work, the threshold t is 0.2); the same ground truth is counted only once.23
(2) False positive (FP): the number of predicted boxes with IoU <= t, or the number of redundant boxes that detect the same ground truth.
(3) False negative (FN): the number of ground truths not detected.

The IoU is a measure of the degree of overlap between two boundaries. We use it to measure how much our predicted box overlaps with the ground truth (the actual labeled box); the IoU calculation is shown graphically in Fig. 6, and example detections are shown in Fig. 7.

Fig. 6: Graphical representation of the Intersection over Union (IoU = 0.2) calculation on a narrow-band image. The light blue rectangle represents the ground-truth bounding box, while the red rectangle represents the model prediction. The IoU is calculated by dividing the area of overlap by the area of union.

After getting familiar with these definitions of statistical learning formulas, we introduce the mAP (mean average precision). The mAP compares the ground-truth bounding box to the detected box and returns a score; the higher the score, the more accurate the model is in its detections. The F1 score is defined as the harmonic mean of precision and recall, as shown in Fig. 10a:

F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}  (3)
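To tie equations (1)-(3) and the IoU threshold together, here is a small illustrative sketch of how these quantities could be computed for one image; the greedy matching rule (one match per ground truth at IoU > 0.2) follows the definitions above, but the helper itself is not code from the paper.

```python
# Illustrative computation of IoU, TP/FP/FN, precision, recall, and F1
# for one image, using the IoU > 0.2 matching rule described above.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2); IoU = area of overlap / area of union."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def evaluate(predictions, ground_truths, threshold=0.2):
    matched, tp = set(), 0
    for pred in predictions:  # each ground truth is counted only once
        best = max(range(len(ground_truths)),
                   key=lambda i: iou(pred, ground_truths[i]),
                   default=None)
        if best is not None and best not in matched and iou(pred, ground_truths[best]) > threshold:
            matched.add(best)
            tp += 1
    fp = len(predictions) - tp      # unmatched or redundant predictions
    fn = len(ground_truths) - tp    # ground truths never detected
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For instance, evaluate([(10, 10, 50, 50)], [(12, 12, 48, 48)]) returns precision = recall = F1 = 1.0, since the single prediction overlaps its ground truth with an IoU of about 0.81, above the 0.2 threshold.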
(a) Predicted location of a single nodule. (b) Predicted locations of two nodules in one region.

Fig. 7: Examples of the output results: predicted nodule bounding boxes with their confidence scores overlaid (0.76 for the single nodule; 0.33 and 0.78 for the two-nodule case).

3.2 Experiment's Setting

The hyper-parameters set for our fine-tuned model are shown in Table 1. Our experiment uses the PyTorch deep learning framework on a Tesla K80 GPU provided by Google's open Colab research platform.

Table 1: Parameters and their values.
Parameter       Value
Batch size      16
Image size      416
Epochs          145
Learning rate   0.01
Optimizer       SGD

3.3 Experiment's Result and Analysis

To check the model's predictions and generalization, a few evaluation parameters must be tracked during training and validation. There are several criteria to keep in mind while evaluating: the box loss and the precision and recall values, with the box loss complemented by the objectness and classification components.
3.3 Experiment Results and Analysis
To check the model's predictions and generalization, a few evaluation metrics must be tracked during training and validation. The main criteria to keep in mind are the box loss and the precision and recall values; the bounding-box regression loss is tracked together with the objectness loss for the single detected category. Fig. 9 shows all of the training and validation curves used in this work, and Fig. 10a shows the F1 curve for the single category we want to detect; the F1 score tends toward 0 as the confidence threshold increases. The training and validation box losses decrease steadily (Fig. 9), suggesting that the model is sound. The mAP (mean Average Precision) reaches 92.27%, as shown below in Fig. 8, indicating accurate detections.

Fig. 8: The important evaluation metrics: (a) mean Average Precision (mAP) evaluation; (b) precision evaluation.
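For reference, the mAP reported above can be obtained from precision-recall pairs by integrating precision over recall. The sketch below uses the common all-point interpolation convention; the exact integration scheme used by the YOLOv5 tooling may differ in detail, so this is an assumed, illustrative definition rather than the code used for our numbers.

def average_precision(recalls, precisions):
    # Area under the precision-recall curve (all-point interpolation).
    # `recalls` must be sorted in increasing order, with matching precisions.
    interp = list(precisions)
    for i in range(len(interp) - 2, -1, -1):
        # Make the precision envelope monotonically non-increasing.
        interp[i] = max(interp[i], interp[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, interp):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# mAP is the mean of the per-class APs; with the single 'Nodules' class
# used here, the mAP equals the AP of that class.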
Fig. 9: Training and validation curves of the feature-extraction model: (a) training objectness (object-presence) loss; (b) validation objectness loss; (c) training bounding-box regression loss; (d) validation bounding-box regression loss.

Precision indicates how accurate the model's forecasts are; it reaches 92.82%, as shown in Fig. 8. Good recall values are obtained as well. The model's performance shows the benefit of hyper-parameter tuning, which helps it learn better from the data samples and generalize well, as can be seen in Fig. 9. Because both precision and recall matter, the precision-recall curve shows the trade-off between the two metrics at different confidence thresholds; this curve helps to select the threshold that maximizes both metrics, as shown in Fig. 10b.
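As a concrete illustration of how the curves in Fig. 10 can be traced out, the sketch below sweeps the confidence threshold over scored detections that have already been labelled TP/FP against the ground truth, computing precision, recall, and F1 at each threshold. The data structures and helper names are our assumptions, not the paper's evaluation code.

def precision_recall_f1(detections, num_ground_truths, threshold):
    # detections: list of (confidence, is_true_positive) pairs.
    kept = [tp for conf, tp in detections if conf >= threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    fn = num_ground_truths - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # Eq. (1)
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # Eq. (2)
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)            # Eq. (3)
    return precision, recall, f1

# Sweeping the threshold from 0 to 1 traces the precision-recall and
# F1-confidence curves; the threshold with the highest F1 balances both metrics.
def best_threshold(detections, num_ground_truths, steps=100):
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates,
               key=lambda t: precision_recall_f1(detections, num_ground_truths, t)[2])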
Fig. 10: The important evaluation metrics: (a) F1-confidence curve for the single detected category (a maximum F1 of 0.91 is reached at a confidence of about 0.437); (b) precision-recall curve on the validation data (0.923 mAP@0.5 for the 'Nodules' class and for all classes).
4 Discussion and Conclusion
This research examined how an AI model can help readers detect visible lung cancer in CT images. Residents identified more visible lung cancers when the AI was used as a second reader. In this research, the dataset was collected from LIDC-IDRI, an image collection of thoracic CT scans with marked-up, annotated lesions. The YOLOv5 model is used for feature extraction and detection of lung nodules in the CT scans. During the training and validation process, a total of 270 CT scan images are used, of which 239 are used for training and 41 for validation. In this study, the model's performance was assessed using accuracy, precision, and recall. The accuracy metric indicates how well the model recognizes both positive and negative instances, while the precision metric measures how many of the predicted positive cases are truly positive. The model's high accuracy, precision, and recall imply that it has a small probability of error. Our findings imply that the AI technique assists less-experienced readers in terms of recall while benefiting more-experienced readers in terms of precision. Previous research has revealed that inexperienced readers are more likely to overlook lung malignancies, particularly lesions with a limited visibility score. The LIDC-IDRI dataset used in this research contains lung nodules, and the purpose of this study was to identify the nodules developing in the lungs of the participants.
It was difficult to find information on lung nodules in the medical literature. Research in the medical field often uses deep learning, and the literature review indicated that a deep learning algorithm could be developed with the support of previous medical imaging research. Using over 270 CT images, we were able to classify and localize nodules with a deep learning algorithm. Using medical image analysis based on deep neural networks, this study found that as much as 92.27% of the cancers could be detected, which makes nodules on radiographs easier to see. In the future, this technology may also help in treating illnesses such as brain tumours and breast cancer.

5 Disclosures
The authors declare that they have no conflict of interest.

6 Acknowledgments
We would like to thank our respected research assistant, Moath Alawaqla, for his distinguished role in data collection.
7 Funding
This work was supported by Jordan University of Science and Technology, Irbid, Jordan.

References
1. Y. Xie, J. Zhang, Y. Xia, et al., "Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT," Information Fusion 42, 102-110 (2018).
2. H. Chougrad, H. Zouaki, and O. Alheyane, "Deep convolutional neural networks for breast cancer screening," Computer Methods and Programs in Biomedicine 157, 19-30 (2018).
3. H. Mohsen, E.-S. A. El-Dahshan, E.-S. M. El-Horbaty, et al., "Classification using deep learning neural networks for brain tumors," Future Computing and Informatics Journal 3(1), 68-71 (2018).
4. A. Sharma and K. K. Paliwal, "A deterministic approach to regularized linear discriminant analysis," Neurocomputing 151, 207-214 (2015).
5. S. Nagpal, S. Arora, S. Dey, et al., "Feature selection using gravitational search algorithm for biomedical data," Procedia Computer Science 115, 258-265 (2017).
6. J. Kuruvilla and K. Gunavathi, "Lung cancer classification using neural networks for CT images," Computer Methods and Programs in Biomedicine 113(1), 202-209 (2014).
7. F. Chabat, G.-Z. Yang, and D. M. Hansell, "Obstructive lung diseases: texture classification for differentiation at CT."
8. Y. Zhu, Y. Tan, Y. Hua, et al., "Feature selection and performance evaluation of support vector machine (SVM)-based classifier for differentiating benign and malignant pulmonary nodules by computed tomography," Journal of Digital Imaging 23(1), 51-65 (2010).
9. S. C. Park, J. Tan, X. Wang, et al., "Computer-aided detection of early interstitial lung diseases using low-dose CT images," Physics in Medicine & Biology 56(4), 1139 (2011).
10. H. M. Orozco, O. O. V. Villegas, L. O. Maynez, et al., "Lung nodule classification in frequency domain using support vector machines," in 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), 870-875, IEEE (2012).
11. H. Shao, L. Cao, and Y. Liu, "A detection approach for solitary pulmonary nodules based on CT images," in Proceedings of 2012 2nd International Conference on Computer Science and Network Technology, 1253-1257, IEEE (2012).
12. A. O. de Carvalho Filho, A. C. Silva, A. C. de Paiva, et al., "Lung-nodule classification based on computed tomography using taxonomic diversity indexes and an SVM," Journal of Signal Processing Systems 87(2), 179-196 (2017).
13. D. Kumar, A. Wong, and D. A. Clausi, "Lung nodule classification using deep features in CT images," in 2015 12th Conference on Computer and Robot Vision, 133-138, IEEE (2015).
14. Z.-H. Zhou, Y. Jiang, Y.-B. Yang, et al., "Lung cancer cell identification based on artificial neural network ensembles," Artificial Intelligence in Medicine 24(1), 25-36 (2002).
15. H. Chen, Y. Xu, Y. Ma, et al., "Neural network ensemble-based computer-aided diagnosis for differentiation of lung nodules on CT images: clinical evaluation," Academic Radiology 17(5), 595-602 (2010).
16. T. Aggarwal, A. Furqan, and K. Kalra, "Feature extraction and LDA based classification of lung nodules in chest CT scan images," in 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 1189-1193, IEEE (2015).
17. T. S. Roy, N. Sirohi, and A. Patle, "Classification of lung image and nodule detection using fuzzy inference system," in International Conference on Computing, Communication & Automation, 1204-1207, IEEE (2015).
18. H. Madero Orozco, O. O. Vergara Villegas, V. G. Cruz Sánchez, et al., "Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine," BioMedical Engineering OnLine 14(1), 1-20 (2015).
19. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934 (2020).
20. M. Kasper-Eulaers, N. Hahn, S. Berger, et al., "Detecting heavy goods vehicles in rest areas in winter conditions using YOLOv5," Algorithms 14(4), 114 (2021).
21. D. Zhou, J. Fang, X. Song, et al., "IoU loss for 2D/3D object detection," in 2019 International Conference on 3D Vision (3DV), 85-94, IEEE (2019).
22. P. Henderson and V. Ferrari, "End-to-end training of object class detectors for mean average precision," in Asian Conference on Computer Vision, 198-213, Springer (2016).
23. Y. Luo, Y. Zhang, X. Sun, et al., "Intelligent solutions in chest abnormality detection based on YOLOv5 and ResNet50," Journal of Healthcare Engineering 2021 (2021).