Br J Radiol. 2021 Apr 8;94(1121):20201263. doi: 10.1259/bjr.20201263

Automated detection of pneumonia cases using deep transfer learning with paediatric chest X-ray images

Mohammad Salehi 1,2, Reza Mohammadi 1,2, Hamed Ghaffari 1, Nahid Sadighi 3, Reza Reiazi 1,2,4
PMCID: PMC8506182  PMID: 33861150

Abstract

Objective:

Pneumonia is a lung infection that causes inflammation of the small air sacs (alveoli) in one or both lungs. Prompt and accurate diagnosis of pneumonia at an early stage is imperative for optimal patient care. Currently, chest X-ray is considered the best imaging modality for diagnosing pneumonia. However, the interpretation of chest X-ray images is challenging. To this end, we aimed to use an automated convolutional neural network-based transfer-learning approach to detect pneumonia in paediatric chest radiographs.

Methods:

Herein, an automated convolutional neural network-based transfer-learning approach using four different pre-trained models (i.e. VGG19, DenseNet121, Xception, and ResNet50) was applied to detect pneumonia in chest X-ray images of children (1–5 years). The performance of the proposed models on the test data set was evaluated using five performance metrics: accuracy, sensitivity/recall, precision, area under the curve, and F1 score.

Results:

All proposed models achieved accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 model provided the highest classification performance for automated pneumonia detection, with 86.8% accuracy, followed by the Xception model with an accuracy of 86.0%. The sensitivity of all proposed models was greater than 91.0%. The Xception and DenseNet121 models achieved the highest classification performance, with F1 scores greater than 89.0%. The areas under the receiver operating characteristic curve for the VGG19, Xception, ResNet50, and DenseNet121 models were 0.78, 0.81, 0.81, and 0.86, respectively.

Conclusion:

Our data showed that the proposed models achieve high accuracy for binary classification. Transfer learning was used to accelerate training of the proposed models and to mitigate the problem of insufficient data. We hope that these models can help radiologists reach a quick diagnosis of pneumonia in radiology departments. Moreover, our proposed models may be useful for detecting other chest-related diseases such as novel coronavirus 2019.

Advances in knowledge:

Herein, we used transfer learning as a machine learning approach to accelerate training of the proposed models and to mitigate the problem of insufficient data. Our proposed models achieved accuracy greater than 83.0% for binary classification.

Introduction

Pneumonia is a lung infection that causes inflammation of the small air sacs (alveoli) in one or both lungs.1 A variety of pathogens, such as bacteria, viruses, and fungi, can cause it.2 As estimated by the World Health Organisation (WHO), pneumonia causes approximately 4 million deaths annually and is therefore one of the leading causes of death in both children and elderly people throughout the world.3 Antibiotics and antiviral drugs are administered to treat bacterial or viral pneumonia. Nevertheless, prompt and accurate diagnosis of pneumonia at an early stage is imperative for optimal patient care.4 Although three radiological imaging techniques can be used for lung disease, namely chest X-ray, CT, and MRI, chest X-ray is the most common imaging modality for detecting pneumonia owing to its low cost and availability.5 It should be noted, however, that the use of chest X-ray for detecting pneumonia can be challenging even for expert radiologists, owing to the similarity between the appearance of pneumonia and that of other lung conditions such as cancer, bleeding, and fluid overload.2,6 Furthermore, the interpretation of chest X-ray images is time-consuming and less accurate than high-resolution CT (HRCT) because the opacities of pneumonia resemble those of other lung abnormalities (e.g. lung cancer and excess fluid).6 Taken together, the current approach to detecting pneumonia using chest X-rays can result in delayed diagnosis and treatment. Therefore, in recent years there has been growing interest in automated diagnosis of pneumonia using computer-aided detection (CAD) systems that can help radiologists overcome the aforementioned issues.

Currently, machine learning is used to automatically classify and recognise many clinical conditions such as brain tumours and breast cancer.7–9 Machine learning is a branch of artificial intelligence (AI) that enables computers to learn patterns from samples without being explicitly programmed.10–12 In other words, machine learning can extract meaningful patterns from data (e.g. images). Deep learning, or deep machine learning, is a subset of machine learning that applies multiple layers; deep learning algorithms use these layers to extract higher-level features from raw input. Convolutional neural networks (CNNs) are deep learning models that have shown great promise in medical image analysis.11,12

Over the last decade, a number of studies have used machine learning and deep neural networks to automatically identify lung diseases from chest X-ray images.13–15 Of note, training a CNN model requires a large volume of labelled data. In addition, training a CNN model has a high computational cost and requires powerful hardware. Transfer learning has therefore been suggested as a machine learning approach that can overcome these problems. Transfer learning applies knowledge learned from a previous source task to a new problem; the idea is to use a pre-trained CNN model to solve a different problem it has never seen before.16,17 Accordingly, we aimed to apply an automated CNN-based transfer-learning approach using four well-known pre-trained models (i.e. VGG19, DenseNet121, Xception, and ResNet50) to detect pneumonia in paediatric chest radiographs.

Methods and materials

Data set

In the present work, the data set of Kermany and colleagues was used for training and evaluation of the proposed models.18 This data set is based on an X-ray radiography database of paediatric patients (1–5 years) at the Guangzhou Women and Children's Medical Center. The data set consists of 5856 chest X-ray images (JPEG) in the anteroposterior view, divided into two categories (Pneumonia/Normal). In total, there were 1583 images in the "Normal" category and 4273 images in the "Pneumonia" category. We organised the data set into three main folders: training, validation, and testing. Table 1 shows the distribution of chest X-ray images in each folder. Examples of the chest X-ray radiographs used in this study are shown in Figure 1.

Table 1.

Class distribution of chest X-ray images into train, validation, and test set before augmentation and balancing

Class Training set Validation set Test set
Normal 1341 8 234
Pneumonia 3875 8 390
Total 5216 16 624

Figure 1.

Samples of chest X-ray images from the data set with corresponding labels.

Data pre-processing and augmentation

Data augmentation techniques were used to improve the classification accuracy of the proposed algorithms. Data augmentation also helps prevent overfitting and increases the model's ability to generalise during training. Table 2 summarises the data augmentation parameters. Different pre-processing techniques (such as noise reduction and contrast enhancement) were used to pre-process the input data (i.e. chest X-ray images). All original input images had pixel values ranging from 0 to 255, so we rescaled the images and normalised the pixel values to the range 0 to 1. This makes the input images usable by a variety of classification algorithms. Before batch normalisation, we used per-channel mean subtraction to centre the data around zero for the three colour channels (i.e. R, G, B); this typically helps the network learn faster owing to the uniform action of gradients across channels. Since the training data were highly imbalanced, we applied the Synthetic Minority Oversampling Technique (SMOTE)19 to overcome this problem. Figure 2 shows the class distribution before and after oversampling. After data augmentation and balancing, 80% of the images were assigned to training and validation and 20% to the test phase.

Table 2.

Setting for the data augmentation

Method Setting
Rescale 1/255
Rotation range 15
Width shift range 0.1
Height shift range 0.1
Shear range 0.05
Zoom range 0.05
Flipping Horizontal/Vertical
Fill mode Nearest
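The settings in Table 2 correspond closely to the arguments of the Keras ImageDataGenerator API. The following sketch shows one way such a pipeline could be configured; the directory path, target image size, and the use of flow_from_directory are illustrative assumptions rather than details reported in the paper.

```python
# Illustrative Keras data-augmentation pipeline using the settings in Table 2.
# Directory name and target size are assumptions, not taken from the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,          # normalise pixel values to [0, 1]
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.05,
    zoom_range=0.05,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest",
)

train_generator = train_datagen.flow_from_directory(
    "chest_xray/train",          # hypothetical path: one sub-folder per class
    target_size=(224, 224),
    batch_size=15,
    class_mode="binary",
)
```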

Figure 2.

Data set class distribution. Left: before oversampling; right: after oversampling.
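As described above, SMOTE was used to oversample the minority ("Normal") class. A minimal sketch of this balancing step is given below, assuming the training images have already been loaded into a NumPy array and that the imbalanced-learn package supplies the SMOTE implementation; the helper function and the flattening step (SMOTE operates on 2-D feature vectors) are illustrative.

```python
# Minimal sketch of SMOTE oversampling for the imbalanced training set.
# Requires the imbalanced-learn package (pip install imbalanced-learn).
import numpy as np
from imblearn.over_sampling import SMOTE

def balance_with_smote(images, labels, seed=42):
    """Oversample the minority class so both classes have equal counts.

    images: array of shape (n_samples, height, width, channels), values in [0, 1]
    labels: array of shape (n_samples,), e.g. 0 = Normal, 1 = Pneumonia
    """
    n, h, w, c = images.shape
    flat = images.reshape(n, h * w * c)                 # SMOTE expects 2-D feature vectors
    flat_res, labels_res = SMOTE(random_state=seed).fit_resample(flat, labels)
    return flat_res.reshape(-1, h, w, c), labels_res    # restore image shape
```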

Pre-trained transfer models

In this study, the data set containing pneumonia chest X-ray radiographs was used. After pre-processing and splitting the data set, we applied image augmentation during the training process. The goal of data augmentation was to reduce the risk of overfitting. The geometric transforms used for data augmentation included rescaling, rotations, shifts, shears, zooms, fill mode, and flips.

We used a CNN-based model to detect pneumonia in paediatric chest X-ray images. In the current work, a CNN-based transfer learning approach using four different pre-trained models, VGG19,20 DenseNet121,21 Xception,22 and ResNet50,23 was applied to classify chest X-ray images as normal or pneumonia (binary classification). We chose these pre-trained CNN architectures because they are well-known, powerful, and widely used deep convolutional networks that have been applied to biomedical image classification. All of these models have been trained and tested on the ImageNet data set. The overall architecture of the proposed CNN models consists of two parts: (1) a feature extractor and (2) a classifier (sigmoid activation function). In other words, the architecture of all proposed models is similar and consists of convolution, pooling, flattening, and fully connected layers. In this study, the classification head contains the following layers: (1) AveragePooling2D with a pool size of 4 × 4, (2) Flatten with a size of 512 units, (3) Dense with 64 units, (4) Dropout with a threshold of 0.5, and (5) a final Dense layer with a sigmoid activation to predict the class. The output of the model is a binary classification.
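The sketch below illustrates how one of the four models (here DenseNet121) could be assembled with the classification head described above. The input size, the frozen ImageNet backbone, and the single-unit sigmoid output are assumptions consistent with the binary cross-entropy training described later, not details confirmed by the paper.

```python
# Sketch of one transfer-learning model (DenseNet121 backbone) with the custom head
# described above. Input shape and frozen backbone are assumptions.
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import AveragePooling2D, Flatten, Dense, Dropout, Input
from tensorflow.keras.models import Model

base = DenseNet121(weights="imagenet", include_top=False,
                   input_tensor=Input(shape=(224, 224, 3)))
base.trainable = False                       # keep the ImageNet feature extractor frozen (common choice)

x = AveragePooling2D(pool_size=(4, 4))(base.output)
x = Flatten()(x)
x = Dense(64, activation="relu")(x)
x = Dropout(0.5)(x)
output = Dense(1, activation="sigmoid")(x)   # binary decision: pneumonia vs normal

model = Model(inputs=base.input, outputs=output)
```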

In addition, we visualised class activation maps to inspect the behaviour of the deep neural networks, using the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm. This technique enables us to understand where the proposed network is "looking" in the chest X-ray radiographs when detecting pneumonia. The output of Grad-CAM is a heatmap visualisation.
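A minimal Grad-CAM sketch in tf.keras is shown below for illustration; the exact implementation used by the authors is not reported, and the choice of last convolutional layer name is an assumption for a DenseNet121 backbone.

```python
# Minimal Grad-CAM sketch (assumed tf.keras implementation).
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Return a heatmap highlighting the regions that drive the model's prediction.

    image: array of shape (1, height, width, 3), already preprocessed.
    """
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, 0]                              # single sigmoid output
    grads = tape.gradient(score, conv_out)               # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                # keep only positive contributions
    cam = cam / (tf.reduce_max(cam) + 1e-8)              # normalise to [0, 1]
    return cam.numpy()

# Example (layer name assumed for DenseNet121):
# heatmap = grad_cam(model, img_batch, "conv5_block16_concat")
```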

Training phase

In the present study, we adopted a transfer learning approach for the evaluation and comparison of the CNN architectures. In practice, a radiologist must discriminate pneumonia chest X-ray images from normal images; hence, a CNN design was proposed for binary classification (Pneumonia/Normal).

All of the proposed networks were trained using the binary cross-entropy loss function and the Adam optimiser with a learning rate of 0.001, a batch size of 15, and 50 epochs. As stated earlier, we applied data augmentation methods to increase training efficiency and prevent overfitting. Python libraries were used to train, validate, and test the different algorithms. The training of the neural networks was carried out on a standard PC with an NVIDIA GeForce GTX GPU (8 GB) and 32 GB of RAM. The holdout method was used to evaluate the binary classification performance of the proposed models.24,25 Figure 3 shows the training accuracy and loss curves for each transfer learning model. The accuracy and loss on the training and validation data sets for each proposed model are summarised in Table 3. As shown in Table 3, the validation accuracy of ResNet50 is surprisingly low compared with the other models and with its own training accuracy; in this case, overfitting probably occurred.
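The training configuration described above can be expressed as follows; the validation generator and the accuracy metric are assumptions, while the optimiser, loss, learning rate, batch size, and number of epochs follow the text.

```python
# Training setup as described above (Adam, learning rate 0.001,
# binary cross-entropy, batch size 15, 50 epochs).
from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

history = model.fit(
    train_generator,                 # batch size of 15 is set on the generator
    validation_data=val_generator,   # hypothetical validation generator
    epochs=50,
)
```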

Figure 3.

Training accuracy (a) and loss (b) curves for the proposed models.

Table 3.

The accuracy and loss for training and validation dataset for the proposed models

Model Training accuracy (%) Validation accuracy (%) Training loss (%) Validation loss (%)
VGG19 99.6 87.5 0.92 35.3
Xception 99.7 93.7 0.66 42.8
ResNet50 93.2 68.7 11.2 67.5
DenseNet121 94.8 87.5 6.7 36.8

Evaluation criteria

After the completion of the training phase, we evaluated and compared the performance of the proposed models on the test data set using five performance metrics: accuracy, sensitivity/recall, precision, area under the curve (AUC), and F1 score. The trapezoidal rule was used to compute the AUC.

\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}
\text{Sensitivity/Recall} = \frac{TP}{TP + FN}
\text{Precision} = \frac{TP}{TP + FP}
\text{F1 score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}

where TP = true positive, FP = false positive, TN = true negative, and FN = false negative.
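For illustration, these metrics can be computed on the test set with scikit-learn as sketched below; the variable names for the test images and ground-truth labels are hypothetical, and sklearn.metrics.auc applies the trapezoidal rule mentioned above.

```python
# Evaluation sketch on the held-out test set; y_true and test_images are assumed
# to hold the ground-truth labels and pre-processed test images.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_curve, auc)

y_prob = model.predict(test_images).ravel()   # sigmoid probabilities
y_pred = (y_prob >= 0.5).astype(int)          # threshold at 0.5

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)    # sensitivity / recall
f1 = f1_score(y_true, y_pred)

fpr, tpr, _ = roc_curve(y_true, y_prob)
auc_value = auc(fpr, tpr)                     # trapezoidal-rule AUC
```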

Results and discussion

Table 4 compares the performance metrics of the different models on the test data set. All proposed models achieved accuracy greater than 83.0% for binary classification. As shown in Table 4, the pre-trained DenseNet121 model provided the highest classification performance for automated pneumonia detection, with 86.8% accuracy, followed by the Xception model with an accuracy of 86.0%. The sensitivity of all proposed models was greater than 91.0%. This high sensitivity (recall) corresponds to a low number of FN cases, an encouraging result because it means the proposed models miss as few pneumonia cases as possible. The Xception and DenseNet121 models achieved the highest classification performance, with F1 scores greater than 89.0% (Table 4).

Table 4.

Performance metrics for the proposed models

Model AUC Accuracy (%) Precision (%) Sensitivity (%) F1 score (%)
VGG19 0.78 83.6 80.0 98.5 88.2
Xception 0.81 86.0 83.9 96.1 89.6
ResNet50 0.81 84.8 85.0 91.8 88.3
DenseNet121 0.86 86.8 87.0 92.8 89.8

AUC, area under the curve.

Figure 4 shows the confusion matrix of each proposed model for comparison. The confusion matrix reports the four primary parameters: TP, FP, TN, and FN. The receiver operating characteristic (ROC) curves of each transfer learning model on the test set are given in Figure 5. As shown in Table 4, the AUCs of the ROC curves for the VGG19, Xception, ResNet50, and DenseNet121 models are 0.78, 0.81, 0.81, and 0.86, respectively. The AUC of the ROC curve is one of the most important evaluation metrics for assessing the performance of classification models.
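A sketch of how confusion matrices (Figure 4) and ROC curves (Figure 5) can be produced with scikit-learn and matplotlib follows; it assumes the predictions computed in the evaluation step above and is illustrative rather than the authors' exact plotting code.

```python
# Confusion matrix and ROC curve for one model, using y_true, y_pred, y_prob
# from the evaluation sketch above.
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, roc_curve

cm = confusion_matrix(y_true, y_pred)                         # [[TN, FP], [FN, TP]]
ConfusionMatrixDisplay(cm, display_labels=["Normal", "Pneumonia"]).plot()

fpr, tpr, _ = roc_curve(y_true, y_prob)
plt.figure()
plt.plot(fpr, tpr, label="DenseNet121")
plt.plot([0, 1], [0, 1], linestyle="--", label="Chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```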

Figure 4.

Confusion matrix for each transfer learning model, (a) VGG19, (b) Xception, (c) ResNet50, and (d) DenseNet121.

Figure 5.

ROC curve for each proposed model, (a) VGG19, (b) Xception, (c) ResNet50, and (d) DenseNet121.

Figure 6 displays examples of Grad-CAM activation maps. This technique provides useful information about the areas of the chest X-ray images that the proposed models rely on to detect pneumonia. As shown in Figure 6, the DenseNet121 model focuses on the lungs more effectively than the other three models. An interesting result of our study is that, in all chest X-rays displayed in Figure 6, the activation maps are concentrated in regions inside the chest; the proposed models do not attend to areas outside the chest. In other words, the models effectively consider the lungs when discriminating pneumonia. The heatmaps can be used to assess the detection performance of the neural network models for pneumonia lesions; the heatmaps in Figure 6 demonstrate where the proposed models detect areas of inflammation in the lungs when diagnosing pneumonia.

Figure 6.

X-ray image and the corresponding heatmaps, (a) VGG19, (b) Xception, (c) ResNet50, and (d) DenseNet121.

We compared the performance of the proposed models with that of radiologists. We asked two radiologists, with 5 and 7 years of experience, to assess the test set of 624 chest X-ray images. Table 5 summarises their performance. As shown in Table 5, the performance of both radiologists was similar. These results demonstrate that it is difficult for radiologists to accurately detect pneumonia with the human eye alone, and they further highlight the advantages of our proposed algorithms.

Table 5.

Performance metrics for each radiologist

Performance metrics Radiologist 1 Radiologist 2
Accuracy (%) 58.9 61.1
Precision (%) 62.5 63.7
Sensitivity (%) 85.6 87.7
F1-score (%) 72.2 73.8

We compared our results with those of recently published studies on the same data set (i.e. the data set compiled by Kermany et al18). In one study, the pre-trained Inception V3 CNN model was evaluated for detecting pneumonia in chest X-ray images and achieved a sensitivity of 93.2% and an accuracy of 92.8% for binary classification.18 Another research group used a deep transfer learning model, VGG16, to automatically detect pneumonia from chest X-ray images; VGG16 achieved accuracy, precision, and sensitivity of 86.6%, 83.0%, and 99.0%, respectively.26 In another study, Ayan and Ünver compared the performance of two CNN networks (Xception and VGG16) for the automated diagnosis of pneumonia, using the 5856 chest X-ray images collected by Kermany et al.18 Their study indicated that the VGG16 network achieved an accuracy of 87.0%, outperforming the Xception network with 82.0% accuracy, whereas Xception obtained a higher sensitivity of 85.0% compared with 82.0% for VGG16.27 Gu et al trained a deep CNN model for the classification of pneumonia and achieved 80.4% accuracy for binary classification (Pneumonia/Normal).28 In contrast with the aforementioned studies, we provide a full set of evaluation metrics, including accuracy, sensitivity, precision, F1 score, and AUC.

It is worth mentioning that a robust model should be able to detect lung pneumonia in a normal patient population in the presence or absence of other respiratory conditions (e.g. bacterial pneumonia, viral pneumonia, SARS, novel coronavirus 2019, etc.). Moreover, the performance of a robust method should not fluctuate markedly when other independent data sets are used. In the present study, different geometric transformations (as shown in Table 2) were applied for data augmentation to increase the amount of data, and we then randomly split the training and test data sets. As a consequence, images from the same patients are most likely present in both the training and test sets, which inflates the apparent accuracy of the models for automated detection of pneumonia; in other words, the accuracy reported here is likely to be an overestimate. Hence, an independent data set representing other lung conditions is necessary to evaluate the algorithms' performance. The most important limitation of our study is therefore the limited data: we used a relatively small training data set, and to create a more stable model it is necessary to use more data sets and to test the model with data from many different centres. Here, we evaluated the proposed models on a public data set, and our data indicate that the pre-trained CNN models can achieve high accuracy on it. Of note, there may be differences between the characteristics of public data sets and clinical data; therefore, the usefulness of our proposed models should be investigated using clinical data. Our algorithms can detect pneumonia from chest radiographs at a level exceeding that of practising radiologists. It should be noted, however, that lateral chest X-ray images are required for up to 15% of accurate diagnoses of pneumonia. Furthermore, we used chest radiographs of children aged between 1 and 5 years, so it is necessary to evaluate the performance of the models in other age populations. Judged against these criteria, our proposed models may not be robust, because only normal and pneumonia chest X-ray images were used in the training data, as outlined in Table 1. Currently, our proposed models can act as a support for radiologists; that is, they have a supporting role as a triage tool for a preliminary diagnosis of pneumonia. Once the aforementioned limitations are overcome, such models could be used in routine clinical practice without a second check by a radiologist.

From another point of view, large-scale screening for pneumonia increases the workload of radiologists, which may lead to more misdiagnoses. Table 4 shows that the pre-trained CNN models have high diagnostic ability. These results suggest that our proposed models can be a promising supplementary diagnostic method and can assist radiologists in making clinical decisions quickly and accurately. In addition, these models can increase the accuracy of diagnosis when the radiologist is inexperienced.

Deep learning for automated detection of pneumonia using chest X-ray images is an important issue, especially for large-scale screening.29 There are several major barriers to large-scale screening for pneumonia using chest radiographs. For example, the interpretation of chest X-ray images is very difficult, so highly trained radiologists are required to review the radiographs, which increases their workload.2,6,29 Furthermore, many different conditions can change the appearance of a chest radiograph, such as lung cancer, pulmonary oedema, and bleeding, which complicates the detection of pneumonia.2,6,29 Patient positioning and inspiration depth can also alter the quality of the radiograph. Owing to these issues, interpreting chest radiographs of pneumonia by the human eye alone is very challenging. Hence, deep learning algorithms can be integrated with any standard X-ray reporting system for the analysis, interpretation, and tracking of large numbers of chest X-ray images, especially for large-scale screening in places where quick results are required, e.g. airports, supermarkets, and hospitals. CNN models can provide rapid screening and enable remote diagnosis of pneumonia with few personnel.

Conclusion

In this study, we applied four different CNN-based deep transfer learning algorithms (i.e. VGG19, Xception, ResNet50, and DenseNet121) to automatically classify chest X-ray radiographs into pneumonia and normal cases (binary classification). Our data showed that the proposed models achieve accuracy greater than 83.0% for binary classification. The pre-trained DenseNet121 and Xception models provided the highest classification performance for automated pneumonia detection, with accuracy greater than 86.0%. Transfer learning was used to accelerate training of the proposed models and to mitigate the problem of insufficient data. We hope that these models can help radiologists reach a quick diagnosis of pneumonia in radiology departments, and that they can also be useful in airport screening of patients with pneumonia. In addition, these models may be useful for the diagnosis of other chest-related diseases such as novel coronavirus 2019.

Footnotes

Acknowledgements: This study was supported by the Research Chancellor of Iran University of Medical Sciences, Tehran, Iran.

Research involving human participants and/or animals: This article does not contain any studies with human participants or animals performed by any of the authors.

Funding: This study received funding from Iran University of Medical Sciences, Tehran, Iran (grant number 97/4/75/13693).

Informed consent: Consent is not required for this type of study.

Contributor Information

Mohammad Salehi, Email: m.salehi7270@yahoo.com.

Reza Mohammadi, Email: reza021mohammadi@gmail.com.

Hamed Ghaffari, Email: hamedghaffari@yahoo.com.

Nahid Sadighi, Email: nsedighi@sina.tums.ac.ir.

Reza Reiazi, Email: Reiazi.r@iums.ac.ir.

REFERENCES

1. McLuckie A. Respiratory disease and its management. New York, NY: Springer; 2009. p. 51.
2. Mittal A, Kumar D, Mittal M, Saba T, Abunadi I, Rehman A, et al. Detecting pneumonia using convolutions and dynamic capsule routing for chest X-ray images. Sensors 2020; 20: 1068. doi: 10.3390/s20041068
3. Ruuskanen O, Lahti E, Jennings LC, Murdoch DR. Viral pneumonia. Lancet 2011; 377: 1264–75. doi: 10.1016/S0140-6736(10)61459-6
4. Aydoğdu M, Ozyilmaz E, Aksoy H, Gürsel G, Ekim N. Mortality prediction in community-acquired pneumonia requiring mechanical ventilation; values of pneumonia and intensive care unit severity scores. Tuberk Toraks 2010; 58: 25–34.
5. Li Y, Zhang Z, Dai C, Dong Q, Badrigilan S. Accuracy of deep learning for automated detection of pneumonia using chest X-ray images: a systematic review and meta-analysis. Comput Biol Med 2020; 123: 103898. doi: 10.1016/j.compbiomed.2020.103898
6. Liang G, Zheng L. A transfer learning method with deep residual network for pediatric pneumonia diagnosis. Comput Methods Programs Biomed 2020; 187: 104964. doi: 10.1016/j.cmpb.2019.06.023
7. Hemanth G, Janardhan M, Sujihelen L. Design and implementing brain tumor detection using machine learning approach. In: 2019 3rd International Conference on Trends in Electronics and Informatics; 2019. pp. 1289–94.
8. Kallianos K, Mongan J, Antani S, Henry T, Taylor A, Abuya J, et al. How far have we come? Artificial intelligence for chest radiograph interpretation. Clin Radiol 2019; 74: 338–45. doi: 10.1016/j.crad.2018.12.015
9. Chowdhury MEH, Khandakar A, Alzoubi K, Mansoor S, Tahir AM, Reaz MBI, et al. Real-time smart-digital stethoscope system for heart diseases monitoring. Sensors 2019; 19: 2781. doi: 10.3390/s19122781
10. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics 2017; 37: 505–15. doi: 10.1148/rg.2017160130
11. Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, et al. Current applications and future impact of machine learning in radiology. Radiology 2018; 288: 318–28. doi: 10.1148/radiol.2018171820
12. Do S, Song KD, Chung JW. Basics of deep learning: a radiologist's guide to understanding published radiology articles on deep learning. Korean J Radiol 2020; 21: 33–41. doi: 10.3348/kjr.2019.0312
13. Rajpurkar P, Irvin J, Zhu K, Yang B, Mehta H, Duan T, et al. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv 2017; arXiv:1711.05225.
14. Abiyev RH, Ma'aitah MKS. Deep convolutional neural networks for chest diseases detection. J Healthc Eng 2018; 2018: 1–11. doi: 10.1155/2018/4168538
15. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017; 284: 574–82. doi: 10.1148/radiol.2017162326
16. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010; 22: 1345–59. doi: 10.1109/TKDE.2009.191
17. Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. Journal of Big Data 2016; 3: 9. doi: 10.1186/s40537-016-0043-6
18. Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018; 172: 1122–31. doi: 10.1016/j.cell.2018.02.010
19. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 2002; 16: 321–57. doi: 10.1613/jair.953
20. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv 2014; arXiv:1409.1556.
21. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017); 2017. pp. 2261–9.
22. Chollet F. Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 1251–8.
23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. pp. 770–8.
24. Raschka S. Model evaluation, model selection, and algorithm selection in machine learning. arXiv 2018; arXiv:1811.12808.
25. Mohammadi R, Salehi M, Ghaffari H, et al. Transfer learning-based automatic detection of coronavirus disease 2019 (COVID-19) from chest X-ray images. J Biomed Phys Eng 2020; 10: 559–68. doi: 10.31661/jbpe.v0i0.2008-1153
26. Nath M, Choudhury C. Automatic detection of pneumonia from chest X-rays using deep learning. In: Bhattacharjee A, Borgohain S, Soni B, Verma G, Gao XZ, eds. Machine Learning, Image Processing, Network Security and Data Sciences (MIND 2020). Communications in Computer and Information Science, vol 1240. Singapore: Springer; 2020. doi: 10.1007/978-981-15-6315-7_14
27. Ayan E, Ünver HM. Diagnosis of pneumonia from chest X-ray images using deep learning. In: 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science; 2019. pp. 1–5.
28. Gu X, Pan L, Liang H, Yang R. Classification of bacterial and viral childhood pneumonia using deep learning in chest radiography. In: Proceedings of the 3rd International Conference on Multimedia and Image Processing. Guiyang, China: Association for Computing Machinery; 2018. pp. 88–93.
29. Adly AS, Adly AS, Adly MS. Approaches based on artificial intelligence and the Internet of intelligent things to prevent the spread of COVID-19: scoping review. J Med Internet Res 2020; 22: e19104. doi: 10.2196/19104
