Journal of Infectious Diseases & Therapy
ISSN: 2332-0877
Open Access

Research Article • J Infect Dis Ther

Advanced Medical Image Recognition and Diagnosis of Respiratory System Viruses

Mazhar B Tayel, Adel El Fahaar and AM Fahmy*
Electrical Engineering Department, Alexandria University, Alexandria, Egypt
*Corresponding Author: AM Fahmy, Electrical Engineering Department, Alexandria University, Alexandria, Egypt, Email: eng.ahmed.fahmy07@gmail.com

Received: 20-Jun-2022 / Manuscript No. JIDT-22-67054 / Editor assigned: 22-Jun-2022 / PreQC No. JIDT-22-67054(PQ) / Reviewed: 06-Jul-2022 / QC No. JIDT-22-67054 / Revised: 13-Jul-2022 / Manuscript No. JIDT-22-67054(R) / Published Date: 20-Jul-2022

Abstract

Diagnosing respiratory infections is a demanding and time-consuming task that requires clinicians to examine patients' clinical images constantly. There is therefore a need to develop and improve respiratory case prediction models as quickly as possible to control the spread of disease. Deep learning makes it possible to detect a virus such as COVID-19 effectively using classification tools such as the Convolutional Neural Network (CNN), while Mel Frequency Cepstral Coefficients (MFCC) provide a common and effective feature extraction tool. The proposed MFCC-CNN learning model is used to speed up the prediction process and assist medical professionals. MFCC is used to extract image features related to the presence or absence of COVID-19, and prediction is performed by a convolutional neural network. This makes a time-consuming process easier and faster, with more accurate results that help reduce the spread of the virus and save lives. Experimental results show that using a CT image converted to a Mel-frequency cepstral spectrogram as the input to a CNN yields better results, with 99.08% validation accuracy across the COVID and non-COVID labels. Thus, the model can likely be used to detect the presence of COVID-19 in CT images. The work provides evidence that high accuracy can be achieved with a trusted dataset, which can have a significant impact on this area.

Keywords: Biomedical imaging; COVID-19; Computed tomography; Feature extraction; MFCC; Image classification; Convolutional neural network

Introduction

Respiratory viruses are among the most common causes of death in humans. COVID-19 is currently a worldwide communicable disease caused by the virus known as SARS-CoV-2, first identified in Wuhan, China in 2019 and later in several parts of the world, as of the third of January 2020 [1]. Symptoms of the virus include coughing, shortness of breath, abdominal pain and fever, and no antibiotics, antibodies or definitive treatment for COVID-19 infection are available [2]. An important and critical step in fighting respiratory system viruses is effective screening of diseased patients, so that positive patients can be treated and isolated. A chest image based detection scheme may have several benefits over traditional approaches. The success of Artificial Intelligence (AI) based techniques in automated diagnosis in the medical field, together with the rapid increase in COVID-19 cases, has created a demand for AI based automated diagnosis and detection systems.

Today, Artificial Intelligence (AI) is widely used worldwide and can provide a method for rapid detection of the virus with high accuracy, as well as differentiate the risk and severity of the coronavirus in patients using advanced Deep Learning (DL) [3]. In this study, the achievement of high classification accuracy was assessed on COVID-19 CT scan images. Each CT image is transformed into an MFCC spectrum for feature extraction, and the extracted features are passed to a CNN designed in Python (TensorFlow environment) to predict the presence of COVID-19.

Chest CT scan image

The role of imaging in COVID-19 analysis is growing, both for treatment and for diagnosis. CT images are two-dimensional images representing three-dimensional objects. Images are created by converting electrical power (moving electrons) into X-ray photons, transmitting the photons through the object, and then converting the measured photons back into electrons. CT can also detect complex bone fractures and tumors; if a patient has cancer, emphysema, heart disease or liver and mass tumors, CT scans can mark or help specialists diagnose any changes [4]. The infection triggers a broad spectrum of CT imaging findings, most typically peripheral lung consolidations and ground-glass opacities. The sensitivity of chest CT for diagnosing COVID-19 is found to be appreciably high, with good resolution, and findings may appear before a positive viral laboratory test. Therefore, hospitals with large numbers of admissions use CT for the rapid triage of patients with possible COVID-19 disease in epidemic territories, where the primary healthcare system is overwhelmed. Chest CT plays a vital role in the assessment of COVID-19 patients with severe and compound respiratory symptoms. Based on CT scans, it is possible to see how badly the lungs are compromised and how the illness of the individual progresses, which is helpful in making medical decisions. There is also growing awareness of the unexpected incidence of lung defects found on abdominal CT scans performed for internal organ disorders or in patients without respiratory symptoms [5]. During this pandemic, AI analysis could become the most important factor by reducing the strain on clinicians: AI can analyze the images in less than ten seconds. This work therefore introduces a new image processing concept that combines melody MFCC spectrum features with a CNN as a worthy tool for respiratory infection detection. Figure 1 shows a chest CT image.


Figure 1: Chest CT-Scan Image.

Methodology

The CT scan image is a versatile tool for good discrimination of respiratory disease, and capturing the respiratory data in a form and size that permits effective modeling is efficient. Many feature extraction techniques are used in signal recognition systems, such as linear prediction coefficients. This paper introduces a new concept that combines melody features with advanced DL techniques to extract new combined features, deriving a proposed MFCC-CNN model that further speeds up the virus prediction process. The model is based on Mel-Frequency Cepstral Coefficients (MFCCs) for extracting composite features from a wave transformation of the image, which can help achieve a better recognition rate when passed to a CNN. Passing the transformed image to the CNN then yields a prediction of whether or not a virus is present in the patient.

Extraction of ct image melody spectrum features

Feature extraction can be defined as the process of reducing the amount of data present in a given image sample while retaining the discriminatory image information. The concept of melody feature extraction aims to identify virus CT image features in order to generate sufficient information on the effects of the virus and to capture this information in a form and size that allows effective modeling. Various decoding methods are used in signal recognition systems, such as Linear Predictive Coefficients (LPC), Linear Predictive Cepstral Coefficients (LPCC), Perceptual Linear Prediction (PLP), and Mel Frequency Cepstral Coefficients (MFCC); the latter is currently very popular and is the method discussed in this article. The MFCCs of a melody spectrum are used to represent the signal distribution and are often used as basic elements in speech recognition systems. Their features are derived from cepstral analysis and warped according to the Mel scale, which emphasizes components of lower frequency over those of higher frequency.

The conversion steps from image to MFCC coefficients (Figure 2) are as follows (an illustrative code sketch is given after the list):


Figure 2: Flow chart of the image to Mel Frequency Cepstral Coefficients (MFCC) conversion stages.

• Slicing the original waveform into predetermined window size.

• Performing Fast Fourier Transformation (FFT) on the sliced signal.

• Mapping the log amplitudes of the spectrum onto the Mel scale, using triangular overlapping filters.

• Performing Discrete Cosine Transformation (DCT) on the Mel log amplitudes.

• The resulting amplitudes of the spectrum are the MFCCs.
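As an illustration of these steps, the following is a minimal sketch only. It assumes the CT image is read in grayscale and flattened row by row into a one-dimensional signal, that a nominal sampling rate of 22050 Hz is used, and that the librosa library performs the FFT, Mel filterbank, log and DCT chain; none of these specifics (including the image_to_mfcc helper name) are stated in the paper.

```python
# Minimal sketch of the image-to-MFCC conversion steps listed above.
# Assumptions: the CT image is flattened into a 1-D signal, a nominal
# sampling rate is chosen, and librosa performs the FFT / Mel-filterbank /
# log / DCT chain. These specifics are not stated in the paper.
import cv2
import numpy as np
import librosa

def image_to_mfcc(image_path, sr=22050, n_mfcc=40):
    # Read the CT image in grayscale and resize (128 x 128, as used later for the CNN input)
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (128, 128))

    # Treat the pixel rows as a 1-D waveform, roughly normalised to [-1, 1]
    signal = img.astype(np.float32).flatten()
    signal = (signal - signal.mean()) / (np.abs(signal).max() + 1e-9)

    # Slice into windows, apply the FFT, map the log spectrum onto the Mel
    # scale with triangular filters, then apply the DCT -- librosa performs
    # this whole chain inside librosa.feature.mfcc.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=256)
    return mfcc  # shape: (n_mfcc, number_of_frames)
```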

The Mel is a unit of measure of the perceived pitch or frequency of a tone. The Mel scale is thus a mapping between the real frequency scale (Hz) and the perceived frequency scale (Mels). The mapping is virtually linear below 1 kilohertz. The formula used to convert f Hertz into m Mels is given in equation (1). The output of MFCC extraction of a COVID-19 image is shown in Figure 3.


Figure 3: The Mel Frequency Cepstral Coefficients (MFCC) Output of COVID-19 image.

m = 2595 \log_{10}\left(1 + \frac{f}{700}\right) \quad (1)
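As a quick check of equation (1) (assuming the standard 2595·log10 form of the Mel mapping), the conversion gives approximately 1000 Mel at 1 kHz, which is consistent with the near-linear behaviour below 1 kHz described above:

```python
import math

def hz_to_mel(f):
    # Equation (1): m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)

print(hz_to_mel(1000.0))  # ~1000 Mel at 1 kHz
print(hz_to_mel(500.0))   # ~607 Mel; the mapping is roughly linear below 1 kHz
```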

MFCC-CNN architectures are effective for classifying image data. The features extracted by the MFCC are represented as a picture in Figure 4. This is done for each image used in the training, validation and testing samples. A block diagram of the proposed system is shown in Figure 5.


Figure 4: The Mel Frequency Cepstral Coefficients (MFCC) components of COVID-19 image.


Figure 5: The Proposed system block diagram.

A deep convolutional neural network is used, with multiple hidden layers and a binary dense output layer for label classification. The layers are as follows (an illustrative code sketch is given after the list):

First convolutional layer: Applies 32 (5 × 5) filters (extracting 5 × 5-pixel sub-regions) after the image is resized to 128 × 128 and its orientation adjusted.

First pooling layer: Performs max pooling with a (2 × 2) filter for down-sampling and a stride of 2 (which specifies that pooled regions do not overlap).

Second convolutional layer: Applies 36 (3 × 3) filters with a ReLU activation function.

Second pooling layer: Again performs max pooling with a (2 × 2) filter and a stride of 2.

Dropout layer: A dropout regularization rate of 0.5 (each element has a probability of 0.5 of being dropped during training).

Dense layer: One neuron for each target class.
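Following the layer descriptions above, a minimal TensorFlow/Keras sketch of such a network is given below. The filter counts, kernel sizes, pooling, dropout rate and 128 × 128 input size follow the text; the ReLU activation on the first convolutional layer, the single-channel input, the Flatten layer, the softmax output, and the Adam optimizer with categorical cross-entropy loss are assumptions, since they are not specified explicitly.

```python
# Hedged sketch of the described CNN (TensorFlow/Keras).
# Layer sizes follow the text; unspecified details (first-layer activation,
# input channels, Flatten, output activation, optimizer, loss) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1), num_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (5, 5), activation="relu", padding="same"),  # 32 filters of 5 x 5
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),              # non-overlapping pooling
        layers.Conv2D(36, (3, 3), activation="relu", padding="same"),  # 36 filters of 3 x 3, ReLU
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Dropout(0.5),                                           # dropout rate of 0.5
        layers.Flatten(),                                              # assumed before the dense layer
        layers.Dense(num_classes, activation="softmax"),               # one neuron per target class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```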

Results and Discussion

The dataset used in this research consists of 1832 CT images collected from various trusted sources, one of them being Kaggle.com [6]. It consists of 916 COVID images and 916 non-COVID images, and MFCC feature extraction is applied to each image in the dataset. The resulting MFCC image samples are divided into 668 COVID-19 training images and 668 non-COVID training images, 248 validation samples (124 COVID-19 images and 124 non-COVID images) and 248 test samples (124 COVID-19 images and 124 non-COVID images), with a batch size of 32, 50 epochs and 41 steps per epoch (calculated by floor-dividing the number of training samples by the batch size) [7-10].
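The steps-per-epoch figure follows directly from the reported split; a short sketch of the arithmetic is shown below (the numbers are taken from the text, only the variable names are illustrative):

```python
# Dataset split and steps-per-epoch, reproducing the numbers reported above.
total_images = 916 + 916            # = 1832 CT images (COVID + non-COVID)
train_samples = 668 + 668           # = 1336 training MFCC images
val_samples = 124 + 124             # = 248 validation images
test_samples = 124 + 124            # = 248 test images
batch_size = 32
epochs = 50

steps_per_epoch = train_samples // batch_size  # floor(1336 / 32) = 41
print(steps_per_epoch)                          # -> 41
```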

The confusion-matrix performance metrics taken into consideration in each epoch were:

True Positives (TP): The number of cases correctly predicted by the classifier as having COVID-19 and actually infected by COVID-19.

False Positives (FP): The number of patients incorrectly predicted by the classifier as infected by COVID-19 but actually healthy.

True Negatives (TN): The number of patients correctly classified as healthy who actually do not have COVID-19.

False Negatives (FN): The number of patients misclassified as healthy but actually infected by COVID-19.

Accuracy: The ratio of correctly classified patients (TP+TN) to the total number of patients (TP+FP+TN+FN).

Precision: The ratio of correctly classified COVID-19 cases (TP) to the total number of patients predicted by the classifier to have the disease (TP+FP).

Recall: The ratio of correctly classified COVID-19 patients (TP) to the total number of patients who actually have COVID-19 (TP+FN).

Area Under the Curve (AUC): A measure of the ability of the classifier to distinguish between the classes.

This CNN modeling and training produced an accuracy of 99.08% using an early-stopping callback at the best epoch (the 31st epoch) and 98.93% accuracy for full-epoch training, as shown in Figures 6 and 7 respectively. The CNN metrics evaluated to a true positive value of 642 images, a false positive value of 6 images, a true negative value of 650 images and a false negative value of 6 images, with a precision of 99%, a recall of 99.07% and 100% sensitivity.
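These percentages can be reproduced directly from the reported confusion-matrix counts (TP = 642, FP = 6, TN = 650, FN = 6); a short check using the definitions above is:

```python
# Reproducing accuracy, precision and recall from the reported counts.
TP, FP, TN, FN = 642, 6, 650, 6

accuracy = (TP + TN) / (TP + FP + TN + FN)  # 1292 / 1304 ~ 0.9908 -> 99.08%
precision = TP / (TP + FP)                  # 642 / 648   ~ 0.9907 -> ~99%
recall = TP / (TP + FN)                     # 642 / 648   ~ 0.9907 -> 99.07%

print(f"accuracy={accuracy:.4f}, precision={precision:.4f}, recall={recall:.4f}")
```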


Figure 6: Convolutional Neural Network (CNN) modeling Summary.


Figure 7: Resultant accuracy using the early-stopping callback.

Both the COVID and non-COVID labels for the MFCC images in the validation dataset were correctly classified. The prediction results for a COVID-19 and a non-COVID MFCC image case are shown in Figures 8 and 9 respectively.


Figure 8: COVID-19 and Non-COVID Mel Frequency Cepstral Coefficients (MFCC) image cases classification report.


Figure 9: Training accuracy against validation accuracy.

The plot of training loss against validation loss for the viral comparisons using MFCC image cases is shown in Figure 10.


Figure 10: Training loss vs. validation loss for the viral comparisons.

The model evaluation of the proposed system is shown in Figure 11, obtaining an accuracy of 99.44%.


Figure 11: Model evaluation of Convolutional Neural Network (CNN).

The confusion matrix of the proposed system model is shown in Figure 12.


Figure 12: Confusion Matrix of Convolutional Neural Network (CNN).

Conclusion

This research presents a new analysis technique for COVID-19 prediction based on converting a COVID-19 virus CT image into a signal and then extracting the MFCC features of this modeled signal in the form of a displayed colored image. The deep learning technique was applied to CT clinical images of respiratory viruses such as COVID-19, and the knowledge gained by the model trained for detecting and identifying COVID-19 is very good. This makes prediction easier, as the existing model can be reused for COVID-19 prediction. It is difficult to detect abnormal features in CT images because of the noise interference from lesions and tissues; for this reason, Mel Frequency Cepstral Coefficient (MFCC) feature extraction is performed, which focuses only on the area of interest for detecting the COVID-19 virus in the CT image. The classifier used in this research demonstrated a high accuracy of 98.93% for the full epochs and 99.08% for the best epoch, marginally outperforming the good, acceptable results of other studies. In the field of biomedical engineering, COVID-19 virus type detection is an essential and promising technology. Considerable work has been reported on COVID-19 identification and verification, underlining the importance of COVID-19 image features, although COVID-19 recognition based on extracting features is tedious work and requires high computational complexity. The obtained accuracy is better than that obtained with Logistic Regression, Random Forest, SVM, CNN (applied to COVID images) and CNN (applied to MFCC of COVID coughs). These results are highly encouraging and provide further opportunities for research by the academic community on this important topic.

Future work

As future work, the direction for this research would be to use this methodology to diagnose the severity of illness and to differentiate between possible diagnoses and similar diseases, taking into consideration applying the proposed method to various biomedical cases such as viral pneumonia, Alzheimer's disease, breast cancer, etc.

Declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable

Availability of data and materials

The dataset used will be provided by the authors upon reasonable request.

Competing interests

The authors declare that they have no competing interests.

Funding

The authors received no funding.

Authors’ contributions

All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version of the manuscript.

References

Citation: Tayel MB, Fahaar AE, Fahmy AM. (2022) Advanced Medical Image Recognition and Diagnosis of Respiratory System Viruses. J Infect Dis Ther S4:002.

Copyright: © 2022 Tayel MB, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
