Thyroid nodules (TNs) are discrete abnormalities within the thyroid gland that are radiologically distinct from the surrounding thyroid tissue. Ultrasound is an accurate and efficient way to diagnose thyroid nodules. Recently, several AI methods have been proposed to improve the detection of thyroid nodules in ultrasound images with good performance. However, in some cases related to the type or size of the dataset, machine learning or transfer learning methods alone cannot achieve high accuracy and high specificity. Adding feature selection (FS) to the deep learning method therefore improves the results by reducing the number of features and the time needed to train on the dataset. This study proposes two deep-learning models for classifying thyroid nodule US images into two categories: benign and malignant. ResNet50 was the first model, used to extract deep features from US images. The second model integrates ResNet50 and principal component analysis (PCA) for feature selection, aiming to reduce dataset dimensionality while retaining as much data variance as possible before classification. The proposed models were built on a freely available dataset of 800 images, 400 benign and 400 malignant. The suggested system was assessed based on accuracy, precision, recall, and F1 score. The classification accuracy for ResNet50 was 85%, while that of ResNet50-PCA was 89.16%. The combination of deep learning and FS techniques in this research produces a promising diagnostic framework that can potentially increase the efficiency and accuracy of thyroid cancer detection, especially in local healthcare centers.
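A minimal sketch of how the ResNet50 feature extraction and PCA step described in this abstract could be wired together, assuming 224x224 RGB ultrasound images; the placeholder arrays, the 95% variance threshold, and the SVM classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: ResNet50 as a fixed deep-feature extractor, PCA for dimensionality
# reduction, then a conventional classifier. Placeholder data stands in for
# the 800-image benign/malignant ultrasound dataset.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_deep_features(images):
    """Run ResNet50 (ImageNet weights, no top, global average pooling)."""
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

X = np.random.rand(32, 224, 224, 3) * 255      # placeholder ultrasound images
y = np.random.randint(0, 2, 32)                # 0 = benign, 1 = malignant

features = extract_deep_features(X)            # (N, 2048) deep features
X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.2, random_state=0)

pca = PCA(n_components=0.95)                   # keep 95% of the data variance
clf = SVC(kernel="rbf").fit(pca.fit_transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```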
Deep learning modeling can support the detection of Coronavirus Disease 2019 (COVID-19), a critical task these days for making treatment decisions according to the diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging techniques have demonstrated impressive performance, especially in detection, classification, and segmentation problems. These innovations enable physicians to see the human body with high accuracy, which has increased diagnostic accuracy and allowed non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; we use computed tomography (CT) because it is commonly used. For detection, we use a deep learning model based on a convolutional neural network (CNN). The dataset used contains 544 CT slices, which is not sufficient for very high accuracy but is acceptable given the few datasets available at present. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and has been compared with other models to demonstrate its superiority.
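A compact sketch of the kind of binary CNN classifier the abstract describes for CT slices; the input size, layer counts, and training settings are assumptions, since the paper's exact architecture is not given here.

```python
# Sketch: small binary CNN for COVID-19 vs. non-COVID CT slices.
from tensorflow.keras import layers, models

def build_covid_cnn(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # COVID-19 vs. non-COVID
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_covid_cnn()
# model.fit(train_slices, train_labels, validation_split=0.2, epochs=20)
```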
Interest in eye-tracking technology has grown dramatically in the last two decades for different purposes and applications, such as tracking where a person is looking and how the pupils and irises react to a variety of actions. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data analysis systems; it may reveal information about the user's age, gender, biometric identity, interests, etc. This paper is concerned with eye motion tracking as a general-purpose tool for applications in any field that requires it. Advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) combined with eye-tracking techniques open large opportunities to develop algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNNs) were designed, and the most powerful and accurate model was then chosen. The dataset used for training (for 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The highest accuracy achieved by the best model was 91.25% and the minimum loss was 0.23%. The best model consists of 11 layers (4 convolution, 4 max pooling, and 3 dense). Python 3.7 was used to implement the algorithms, the Keras framework for the deep learning algorithms, Visual Studio Code as the integrated development environment (IDE), and Anaconda Navigator for downloading the required libraries. The model was trained with data that can be gathered using laptop or PC cameras, without the need for special and expensive equipment, and it can be trained on either eye, depending on application requirements.
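The abstract states the architecture explicitly (11 layers: 4 convolution, 4 max pooling, 3 dense, with 16 output classes for the screen spots) and names Keras; a sketch under those constraints follows, where the filter counts and input size are assumptions.

```python
# Sketch: 11-layer gaze CNN (4 Conv2D + 4 MaxPooling2D + 3 Dense), 16 screen points.
from tensorflow.keras import layers, models

def build_gaze_cnn(input_shape=(64, 64, 1), n_points=16):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_points, activation="softmax"),  # one class per screen spot
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```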
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely impacts cognitive functions such as memory, attention, and reasoning, ultimately affecting daily life. Early and accurate detection is crucial for timely intervention and management. Traditional diagnostic methods, including neuroimaging and cognitive assessments, can be expensive and time-consuming, necessitating more accessible and efficient alternatives. This study aims to develop an automated and efficient deep learning-based detection system that uses electroencephalogram (EEG) signals to accurately distinguish individuals with AD from healthy individuals. A Convolutional Neural Network (CNN) model was designed to extract meaningful features from preprocessed EEG data. The architecture consists of convolutional layers with max pooling, dropout regularization, and fully connected layers to improve classification accuracy. The model was trained and evaluated on a comprehensive EEG dataset, using key performance metrics such as accuracy, recall, precision, and F1-score. The proposed CNN model achieved a high classification accuracy of 94.56%, a low loss of 0.2162, and an AUC value of 0.93828, demonstrating superior classification capability. The results indicate that the model effectively distinguishes between AD and healthy individuals, outperforming several state-of-the-art approaches. The findings highlight the potential of deep learning-based EEG analysis for AD detection, providing an accessible and cost-effective tool for early diagnosis. The high accuracy of the proposed CNN model suggests that it can assist medical professionals in making well-informed decisions, ultimately improving patient outcomes.
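A minimal sketch of a 1D CNN of the kind described above for EEG classification (convolution with max pooling, dropout regularization, fully connected layers); the channel count, window length, and layer sizes are illustrative assumptions, not the paper's exact values.

```python
# Sketch: 1D CNN for AD vs. healthy classification from EEG windows.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_eeg_cnn(n_samples=1024, n_channels=19):
    model = models.Sequential([
        layers.Input(shape=(n_samples, n_channels)),
        layers.Conv1D(32, 7, activation="relu"), layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu"), layers.MaxPooling1D(2),
        layers.Dropout(0.3),                      # dropout regularization
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # AD vs. healthy
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model
```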
This study evaluates the performance and efficiency of four deep learning models, VGG-16, ResNet-50, Inception-V3, and DenseNet-121, in detecting pneumonia from chest X-rays, addressing the critical need for balanced accuracy and computational efficiency in clinical diagnostics. Methods: A dataset of 5,234 chest X-rays (3,875 pneumonia, 1,341 normal) was augmented via rotation, flipping, and zooming to mitigate class imbalance. Models were trained on an RTX 2060 GPU for 40 epochs, with performance assessed using accuracy, F1 score, sensitivity, specificity, precision, and computational metrics (training time, memory usage). Statistical significance was validated via paired t-tests (p < 0.05). Results: DenseNet-121 achieved the highest accuracy (95.2% ± 0.8), F1 score (95.1% ± 0.7), and throughput (400 images/sec) with minimal memory usage (33 MB). ResNet-50 and Inception-V3 showed moderate performance, while VGG-16 exhibited overfitting tendencies. In conclusion, DenseNet-121 outperformed the other models in both accuracy and processing speed, which is essential for use in real-time clinical settings. However, the small validation set and limited population diversity are important limitations that should be addressed in future studies. Further testing on larger datasets is needed to confirm the model's stability and assess how it performs in different settings. Future work should address ethical considerations in AI-driven diagnostics and validate findings across multi-institutional datasets.
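A sketch of a DenseNet-121 transfer-learning setup with the augmentations mentioned above (rotation, flipping, zooming); the directory layout, image size, and classification head are assumptions for illustration, not the study's exact training pipeline.

```python
# Sketch: augmented data pipeline + frozen DenseNet-121 backbone for
# pneumonia vs. normal chest X-ray classification.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15,
                               horizontal_flip=True, zoom_range=0.2)
train_data = train_gen.flow_from_directory("chest_xray/train",   # hypothetical path
                                           target_size=(224, 224),
                                           class_mode="binary", batch_size=32)

base = DenseNet121(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False                                            # freeze the backbone
model = models.Sequential([base, layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_data, epochs=40)
```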
Lung cancer is one of the most common causes of mortality worldwide, and early diagnosis is crucial for a patient's survival and recovery. Automated segmentation of lung lesions in chest CT has become a pre-eminent focus of research, particularly with the development of hybrid methods that combine traditional image processing with advanced deep learning methods such as CNNs. These hybrid approaches aim to minimize the limitations of individual methods by merging their strengths to enhance segmentation efficiency, precision, and clinical utility. This review comprehensively analyzes different hybrid techniques, such as deep learning improved by rule-based systems, multi-scale feature extraction, and ensemble learning, and examines their clinical impact, particularly in improving diagnostic accuracy and optimizing treatment procedures. Despite their promise, these approaches still face significant challenges, such as computational complexity, data requirements, and the need for explainable AI (XAI). Upcoming advances in lung lesion segmentation will focus on refining these models to achieve faster processing, improved accuracy, and integration with diagnostic tools while preserving transparency and ethical considerations.
Lung cancer is one of the most common and dangerous diseases that, if treated late, can lead to death. It is more likely to be treated successfully if discovered at an early stage, before it worsens. Distinguishing the size, shape, and location of lymph nodes can reveal the spread of the disease around these nodes. Thus, identifying lung cancer at an early stage is remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert doctors; however, limited experience may lead to misdiagnosis and cause medical issues for patients. Among computer-assisted systems, many methods and strategies can be used to predict the cancer malignancy level, which plays a significant role in providing precise abnormality detection. In this paper, the use of modern machine learning-based approaches was explored. More than 70 state-of-the-art articles (from 2019 to 2024) were extensively reviewed to highlight the different machine learning and deep learning (DL) techniques and models used for the detection, classification, and prediction of cancerous lung tumors. An efficient tiny DL model should be built to assist physicians working in rural medical centers with swift and rapid diagnosis of lung cancer. The combination of lightweight convolutional neural networks and limited resources could produce a portable, low-computational-cost model able to substitute for the skill and experience of doctors needed in urgent cases.
Kidney disease is a global health concern, often leading to kidney failure and impaired function. Artificial intelligence and deep learning have been extensively researched, with numerous models and methods proposed to improve kidney disease diagnosis. This work aims to enhance the efficiency and accuracy of the diagnostic system for kidney disease by using deep learning, thereby contributing to effective healthcare delivery. Three models are proposed, CNN, CNN-XGBoost, and CNN-RF, to extract features and classify kidney ultrasound images into four categories: three abnormal cases (stones, hydronephrosis, and cysts) and one normal case. The models were tested on a real dataset of 1260 kidney ultrasound images (from 1000 patients) collected from the Lithotripsy Centre in Iraq. Because CNN models are often viewed as black boxes owing to the difficulty of understanding their learned behaviors, Visualizing Intermediate Activations (VIA) was used to address this issue. The proposed framework was assessed based on precision, recall, F1-score, and accuracy. CNN-RF is the most accurate model, with an accuracy of 99.6%. This study can potentially assist radiologists in high-volume medical facilities and enhance the accuracy of kidney disease diagnosis.
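A minimal sketch of the CNN-RF idea described above: a small CNN acts as the feature extractor and a random forest classifies the resulting features into the four kidney categories. The architecture and placeholder data are assumptions; in the study the CNN would first be trained on the ultrasound dataset before its features are reused.

```python
# Sketch: CNN feature extractor + random forest classifier (CNN-RF).
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

def build_feature_extractor(input_shape=(128, 128, 1)):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),          # fixed-length feature vector
    ])

X = np.random.rand(16, 128, 128, 1)               # placeholder ultrasound images
y = np.random.randint(0, 4, 16)                   # normal, stone, hydronephrosis, cyst

extractor = build_feature_extractor()             # would be trained in practice
features = extractor.predict(X, verbose=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)
print(rf.predict(features[:4]))
```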
Recently, three-dimensional models (3DM) have gained popularity in the prosthetics field, especially for creating residual limb shapes from medical images collected in Digital Imaging and Communications in Medicine (DICOM) format from magnetic resonance imaging (MRI) after accurate image processing. In this study, a three-dimensional model of the residual limb of a patient with a transtibial amputation was produced by integrating artificial intelligence and computer vision, demonstrating the benefits of AI segmentation tools and algorithms for generating a high-accuracy three-dimensional model before prosthetic socket design, or for comparing the 3D model generated from MRI with a 3D model generated by another technique. The subject was a 23-year-old male patient with an amputation of the left leg, wearing a prosthetic socket liner, weighing 62 kg, 168 cm tall, and with a high activity level. The patient was scanned using a GE Medical Systems 1.5 Tesla Signa Excite scanner. MRI images in DICOM format were read to retrieve essential metadata such as pixel spacing and slice thickness. These images were processed with a specific algorithm to obtain a model that reflects the real shape of the residual limb, and the 3D model was extracted using AI segmentation tools. The high-resolution 3D model obtained demonstrates the potential of the artificial intelligence approach with deep learning to reconstruct 3D models, indicating that AI has an instrumental role in medical image analysis, particularly in organ and tissue classification and segmentation, and can thus generate a 3D model automatically and repeatably.
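A sketch of the DICOM-reading step described above: slices are loaded with pydicom, the pixel spacing and slice thickness metadata are retrieved, and the slices are stacked into a 3D volume ready for segmentation. The folder path is hypothetical.

```python
# Sketch: load MRI DICOM slices, read metadata, and stack into a volume.
import glob
import numpy as np
import pydicom

slice_files = sorted(glob.glob("mri_residual_limb/*.dcm"))   # hypothetical folder
slices = [pydicom.dcmread(f) for f in slice_files]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along scan axis

pixel_spacing = [float(v) for v in slices[0].PixelSpacing]   # in-plane resolution (mm)
slice_thickness = float(slices[0].SliceThickness)            # spacing between slices (mm)

volume = np.stack([s.pixel_array for s in slices])           # (n_slices, rows, cols)
print(volume.shape, pixel_spacing, slice_thickness)
```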
Medical image segmentation plays a crucial role in medical imaging. The process involves dividing an image into regions to obtain a comprehensive view and ensure precise diagnostics. Various methods are employed, ranging from traditional approaches to more advanced deep learning techniques, and both play a significant role in enhancing healthcare. With the continuous advancement of technology, there is a growing need for accurate segmentation. While traditional methods such as thresholding and region growing are effective, they may require human intervention for complex cases. Deep learning techniques, particularly convolutional neural networks (CNNs), have significantly improved the process by learning intricate details and accurately segmenting the image. When these methods are combined, healthcare professionals can achieve high-quality, precise results. Furthermore, with advances in hardware and technology, real-time segmentation is now possible. In general, segmenting medical images is extremely important for the progress of healthcare, aided by artificial intelligence and recent industry advances such as explainable AI and multimodal learning. This review provides a comprehensive analysis of current segmentation methods, their applications across various fields, and the emerging advances that may drive future improvements and innovations.
Face recognition and identification have recently become the most widely employed biometric authentication technologies, especially for controlling access and for other security purposes. They represent one of the most significant pattern recognition technologies, using characteristics contained in facial images or videos to determine the identity of individuals. However, most traditional facial algorithms have faced limitations in identification and verification accuracy. This paper therefore presents a face identification system adopting a recent deep learning algorithm, You Only Look Once version 8 (YOLOv8). The system can identify the faces of different individuals in different positions with high accuracy. The YOLOv8 model was trained on several target face images, split into 1190 training and 255 validation images. The experimental results show a significant improvement in face identification accuracy, reaching a mean average precision of 99%, which outperforms many state-of-the-art face identification techniques.
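A minimal sketch of training YOLOv8 for face identification with the Ultralytics API; the dataset file faces.yaml (describing the 1190 training and 255 validation images, one class per identity), the epoch count, and the test image name are placeholders.

```python
# Sketch: fine-tune a pretrained YOLOv8 model for face identification.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                             # pretrained YOLOv8 nano weights
model.train(data="faces.yaml", epochs=100, imgsz=640)  # hypothetical dataset config
metrics = model.val()                                  # mAP on the validation split
results = model("test_face.jpg")                       # detect/identify faces in a new image
```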
To avoid loss of sight in a large portion of the working population, identification of Diabetic Retinopathy (DR) during routine screening for diabetes is crucial. To prevent future blindness, early detection of the illness and measurement of disease progression are essential. DR is diagnosed through medical image analysis. After the success of Deep Learning (DL) in other real-world applications, it is considered a vital tool for upcoming health sector applications, providing accurate solutions for medical image analysis. This review provides a comprehensive survey of state-of-the-art DL models for DR detection and grading using retinal fundus photography. It thoroughly examines and summarizes 81 relevant publications indexed in IEEE Xplore, Web of Science, PubMed, and Scopus between 2018 and 2023, covering binary and multiclass CNN classification models as well as the main preprocessing techniques. According to the findings of this review, transfer learning has proven to be an excellent technique for addressing the problem of limited data for DR analysis. CNN models with tens or hundreds of layers are the most frequently utilized frameworks for DR classification. The most extensively used datasets for DR classification are APTOS 2019 and EyePACS. Although DL has attained or surpassed human-level DR classification accuracy, more work remains to be done in real-world clinical procedures.
Artificial intelligence (AI) is rapidly advancing as a valuable tool in oncology for enhancing the detection and management of cancer. The integration of AI with PET/CT imaging offers significant opportunities for improving the efficiency and accuracy of cancer diagnosis. This study examines current applications of AI with PET/CT imaging, highlighting its role in diagnosing, differentiating, delineating, staging, assessing therapy response, determining prognosis, and enhancing image quality. A comprehensive literature search of the most recent works was conducted in six databases (Springer, Scopus, PubMed, Web of Science, IEEE, and Google Scholar) covering the last five years (2019-2024), identifying 80 studies that met the inclusion criteria and focused on AI-driven models applied to PET/CT data in various cancers, with lung cancer being the most studied. Other cancers examined include head and neck, breast, lymph node, and whole-body cancers, among others. All studies involved human subjects. The findings indicate that AI holds promise in improving cancer detection, distinguishing benign from malignant tumors, and aiding in segmentation, response evaluation, staging, and prognosis determination. However, the application of AI-powered models and PET/CT-derived radiomics in clinical practice is limited by issues of data normalization, reproducibility, and the need for large multi-center datasets to improve model generalizability. These limitations must be addressed to guarantee the dependable and ethical use of AI in day-to-day clinical activities.
Power outages are a common and persistent problem in Iraq, significantly impacting various aspects of life and business. These interruptions disrupt routine household tasks and hinder more complex technical operations in industries and services, emphasizing the need for careful management and proactive solutions. This paper introduces a real-world time series dataset for Baghdad city, including historical outages, weather conditions (such as temperature), and power overloads, and analyzes the correlation among these parameters across seasons. The research uses this dataset to train one-dimensional Convolutional Neural Networks (1D CNN) to find patterns and relationships that can accurately predict when power outages will happen in the long and short term, improving the management of the Baghdad electricity grid through data-driven networks. The model was evaluated using performance metrics, and the results show that the CNN is accurate in predicting outages in the short term, with a Mean Absolute Error (MAE) of 0.0077, whereas in the long term it achieved an MAE of 0.0775. These predictive models have the potential to support proactive measures aimed at reducing the impact of power outages by anticipating potential outages in advance. This research focuses on enhancing the reliability and efficiency of Baghdad's electricity supply, ultimately contributing to economic growth and stability.
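A sketch of a 1D CNN forecaster of the kind described above: a sliding window over the multivariate series (outage history, temperature, load) is mapped to the next value and trained with MAE. The window length, feature count, and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Sketch: sliding-window 1D CNN forecaster trained with mean absolute error.
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=24):
    """Turn a (T, n_features) series into (samples, window, n_features) -> next target."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]                     # predict the next outage indicator
    return X, y

series = np.random.rand(500, 3)                # placeholder multivariate series
X, y = make_windows(series)

model = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.Conv1D(32, 3, activation="relu"), layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"), layers.GlobalAveragePooling1D(),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")    # MAE as reported in the results
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```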
Optical coherence tomography (OCT) allows for direct and immediate imaging of the morphology of retinal tissue. It has become a crucial imaging modality for diagnosing eye problems in ophthalmology. One of the most significant morphological characteristics of the retina is the structure of the retinal layers, which provides important evidence for diagnostic purposes and is related to a variety of retinal diseases. In this paper, a convolutional neural network (CNN) model is proposed that can identify the difference between a normal retina and three common macular diseases: diabetic macular edema (DME), Drusen, and choroidal neovascularization (CNV). The proposed model was trained and tested on an open-source dataset of OCT images with professional disease labels: DME, CNV, Drusen, and Normal. The suggested model achieved 98.3% overall classification accuracy, with only 7 wrong classifications out of 368 test samples, and significantly outperforms other models that used the identical dataset. The final results show that the suggested model is well suited to the detection of retinal disorders in ophthalmology centers.
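A minimal sketch of how the four-class OCT test evaluation described above could be set up with Keras utilities; the folder layout, image size, and the variable `model` (the trained CNN) are assumptions for illustration.

```python
# Sketch: load a four-class OCT test set (CNV, DME, Drusen, Normal) and
# count misclassifications of a trained model.
import numpy as np
import tensorflow as tf

test_ds = tf.keras.utils.image_dataset_from_directory(
    "OCT/test",                        # hypothetical folder: CNV/, DME/, DRUSEN/, NORMAL/
    image_size=(224, 224), batch_size=32, shuffle=False, label_mode="int")

# y_true = np.concatenate([y.numpy() for _, y in test_ds])
# y_pred = np.argmax(model.predict(test_ds), axis=1)
# print("misclassified:", int((y_true != y_pred).sum()), "of", len(y_true))
```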
Breast cancer is one of the most frequent tumours among females in Iraq. Medical ultrasound imaging has become a common modality for breast tumour imaging because of its ease of use, low cost, and safety. In the present study, Convolutional Neural Network (CNN) feature extraction approaches were used to classify breast ultrasound images. The CNN model used is composed of four layers for breast cancer ultrasound image analysis. Two free datasets were used and divided into groups A and B. Group A has three classes, namely benign, malignant, and normal, while group B has two classes, namely benign and malignant. The proposed technique was assessed based on its accuracy, precision, F1 score, and recall. The model's classification accuracy was 96% for dataset A and 100% for dataset B.
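A short sketch of the evaluation step named in this abstract (accuracy, precision, recall, and F1 score) using scikit-learn; the label arrays below are placeholders for the model's test labels and predictions, not results from the study.

```python
# Sketch: compute accuracy, precision, recall, and F1 for the three-class case.
from sklearn.metrics import accuracy_score, classification_report

y_true = [0, 1, 2, 1, 0, 2, 1, 0]        # placeholder labels for group A
y_pred = [0, 1, 2, 1, 0, 2, 0, 0]        # placeholder model predictions
print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=["benign", "malignant", "normal"]))
```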