Diabetic peripheral neuropathy is one of the most common long-term complications of diabetes, affecting about fifty percent of diabetic patients. The conventional diagnostic tool is the nerve conduction study, which examines nerve damage and classifies a patient's status as normal or as diabetic peripheral neuropathy of a given severity, without considering the effect on skeletal muscle or taking patient data into account. This study proposes a complementary diagnostic tool that integrates patient data (body mass index, age, duration of diabetes, and average blood glucose level), nerve conduction study measurements (amplitude and latency of the peroneal and tibial nerves), and muscle ultrasound with machine learning algorithms to help clinicians reach a precise diagnosis. Data were gathered from a group of control subjects and diabetic patients, with muscle thickness and statistical properties calculated from gray-level ultrasound images of six skeletal muscles. Support vector machine, naïve Bayes, ensemble of bagged trees, and artificial neural network supervised machine learning algorithms classified each class with high accuracy, reaching 98.1% for the tibialis anterior with the naïve Bayes algorithm. The outcomes of this study show a promising complementary diagnostic tool that will help clinicians perform an exact diagnosis and reveal the effects of diabetes on both the nerves and muscles of diabetic patients.
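A comparison of the four supervised classifiers named in this abstract can be sketched with scikit-learn. This is a minimal, hedged illustration: the synthetic features here merely stand in for the study's patient and ultrasound measurements, and the model settings are defaults, not the paper's configuration.

```python
# Illustrative comparison of the four classifiers from the abstract on
# synthetic stand-in data (NOT the study's dataset or hyperparameters).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic surrogate: 8 features standing in for BMI, age, diabetes
# duration, glucose, nerve amplitudes/latencies, and muscle measurements.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Bagged trees": BaggingClassifier(DecisionTreeClassifier(), random_state=0),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

In practice each model's hyperparameters would be tuned per muscle and per feature set, which is how a single classifier (here, naïve Bayes on the tibialis anterior) can end up dominating.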
Lung cancer is one of the most dangerous common diseases and, if treated late, can lead to death. It is far more treatable if discovered at an early stage before it worsens. Characterizing the size, shape, and location of lymph nodes can reveal the spread of the disease around these nodes; thus, identifying lung cancer at an early stage is remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert doctors; however, limited experience may lead to misdiagnosis and cause medical harm to patients. Among computer-assisted systems, many methods and strategies can be used to predict cancer malignancy level, which plays a significant role in providing precise abnormality detection. In this paper, the use of modern machine learning-based approaches was explored. More than 70 state-of-the-art articles (from 2019 to 2024) were extensively reviewed to highlight the different machine learning and deep learning (DL) techniques and models used for the detection, classification, and prediction of cancerous lung tumors. An efficient tiny DL model should be built to assist physicians working in rural medical centers with swift diagnosis of lung cancer. The combination of lightweight convolutional neural networks and limited computing resources could produce a portable, low-cost model able to substitute for the skill and experience of doctors needed in urgent cases.
Technically, medical imaging modalities are quantitative, qualitative, or semi-quantitative, and can generate meaningful and valuable quantitative and qualitative data. Correlating predictive outcomes with such data is a difficult process. Thanks to modern computational hardware and advanced machine learning algorithms, it is no longer a demanding job to perform predictive analysis by cultivating quantitative and qualitative data. Radiomics is a popular field that studies quantitative data from medical images in order to obtain biologically meaningful information for diagnosis, prognosis, theragnosis, and decision support. Handcrafted radiomics is a process of extracting features based on shape, intensity, and texture from medical scans. In pursuit of advancing the field, we have developed a radiomics training simulator powered by MATLAB. The tool is designed for those already familiar with MATLAB, making it easy for them to transition into radiomics. MATLAB's user-friendly interface and strong support in the engineering community provide an ideal platform for this simulator, ensuring aspiring radiomics learners have access to the resources they need. Throughout the paper, the purpose, design details, and methodology of the simulator are described.
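The handcrafted radiomics features mentioned above can be illustrated with a short sketch. The simulator itself is MATLAB-based; the NumPy version below is only an assumed, minimal example of "first-order" intensity features (mean, spread, skewness, histogram entropy), and the bin count and feature names are illustrative choices, not the simulator's definitions.

```python
# Minimal sketch of handcrafted first-order radiomics features computed
# with NumPy; the actual simulator is MATLAB-based and feature naming
# conventions vary between toolkits.
import numpy as np

def first_order_features(image, bins=32):
    """Basic intensity statistics of the kind handcrafted radiomics uses."""
    x = np.asarray(image, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return {
        "mean": x.mean(),
        "std": x.std(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "entropy": -(p * np.log2(p)).sum(),
    }

rng = np.random.default_rng(0)
feats = first_order_features(rng.integers(0, 256, size=(64, 64)))
print(feats)
```

Shape- and texture-based features (e.g. gray-level co-occurrence statistics) follow the same pattern: deterministic formulas over the pixel values inside a region of interest.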
Interest in eye-tracking technology has grown dramatically over the last two decades for a variety of purposes and applications, such as tracking where a person is looking and how the pupils and irises react to different stimuli. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data analysis systems; it may reveal information about the user's age, gender, biometric identity, interests, and more. This paper is concerned with eye motion tracking as a general-purpose tool for applications in any field that requires it. Advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) combined with eye-tracking techniques open up significant opportunities to develop algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNNs) were designed, and the most powerful and accurate model was then chosen. The dataset used for training (covering 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images per screen point), and it can be collected by the user of any application based on this model. The best model achieved an accuracy of 91.25% with a minimum loss of 0.23, and consists of 11 layers (4 convolution, 4 max pooling, and 3 dense). Python 3.7 was used to implement the algorithms, the Keras framework for the deep learning, Visual Studio Code as the integrated development environment (IDE), and Anaconda Navigator for installing the required libraries. The model was trained with data that can be gathered using laptop or PC cameras, without the need for special or expensive equipment; it can also be trained on either eye, depending on application requirements.
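The described architecture (11 layers: 4 convolution, 4 max pooling, 3 dense, with 16 output classes for the 16 screen points) can be sketched in Keras. The filter counts, kernel sizes, dense widths, and the 64×64 grayscale input below are assumptions for illustration; the paper's exact configuration is not specified here.

```python
# Hedged Keras sketch of the 11-layer CNN (4 conv + 4 max-pool + 3 dense);
# filter counts, layer widths, and input shape are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_gaze_cnn(input_shape=(64, 64, 1), n_points=16):
    model = keras.Sequential()
    model.add(keras.Input(shape=input_shape))
    for filters in (32, 64, 128, 128):           # 4 conv / 4 max-pool pairs
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    for units in (256, 64):                       # first two dense layers
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(n_points, activation="softmax"))  # third dense layer
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_gaze_cnn()
```

With roughly 175 training images per point, aggressive data augmentation or regularization would typically be needed to reach the reported accuracy.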
This review systematically outlines, clarifies, and examines the exploratory data analysis techniques employed in the rapidly advancing domain of functional MRI research, with particular focus on the software applications that facilitate these complex and often nuanced analyses. We assess the strengths and limitations of each analytical tool, offering insights into its application and efficacy across diverse research contexts. Our aim is to build a comprehensive understanding of how these tools can best be used to enhance research outcomes, and to equip researchers with knowledge that can inform their methodological choices in upcoming studies. In doing so, we hope to contribute to the ongoing progression of this important field and enhance the impact of future functional MRI research.
Deep learning models can help detect coronavirus disease 2019 (COVID-19), a critical task these days, so that treatment decisions can be made according to the diagnostic results. Advances in artificial intelligence, machine learning, deep learning, and medical imaging have demonstrated impressive performance, especially in detection, classification, and segmentation problems. These innovations enable physicians to examine the human body with high accuracy, increasing diagnostic accuracy and supporting non-surgical examination of patients. Many imaging modalities are used to detect COVID-19; we use computed tomography (CT) because it is the most commonly used. For detection, we use a deep learning model based on a convolutional neural network (CNN). The dataset used consists of 544 CT slices, which is not sufficient for high accuracy but is acceptable given the few datasets available at present. The proposed model achieves validation and test accuracies of 84.4% and 90.09%, respectively, and has been compared with other models to demonstrate its superiority.
Artificial intelligence (AI) is rapidly advancing as a valuable tool in oncology for enhancing the detection and management of cancer. The integration of AI with PET/CT imaging presents significant opportunities for improving the efficiency and accuracy of cancer diagnosis. This study examines current applications of AI with PET/CT imaging, highlighting its role in diagnosing, differentiating, delineating, staging, assessing therapy response, determining prognosis, and enhancing image quality. A comprehensive literature search was conducted in six databases (Springer, Scopus, PubMed, Web of Science, IEEE, and Google Scholar) covering the last five years (2019-2024), identifying 80 studies that met the inclusion criteria and focused on AI-driven models applied to PET/CT data in various cancers, with lung cancer being the most studied. Other cancers examined include head and neck, breast, lymph node, and whole-body studies, among others; all studies involved human subjects. The findings indicate that AI holds promise for improving cancer detection, distinguishing benign from malignant tumors, and aiding in segmentation, response evaluation, staging, and prognosis. However, the application of AI-powered models and PET/CT-derived radiomics in clinical practice remains limited by issues of data normalization, reproducibility, and the need for large multi-center datasets to improve model generalizability. These limitations must be addressed to guarantee the dependable and ethical use of AI in day-to-day clinical activities.
Medical image segmentation plays a crucial role in the realm of medical imaging. The process involves partitioning an image into meaningful regions to obtain a comprehensive view and ensure precise diagnostics. Various methods are employed, ranging from traditional approaches to more advanced deep learning techniques, and both play a significant role in enhancing healthcare. With continuous advances in technology, there is a growing need for accurate segmentation. While traditional methods such as thresholding and region growing are effective, they may require human intervention for complex cases. Deep learning techniques, particularly convolutional neural networks (CNNs), have significantly improved the process by learning intricate details and segmenting images accurately. When these methods are combined, healthcare professionals can achieve high-quality, precise results; with advances in hardware, real-time segmentation is now possible. The division of medical images into segments is therefore extremely important for the progress of healthcare, aided by artificial intelligence and recent developments such as explainable AI and multimodal learning. This review provides a comprehensive analysis of the current methods, their applications across various fields, and the promising emerging advancements that could pave the way for future improvements and innovations.
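The traditional thresholding approach mentioned above can be illustrated with Otsu's method, which picks the cut that maximizes between-class variance of the intensity histogram. The NumPy implementation and the toy image below are a minimal sketch (scikit-image's `threshold_otsu` is the usual library route); the bin count is an assumed default.

```python
# Minimal NumPy sketch of Otsu thresholding, a classic "traditional"
# segmentation step; real pipelines would use a library implementation.
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # background class probability
    w1 = 1 - w0                             # foreground class probability
    mu = np.cumsum(p * centers)             # cumulative class mean
    mu_t = mu[-1]                           # global mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0 - mu)[valid] ** 2 / (w0 * w1)[valid]
    return centers[np.argmax(between)]

# Toy "scan": dark background with one bright region.
img = np.zeros((32, 32))
img[8:24, 8:24] = 200
mask = img > otsu_threshold(img)
```

Thresholding like this needs no training data, which is exactly why it remains a useful building block alongside learned CNN segmenters.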
Lung cancer is one of the most common causes of mortality worldwide, and early diagnosis is crucial for a patient's survival and recovery. Automated segmentation of lung lesions in chest CT has become a pre-eminent focus of research, particularly with the development of hybrid methods combining traditional image processing with advanced deep learning methods such as CNNs. These hybrid approaches aim to minimize the limitations of individual methods by combining their strengths to enhance segmentation efficiency, precision, and clinical utility. This review comprehensively analyzes different hybrid techniques, such as deep learning augmented by rule-based systems, multi-scale feature extraction, and ensemble learning. It also examines their clinical impact, particularly in improving diagnostic accuracy and optimizing treatment procedures. Despite their promise, these approaches still face significant challenges, including computational complexity, data requirements, and the need for explainable AI (XAI). Upcoming advances in lung lesion segmentation will focus on refining these models to achieve faster processing, improved accuracy, and integration with diagnostic tools, while preserving transparency and ethical considerations.
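One simple instance of the ensemble learning idea mentioned above is fusing the binary lesion masks of several segmentation models by per-pixel majority vote. The sketch below is an assumed, minimal illustration with tiny hand-made masks, not a method from the reviewed papers.

```python
# Hedged sketch of mask-level ensemble fusion: per-pixel majority vote
# over binary lesion masks from different segmentation models.
import numpy as np

def majority_vote(masks):
    """Combine same-shape binary masks by per-pixel majority vote."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > stack.shape[0] / 2

# Toy 2x2 masks from three hypothetical models.
m1 = np.array([[1, 0], [1, 1]])
m2 = np.array([[1, 1], [0, 1]])
m3 = np.array([[0, 0], [1, 1]])
fused = majority_vote([m1, m2, m3])
# fused keeps only pixels predicted by at least 2 of the 3 models
```

Voting suppresses model-specific false positives, which is one reason ensembles tend to improve segmentation precision at modest extra cost.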