In recent years, crime against children has been on the rise all over the world, and parents worry about their children whenever they are outside. For this reason, tracking and monitoring children have become a considerable necessity. This paper presents an outdoor IoT tracking system consisting of a child module and a parent module. The child module monitors the child's location in real time and sends the information to a cloud database, which forwards it to the parent module (a mobile application). The information is shown in the application as a location on Google Maps; the application was designed for this purpose and provides a number of additional functions. A Raspberry Pi Zero Wireless is used with a GSM/GPS shield to provide mobile communication, internet connectivity, and location determination. Implementation results for the proposed system show that when the child leaves a pre-set safe area, a warning message pops up on the parent's mobile device and a path from the parent's current location to the child's location is shown on the map.
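The abstract does not describe how the safe-area check is implemented; as an illustration only, a circular geofence around a fixed center can be tested with the haversine distance between two GPS fixes. A minimal sketch (the center coordinates, radius, and function names are hypothetical, not taken from the paper):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def outside_safe_area(child_fix, center, radius_m):
    """Return True when the child's GPS fix leaves the circular safe area,
    i.e. the condition that would trigger the parent-side warning."""
    return haversine_m(*child_fix, *center) > radius_m

# Hypothetical example: a 500 m safe area around a home location (lat, lon).
home = (33.3152, 44.3661)
print(outside_safe_area((33.3200, 44.3661), home, 500))  # ~530 m away -> True
```

In a real deployment the child module would publish each fix to the cloud database and this check would run either in the cloud or in the parent application.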
The tremendous development of modern systems and technologies in the field of electronic monitoring has made intelligent monitoring, decision making, and automated response common subjects at this time, especially after the development of machines responsible for these processes. Traffic surveillance is a widely pursued goal nowadays, using different techniques and equipment. In this article, real-time object detection and tracking techniques are proposed for traffic surveillance using image processing. A specific case was examined: detecting and counting passing motorcycles on a highway in a given area. The results showed good reliability, with a frame processing time of approximately 30 ms, achieving real-time performance. The main contribution of this article is a real-time image processing pipeline that tracks objects by relying on the sequencing of frames and can run on reasonably modest machines. Several tools were used for the different tasks required by the application: Python 3.7 to build the basic algorithms, Visual Studio Code (VSC) as an Integrated Development Environment (IDE), and Anaconda Navigator for downloading the necessary libraries. The specifications of the machine used were an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz, 16.0 GB RAM, an NVIDIA GeForce GTX 1650 GPU, and a 64-bit operating system on an x64-based processor.
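The abstract does not list the specific algorithms; frame-sequencing-based detection is, however, commonly built on differencing consecutive frames and thresholding the result. A minimal NumPy sketch of that general idea (the threshold, minimum blob size, and synthetic frames below are assumptions, not the paper's values):

```python
import numpy as np

def moving_mask(prev_frame, frame, thresh=25):
    """Binary mask of pixels that changed between two 8-bit grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

def motion_detected(prev_frame, frame, thresh=25, min_pixels=20):
    """Flag a frame as containing motion if enough pixels changed,
    filtering out isolated sensor noise."""
    return int(moving_mask(prev_frame, frame, thresh).sum()) >= min_pixels

# Synthetic example: a bright 6x6 "vehicle" moves across an empty road.
road = np.zeros((48, 64), dtype=np.uint8)
frame1 = road.copy(); frame1[20:26, 10:16] = 255
frame2 = road.copy(); frame2[20:26, 18:24] = 255
print(motion_detected(frame1, frame2))  # the moved blob triggers detection
```

A counting stage would then track each detected blob across the frame sequence and increment a counter when it crosses a virtual line on the highway.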
Interest in eye-tracking technology has grown dramatically over the last two decades for different purposes and applications, such as tracking where a person is looking and how the pupils and irises react to a variety of stimuli. The resulting data can deliver an extraordinary amount of information about the user when processed by advanced data-analysis systems; it may reveal information about the user's age, gender, biometric identity, interests, etc. This paper is concerned with eye-motion tracking as a general-purpose tool for applications in any field that requires it. Improvements in artificial intelligence (AI), machine learning (ML), and deep learning (DL) combined with eye-tracking techniques open large opportunities for developing algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNNs) were designed, and the most powerful and accurate model was then chosen. The dataset used for training (covering 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The highest accuracy achieved by the best model was 91.25% and the minimum loss was 0.23%. The best model consists of 11 layers (4 convolutional, 4 max-pooling, and 3 dense). Python 3.7 was used to implement the algorithms, the Keras framework for the deep learning algorithms, Visual Studio Code as an Integrated Development Environment (IDE), and Anaconda Navigator for downloading the different libraries. The model was trained on data that can be gathered with ordinary laptop or PC cameras, without the need for special and expensive equipment, and it can be trained on either single eye, depending on application requirements.
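The abstract gives only the layer counts for the best model. A small arithmetic sketch (kernel size 3, valid padding, 2x2 pooling, and a 128x128 input crop are assumptions, not values reported in the paper) shows how four conv + max-pool stages shrink the spatial dimensions before the three dense layers:

```python
def conv2d_out(size, kernel=3, stride=1, padding=0):
    """Spatial size after a (valid-padded) 2-D convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, pool=2, stride=2):
    """Spatial size after 2x2 max pooling."""
    return (size - pool) // stride + 1

def trace(size, stages=4):
    """Trace the spatial size through conv+pool stages, mirroring the
    described 11-layer model (4 conv, 4 max-pool, then 3 dense layers)."""
    sizes = [size]
    for _ in range(stages):
        size = maxpool_out(conv2d_out(size))
        sizes.append(size)
    return sizes

print(trace(128))  # -> [128, 63, 30, 14, 6]
```

Under these assumed hyperparameters, a 128x128 eye crop is reduced to a 6x6 feature map, which would then be flattened and fed to the three dense layers, the last of which would output the 16 screen-point classes.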