This paper examined a mathematical model of Zika virus transmission, focusing on the impact of the virus on humans and mosquitoes. The human and mosquito populations involved in transmission are each divided into two compartments: susceptible and infected. To solve the nonlinear differential equations governing Zika virus transmission, the Taylor series method (TSM) and the new homotopy perturbation method (NHPM) were employed to derive semi-analytical solutions. Furthermore, for a comprehensive assessment of the nonlinear system's behavior and of the accuracy of the obtained solutions, a comparative analysis against numerical simulations was performed. This comparison validated the results and provided valuable insight into the behavior of the Zika virus transmission model under different conditions. Finally, to reduce the number of infected humans, we analyzed the contact rates of Zika virus transmission between humans and mosquitoes, as well as between humans and humans.
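A compartmental model of the kind summarized above can be sketched numerically. The sketch below is a generic susceptible-infected (SI-SI) system for humans and mosquitoes with both mosquito-to-human and human-to-human contact rates, integrated with SciPy; all parameter names and values are illustrative assumptions, not the paper's fitted values or its TSM/NHPM solutions.

```python
# Minimal SI-SI Zika transmission sketch (assumed parameters, not the
# paper's values). Compartments: S_h, I_h (humans), S_v, I_v (mosquitoes).
from scipy.integrate import solve_ivp

Lam_h, Lam_v = 0.02, 0.2   # recruitment rates (humans, mosquitoes)
b_hv = 0.3                 # mosquito-to-human contact rate
b_hh = 0.05                # human-to-human contact rate
b_vh = 0.3                 # human-to-mosquito contact rate
mu_h, mu_v = 0.02, 0.2     # natural death rates

def zika(t, y):
    Sh, Ih, Sv, Iv = y
    dSh = Lam_h - (b_hv * Iv + b_hh * Ih) * Sh - mu_h * Sh
    dIh = (b_hv * Iv + b_hh * Ih) * Sh - mu_h * Ih
    dSv = Lam_v - b_vh * Ih * Sv - mu_v * Sv
    dIv = b_vh * Ih * Sv - mu_v * Iv
    return [dSh, dIh, dSv, dIv]

# Integrate from illustrative initial conditions over 100 time units
sol = solve_ivp(zika, (0, 100), [0.9, 0.1, 0.9, 0.1])
print(sol.y[:, -1])  # final compartment sizes
```

Varying `b_hv` and `b_hh` in such a sketch is the kind of contact-rate analysis the abstract describes for reducing the infected human population.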
Communications in Mathematical Biology and Neuroscience - 24-14-Oct
In this research, job satisfaction was evaluated at four private universities in Aleppo city, and the universities were then ranked according to each of five basic dimensions of job satisfaction: salaries, incentives, and annual increases; performance evaluation; the relationship with the direct supervisor and coworkers; the work environment; and job stability. The study covered 111 employees at the four universities. A job satisfaction questionnaire was distributed and the results were entered into SPSS, a descriptive analysis of the personal data was conducted, and a five-point Likert scale was used to rate job satisfaction. We then applied a new algorithm that converts the traditional five-point Likert scale into a percentage between 0 and 100 for each dimension of job satisfaction. Finally, we ranked the four private universities based on this percentage for each dimension separately.
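The abstract does not give the steps of the conversion algorithm, so the sketch below substitutes a common min-max rescaling, mapping a mean Likert score in [1, 5] onto [0, 100], to illustrate the general idea of a per-dimension percentage and ranking. The dimension names and response data are made up for the example.

```python
# Hedged stand-in for the paper's Likert-to-percentage algorithm:
# min-max rescaling of the mean 1-5 score onto 0-100.
def likert_to_percent(responses):
    """Map a list of 1-5 Likert answers to a 0-100 score."""
    mean = sum(responses) / len(responses)
    return (mean - 1) / (5 - 1) * 100  # 1 -> 0 %, 5 -> 100 %

# Illustrative responses for two dimensions at one university
dims = {
    "salaries and incentives": [4, 3, 5, 4],
    "performance evaluation": [2, 3, 3, 2],
}
ranking = sorted(dims, key=lambda d: likert_to_percent(dims[d]), reverse=True)
print({d: likert_to_percent(dims[d]) for d in dims})
```

Applying such a function per dimension and per university yields the percentages on which the four universities can be ranked dimension by dimension.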
Aleppo University Research Journal - 24-11-Nov
Objectives: In recent years, demand for automatic emotion recognition systems has grown across various fields, including efforts to modify an individual's mood to improve mental health and assistance in identifying the emotions of children with autism spectrum disorder who struggle to express their emotional states. Deep learning networks are typically paired with face detection algorithms: if a deep learning network is used alone, any object detected in the image is treated as a face and processed to determine whether it expresses an emotion. This results in high computational complexity, very long response times, and low accuracy when more than one face appears in the same image. Methods: In this research, images were first fed into the Fer-Net convolutional neural network after some preprocessing operations; Fer-Net was selected after experimenting with several other CNNs. Facial features were then extracted from the network and classified into four basic emotions. Several standard databases, such as FER-2013 and AffectNet, were tested individually for training and evaluating the network. These databases were subsequently merged with others, such as RAF-DB and CK+, to increase the number of training and evaluation samples and avoid overfitting. Finally, face detection was linked to the classification network obtained from the trained model using the MTCNN algorithm, which identifies the faces present in an image before their facial features are analyzed and their emotions determined. Results: First, data augmentation was applied to the standard FER-2013 database, and overfitting occurred.
The experimental results showed that training accuracy did not exceed 0.7 during the training epochs, while validation accuracy did not exceed 0.55. The training loss started above 2 and decreased to 0.8, while the validation loss remained large, bottoming out at 1.2. When FER-2013 was replaced with the AffectNet dataset, the validation loss exceeded 4, a very large value. Finally, we merged several standard emotion recognition datasets (FER-2013, AffectNet, RAF-DB, CK+), grouping the images of each class together and thereby increasing the number of training and evaluation samples. The results showed classification accuracy above 0.95 and a loss of approximately 0.15. Conclusions: The experimental results demonstrated the proposed system's effectiveness in detecting the primary emotion from facial expressions, achieving higher accuracy than related studies.
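The detect-then-classify pipeline described above can be sketched as follows. The detector and classifier are injected as callables so the pipeline logic runs without the unavailable trained weights; in practice the detector would be MTCNN's face detection and the classifier the softmax output of the trained Fer-Net. The four-emotion label set and box format are assumptions for illustration.

```python
# Hedged sketch of the pipeline: detect face boxes, crop each face,
# classify the crop, and report one emotion per detected face.
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "neutral"]  # assumed 4-class labels

def recognize_emotions(image, detect, classify):
    """Return a (box, emotion) pair for each face found in `image`."""
    results = []
    for x, y, w, h in detect(image):       # boxes as (x, y, width, height)
        face = image[y:y + h, x:x + w]     # crop the detected face region
        probs = classify(face)             # per-emotion probabilities
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results

# Stub demo: one fake face box, classifier that always votes "happy"
img = np.zeros((100, 100), dtype=np.uint8)
out = recognize_emotions(img,
                         detect=lambda im: [(10, 10, 48, 48)],
                         classify=lambda f: [0.1, 0.7, 0.1, 0.1])
print(out)  # [((10, 10, 48, 48), 'happy')]
```

Classifying only the cropped detections, rather than the whole image, is what avoids the cost and accuracy problems the Objectives section attributes to using a classification network alone.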
Arab Institute of Science and Research Publishing - 24-30-Dec