LC International Journal of STEM (ISSN: 2708-7123) <p><strong>Journal Name:</strong> LC International Journal of STEM<br /><strong>ISSN Number:</strong> <a href="" target="_blank" rel="noopener">2708-7123</a><br /><strong>Frequency:</strong> Quarterly<br /><strong>Published by:</strong> <a href="" target="_blank" rel="noopener">Logical Creations Education Research Institute (LC-ERI)</a>.</p> <p><img src="" alt="" width="115" height="146" /></p> <p><strong>LC International Journal of STEM (LC-JSTEM)</strong>, ISSN Number: 2708-7123, is an open access journal that publishes articles from all areas of science, technology, engineering, computational mathematics, technology management, and education technology. The main focus of the journal is on practical research and outcomes.</p> <p>LC-JSTEM (ISSN: 2708-7123) was inaugurated on 1st January 2020. The journal is published online quarterly, in the months of April, July, October, and January, by <a href="" target="_blank" rel="noopener">Logical Creations Education Research Institute (LC-ERI)</a>, Quetta-Pakistan.</p> <p>LC-JSTEM (ISSN: 2708-7123) is an open access, double-blind peer-reviewed journal, free for readers, and we provide supportive and accessible services for our authors throughout the publishing process.
LC-JSTEM recognizes the international influences on science, technology, and engineering platforms and their development.</p> <p>LC-JSTEM (ISSN: 2708-7123) provides an open access forum for scientists, scholars, researchers, and engineers to exchange their research work, technical notes, and survey results among professionals through publications.</p> Logical Creations Education Research Institute en-US LC International Journal of STEM (ISSN: 2708-7123) 2708-7123 <p>This work is licensed under a <a href="" target="_blank" rel="noopener">Creative Commons Attribution 4.0 International License (CC BY 4.0)</a>.</p> Enhanced Facial Expression Recognition via Deep Transfer Learning and Augmentation <p>Facial expression is one of the key elements of non-verbal communication, and facial expression recognition has major applications in surveillance, automation, health care, and education. Deep learning is important in many fields of computer vision due to its ability to process and analyze large volumes of data, extract features, and classify images correctly. This research empirically evaluates the performance of a pre-trained model on augmented datasets for facial expression recognition. The study includes preprocessing techniques, data augmentation, and transfer learning using the ResNet50 model. The experiments are conducted on a dataset containing images of three facial expressions: happy, sad, and surprised. The results indicate significant improvements in accuracy as the dataset grows and more preprocessing techniques are applied. In particular, Cubic Support Vector Machine (SVM) and Linear Cubic SVM consistently outperform the other classifiers, achieving an impressive accuracy of 99.7% on the augmented dataset. The research demonstrates the potential of data augmentation and preprocessing in enhancing facial expression recognition systems.</p> Akshay Kumer Dr. Junaid Babar Muhamamd Khalid Sadia Mujtaba Copyright (c) 2024 Akshay Kumer, Dr.
Junaid Babar, Muhamamd Khalid, Sadia Mujtaba 2024-01-06 2024-01-06 4 4 1 9 10.5281/zenodo.10594199 Producing of High Quality Colored Images using Scalable Image Processing Techniques <p>One of the many digital approaches that came from the image processing domain is image enhancement. These approaches are employed to enhance the perceptibility of images, to transform an image into a format more suitable for human or machine analysis, and to highlight intricate elements that might otherwise remain indistinct. The primary topic of this thesis is the utilization of the pseudo color approach, an image enhancement technique, to convert grayscale intensity images into color-coded images. This work investigates the various forms of pseudo color techniques that have been created in the past. Using the spectrum returned by the Fourier transform of the input image, the pseudo color method applies three distinct digital filters: a high pass filter, a band pass filter, and a low pass filter. The three filtered outputs are then fed to the Red, Green, and Blue channels of the CRT electron guns and projected onto the screen. Therefore, a comprehensive package has been developed to execute the procedures necessary for generating the colored image. This package comprises two primary components. The first facilitates the execution of Fourier transformations and filtering operations. In the second, a computed color table is used to mix the three Red, Green, and Blue components to produce and display the desired color, so that each pixel in the original image receives a new value matching its new color, creating a new colored image. Also, combining optimal partitioning and dynamic programming with a space-filling-curve representation of the image, we offer a novel algorithm for pseudo-coloring in this paper.
The algorithm permits the fine-to-coarse assignment of triplet colors to the pixels of an image, thereby producing a pseudo-colored image that preserves either structure or detail. This is accomplished by initially considering the original gray levels in the image and then systematically reducing them by optimal partitioning until a specific number is reached, which can include reducing the image to only two colors for a binary representation. The number of colors is output by the algorithm, and the specific allocation of colors is determined by the nature of the problem being addressed. Two sets of medical images are used to illustrate how the algorithm is applied.</p> Mohammed Rasool Jawad Copyright (c) 2024 Mohammed Rasool Jawad 2024-01-06 2024-01-06 4 4 10 24 10.5281/zenodo.10594220 Word-Graph Construction Techniques for Context Analysis <p>A Nomo-Word Graph Construction Analysis Method (NWGC-AM) is used to group the corresponding construction phrases into essential and non-essential citation groups. The graph resemblance metrics used in this work are Nomo Maximum Common Sub-graph Edge Resemblance (NMCS-NR), Maximum Common Subgraph Directed Edge Resemblance (MCS-DER), and Maximum Common Subgraph Undirected Edge Resemblance (MCS-UER).
The tests included five distinct classifiers: Random Forest, Naive Bayes, K-Nearest Neighbors (KNN), Decision Trees, and Support Vector Machines (SVM). The annotated dataset used for the studies comprised 361 citations. The Decision Tree classifier exhibits superior performance, attaining an accuracy rate of 0.98.</p> Rafique Yasir Wu Jue Mushtaq Muhammad Umer Atif Nazma Copyright (c) 2024 Rafique Yasir, Wu Jue, Mushtaq Muhammad Umer, Atif Nazma 2024-01-06 2024-01-06 4 4 25 35 10.5281/zenodo.10594263 Understanding Public Opinions on Social Media about ChatGPT – A Deep Learning Approach for Sentiment Analysis <p>User-generated multimedia content, including photos, text, videos, and audio, is becoming increasingly common on social networking sites as a way for individuals to express their thoughts. One of the largest and most advanced social media platforms discussing ChatGPT is Twitter, because Twitter updates are produced constantly and have a limited lifespan. This research presents a deep learning method for sentiment analysis of Twitter data about ChatGPT. The study used 4-class labels (sadness, joy, fear, and anger) from public Twitter data stored in the Kaggle database. The proposed deep learning strategy significantly improves the efficiency metrics through the use of an attention layer in current LSTM-RNN approaches, increasing accuracy by 20%, precision by 10-12%, and recall by only 12-13%. Out of 18000 ChatGPT-related tweets, positive, neutral, and negative sentiments accounted for 45%, 30%, and 35%, respectively.
It is determined that the suggested deep learning technique for ChatGPT review sentiment categorization is effective, realistic, and fast to deploy.</p> Rafique Yasir Wu Jue Mushtaq Muhammad Umer Rafique Bilal Atif Nazma Kanwal Sania Copyright (c) 2024 Rafique Yasir, Wu Jue, Mushtaq Muhammad Umer, Rafique Bilal, Atif Nazma, Kanwal Sania 2024-01-06 2024-01-06 4 4 36 50 10.5281/zenodo.10594299 Indoor Smoking Detection Based on YOLO Framework with Infrared Image <p>This study recommends combining the efficacy of YOLO with the greater visibility provided by infrared imaging to create a better indoor smoking detection system. The YOLO framework divides images into a grid and predicts bounding boxes and class probabilities simultaneously, making it a natural choice for its real-time object detection capabilities. The approach improves its robustness by identifying heat signatures associated with smoking sessions, overcoming limitations posed by low-light or occluded conditions. The addition of infrared images significantly improved the system's performance in low-light conditions. A dual-spectrum thermal camera is used in the complete indoor smoking detection system to obtain a large collection of infrared images representing various indoor locations with documented smoking episodes. During the training phase, data augmentation processes such as random rotations, flips, and brightness and contrast variations were used to improve the system's performance. The CIoU loss function significantly improved the system's localization accuracy, reducing false positives and improving overall detection performance.
The combination of YOLO and infrared imaging, in conjunction with data augmentation and the CIoU loss function, not only improves indoor smoking detection but also demonstrates the benefits of merging several technologies in the development of more effective and adaptive systems.</p> Abdullah Al Nayeem Mahmud Lavu Hua Zhang Hao Zhao MD Toufik Hossain Copyright (c) 2024 Abdullah Al Nayeem Mahmud Lavu, Hua Zhang, Hao Zhao, MD Toufik Hossain 2024-01-06 2024-01-06 4 4 51 71 10.5281/zenodo.10594345
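For readers unfamiliar with the CIoU loss mentioned in the last abstract, a minimal sketch of the standard Complete-IoU formula (IoU minus a center-distance penalty minus an aspect-ratio consistency term) is given below. This illustrates the published CIoU definition in plain Python, not the authors' implementation; the function name and epsilon constants are assumptions made for the example.

```python
import math

def ciou_loss(box_a, box_b):
    """Complete-IoU (CIoU) loss between two axis-aligned boxes
    given as (x1, y1, x2, y2). Identical boxes give a loss near 0;
    disjoint boxes give a loss above 1."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    eps = 1e-9  # avoids division by zero for degenerate boxes

    # Intersection-over-union term.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Squared center distance, normalized by the squared diagonal
    # of the smallest box enclosing both inputs.
    cx_a, cy_a = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    cx_b, cy_b = (bx1 + bx2) / 2, (by1 + by2) / 2
    center_dist2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    enc_w = max(ax2, bx2) - min(ax1, bx1)
    enc_h = max(ay2, by2) - min(ay1, by1)
    diag2 = enc_w ** 2 + enc_h ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1 + eps))
        - math.atan((bx2 - bx1) / (by2 - by1 + eps))
    ) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - (iou - center_dist2 / diag2 - alpha * v)
```

Because the center-distance penalty stays active even when the boxes do not overlap, gradients keep pulling predicted boxes toward the target, which is what the abstract credits with reducing false positives and improving localization.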