Mesopotamian Journal of Big Data https://mesopotamian.press/journals/index.php/bigdata <p style="text-align: justify;">Attention scholars and researchers in the Big Data realm! The Mesopotamian Journal of Big Data, already with three published issues, invites your cutting-edge contributions to shape the future of this field. Our platform aims to disseminate groundbreaking discoveries and transformative applications in Big Data, emphasizing data analytics, machine learning, and related areas. We encourage interdisciplinary collaboration to drive advancements in this rapidly evolving domain. Your expertise is crucial: join us in this impactful journey. Submit your research to the Mesopotamian Journal of Big Data and be a part of the vanguard shaping knowledge in this transformative field.</p> Mesopotamian Academic Press en-US Mesopotamian Journal of Big Data 2958-6453 Assessing the Transformative Influence of ChatGPT on Research Practices among Scholars in Pakistan https://mesopotamian.press/journals/index.php/bigdata/article/view/234 <p>This article investigates the transformative impact of ChatGPT on research practices within the scholarly community in Pakistan. ChatGPT, a powerful AI language model, has attracted significant attention for its potential to improve academic research. Survey data were gathered via a structured questionnaire distributed to researchers in Pakistan: 278 questionnaires were distributed to a randomly chosen sample, of which 223 were returned. Descriptive statistics were calculated using SPSS. Results indicated that 90% of scholars were familiar with the use of ChatGPT in research activities; 86% used ChatGPT 3.5 (the basic version) for their research, while only 14% used ChatGPT 4 (the Plus version). Overall, 46% of respondents reported being satisfied with the use of ChatGPT in research activities.
The article discusses how ChatGPT's natural language processing capabilities have advanced literature reviews, data analysis, and content generation, thereby saving time and fostering greater productivity. Moreover, it examines how the tool's accessibility and affordability have democratized research, making it more inclusive and open to a broader range of scholars. By shedding light on these critical aspects, this article provides valuable insights into the evolving landscape of research practices in Pakistan and highlights the potential for ChatGPT to revolutionize academic scholarship in the digital age.</p> Nayab Arshad Mehran Ullah Baber Adnan Ullah Copyright (c) 2024 Adnan Ullah, Mehran Ullah Baber, Nayab Arshad https://creativecommons.org/licenses/by/4.0 2024-01-10 2024-01-10 2024 1 10 10.58496/MJBD/2024/001 Leveraging AI and Big Data in Low-Resource Healthcare Settings https://mesopotamian.press/journals/index.php/bigdata/article/view/337 <p>Big data and artificial intelligence are game-changing technologies for under-resourced healthcare systems because they help optimize the entire supply chain and deliver more precise patient-outcome information. Machine learning approaches that have recently grown in popularity include deep learning models, which have revolutionized healthcare in recent years as clinical data have become increasingly complex. Machine learning is an essential data-analysis procedure, offering efficient and effective methods for extracting hidden information from volumes of data that conventional analytics would take too long to process. Recent years have also seen the expansion of advanced intelligent systems able to learn about clinical treatments and glean untapped medical knowledge from vast quantities of data in drug discovery and chemistry.
The aim of this chapter is, therefore, to assess which big data and artificial intelligence approaches are prevalent in healthcare systems by investigating the most advanced big data structures, applications, and industry trends available today. First and foremost, the purpose is to provide a comprehensive overview of how artificial intelligence and big data models deployed in healthcare solutions can fill the gap between the limited human coverage of machine learning approaches and the complexity of healthcare data. Moreover, current artificial intelligence technologies, including generative models, Bayesian deep learning, reinforcement learning, and self-driving laboratories, are also increasingly being used for drug discovery and chemistry. Finally, the work presents the existing open challenges and future directions in the drug formulation development field. To this end, the review covers published algorithms and automation tools for artificial intelligence applied to large-scale data in healthcare.</p> Ahmed Hussein Ali Saad Ahmed Dheyab Abdullah Hussein Alamoodi Aws Abed Al Raheem Magableh Yuantong Gu Copyright (c) 2024 Ahmed Hussein Ali, Saad Ahmed Dheyab, Luís Martínez, Iman Mohamad Sharaf, Abdullah Hussein Alamoodi, Aws Abed Al Raheem Magableh, Witold Pedrycz, Yuantong Gu https://creativecommons.org/licenses/by/4.0 2024-02-14 2024-02-14 2024 11 22 10.58496/MJBD/2024/002 Enhancing XML-based Compiler Construction with Large Language Models: A Novel Approach https://mesopotamian.press/journals/index.php/bigdata/article/view/343 <p>Considering the prevailing role of Large Language Model (LLM) applications and the benefits of XML in a compiler context, this manuscript explores the synergistic integration of Large Language Models with XML-based compiler tools and advanced computing technologies, marking a significant stride toward redefining compiler construction and data representation paradigms.
As computing power and internet proliferation advance, XML emerges as a pivotal technology for representing, exchanging, and transforming documents and data. This study builds on the foundational work of Chomsky's context-free grammars (CFGs), recognized for their critical role in compiler construction, to address and mitigate the speed penalties associated with traditional compiler systems and parser generators through the development of an efficient XML parser generator employing compiler techniques. Our research employs a methodical approach to harness the sophisticated capabilities of LLMs alongside XML technologies. The key is to automate grammar optimization, facilitate natural language processing capabilities, and pioneer advanced parsing algorithms. To demonstrate their effectiveness, we run thorough experiments and compare the results with other techniques, highlighting the efficiency, adaptability, and user-friendliness of the XML-based compiler tools enabled by these integrations. The targets are the elimination of left-recursive grammars and the development of a global XML schema for LL(1) grammars to support their construction. The findings of this research not only underscore the significance of these innovations in the field of compiler construction but also indicate a paradigm shift toward the use of AI technologies and XML in resolving traditional programming issues. The outlined methodology serves as a roadmap for future research and development in compiler technology, paving the way for open-source software across all fields and gradually ushering in a new era of compiler technology featuring better efficiency, adaptability, and the processing of all CFGs through existing XML utilities on a global basis.</p> Idrees A. Zahid Shahad Sabbar Joudar Copyright (c) 2024 Idrees A.
Zahid, Shahad Sabbar Joudar https://creativecommons.org/licenses/by/4.0 2024-03-20 2024-03-20 2024 23 39 10.58496/MJBD/2024/003 Agent-Interacted Big Data-Driven Dynamic Cartoon Video Generator https://mesopotamian.press/journals/index.php/bigdata/article/view/366 <p>This study presents a novel method for animating videos using three Kaggle cartoon-face datasets. Dynamic interactions between cartoon agents and random backgrounds, together with Gaussian blur, rotation, and noise addition, enhance the cartoon visuals. The approach also evaluates video quality and animation design by calculating the average and standard deviation of the backdrop colour, ensuring visually appealing material. The technique uses massive datasets to generate attractive animated videos for entertainment, teaching, and marketing.</p> Yasmin Makki Mohialden Abbas Akram khorsheed Nadia Mahmood Hussien Copyright (c) 2024 Yasmin Makki Mohialden, Abbas Akram khorsheed, Nadia Mahmood Hussien https://creativecommons.org/licenses/by/4.0 2024-04-17 2024-04-17 2024 40 47 10.58496/MJBD/2024/004 MLP and RBF Algorithms in Finance: Predicting and Classifying Stock Prices amidst Economic Policy Uncertainty https://mesopotamian.press/journals/index.php/bigdata/article/view/386 <p>In the realm of stock market prediction and classification, the use of machine learning algorithms has gained significant attention. In this study, we explore the application of Multilayer Perceptron (MLP) and Radial Basis Function (RBF) algorithms in predicting and classifying stock prices, specifically amidst economic policy uncertainty. Stock market fluctuations are greatly influenced by economic policies implemented by governments and central banks. These policies can create uncertainty and volatility, which in turn make accurate prediction and classification of stock prices more challenging.
By leveraging MLP and RBF algorithms, we aim to develop models that can effectively navigate these uncertainties and provide valuable insights to investors and financial analysts. The MLP algorithm, based on artificial neural networks, is able to learn complex patterns and relationships within financial data. The RBF algorithm, on the other hand, utilizes radial basis functions to capture non-linear relationships and identify hidden patterns within the data. By combining these algorithms, we aim to enhance the accuracy of stock price prediction and classification models. The results showed that both MLP and RBF predicted stock prices well for a group of countries using an index reflecting the impact of news on economic policy and expectations, with the MLP algorithm proving its ability to predict sequential data. Countries were also classified according to stock price data and economic policy uncertainty, allowing us to determine the best country to invest in according to the data. The uncertainty surrounding economic policy is what makes stock price forecasting so crucial: investors must consider the degree of economic policy uncertainty and how it affects asset prices when deciding how to allocate their assets.</p> Bushra Ali Khder Alakkari Mostafa Abotaleb Maad M Mijwil Klodian Dhoska Copyright (c) 2024 Bushra Ali, Khder Alakkari, Mostafa Abotaleb, Maad M Mijwil, Klodian Dhoska https://creativecommons.org/licenses/by/4.0 2024-05-11 2024-05-11 2024 48 67 10.58496/MJBD/2024/005 Generalized Time Domain Prediction Model for Motor Imagery-based Wheelchair Movement Control https://mesopotamian.press/journals/index.php/bigdata/article/view/429 <p>Wheelchair control based on a motor imagery brain-computer interface (BCI-MI) is, in principle, an appropriate method for completely paralyzed people with a healthy brain.
In a BCI-based wheelchair control system, pattern recognition in terms of preprocessing, feature extraction, and classification plays a significant role in avoiding recognition errors, which can lead to the initiation of a wrong command that puts the user in an unsafe condition. Therefore, this research aims to create a time-domain generic pattern recognition model (GPRM) of two-class EEG-MI signals for use in a wheelchair control system.<br>The GPRM has the advantage of being applicable to unseen subjects rather than a single one. It was developed, evaluated, and validated using two datasets, namely the BCI Competition IV and Emotiv EPOC datasets. Initially, fifteen time windows were investigated with seven machine learning methods to determine the optimal time window as well as the best classification method with strong generalizability. The experimental results revealed that the duration of the EEG-MI signal in the range of 4–6 seconds has a high impact on classification accuracy when the signal features are extracted using five statistical methods. Additionally, the results demonstrate a one-second latency after each command cue when using the eight-second EEG-MI signal of the Graz protocol adopted in this study. This latency is inevitable because it is practically impossible for subjects to imagine their motor imagery hand movement instantly; at least one second is required for them to prepare to initiate it. Practically, the five statistical methods are efficient and viable for decoding the EEG-MI signal in the time domain. The GPRM based on the LR classifier achieved an impressive classification accuracy of 90%, which was validated on the Emotiv EPOC dataset.
The GPRM developed in this study is highly adaptable and recommended for deployment in real-time EEG-MI-based wheelchair control systems.</p> Z.T. Al-Qaysi M. S Suzani Nazre bin Abdul Rashid Reem D. Ismail M.A. Ahmed Rasha A. Aljanabi Veronica Gil-Costa Copyright (c) 2024 Z.T. Al-Qaysi , M. S Suzani , Nazre bin Abdul Rashid , Reem D. Ismail , M.A. Ahmed , Rasha A. Aljanabi, Veronica Gil-Costa https://creativecommons.org/licenses/by/4.0 2024-06-15 2024-06-15 2024 68 81 10.58496/MJBD/2024/006
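The final abstract's pipeline (windowed two-class EEG-MI trials, five time-domain statistical features per channel, a classical classifier) can be sketched as follows. The abstract does not name its five statistical methods, its preprocessing, or its exact classifier configuration, so everything below is an illustrative assumption, not the authors' implementation: the five features (mean, standard deviation, skewness, kurtosis, RMS) are common time-domain choices, "LR" is read as logistic regression, and the EEG data are synthetic stand-ins with made-up dimensions.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def time_domain_features(trial):
    """Five common time-domain statistics per EEG channel.

    `trial` has shape (n_channels, n_samples); the paper does not name
    its five statistical methods, so these are placeholders.
    """
    return np.concatenate([
        trial.mean(axis=1),                     # mean
        trial.std(axis=1),                      # standard deviation
        skew(trial, axis=1),                    # skewness
        kurtosis(trial, axis=1),                # kurtosis
        np.sqrt((trial ** 2).mean(axis=1)),     # root mean square
    ])

# Synthetic stand-in for a two-class EEG-MI dataset: 200 trials,
# 8 channels, 2 s of the 4-6 s window sampled at 128 Hz (assumed rates).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 256
y = rng.integers(0, 2, n_trials)
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
X_raw[y == 1] += 0.3  # class offset so the sketch has something to learn

# 5 features x 8 channels = 40-dimensional feature vector per trial.
X = np.array([time_domain_features(t) for t in X_raw])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

A subject-generic model in the paper's sense would train on trials pooled from several subjects and test on a held-out subject; the split above is a simple trial-wise split for brevity.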