Does Lack of Knowledge and Hardship of Information Access Signify Powerful AI? A Large Language Model Perspective

Authors

  • Idrees A. Zahid, Electrical & Computer Engineering, Gannon University, Erie, PA, USA
  • Shahad Sabbar Joudar, Information Technology Center, University of Technology, Baghdad, Iraq

DOI:

https://doi.org/10.58496/ADSA/2023/014

Keywords:

Large Language Model, Artificial Intelligence, Human Feedback, Reinforcement Learning, Digital Corpus

Abstract

Large Language Models (LLMs) are evolving and expanding enormously. As LLMs continue to improve, they will tackle more complex and sophisticated tasks and handle varied queries with greater precision. Emerging LLMs in the field of Artificial Intelligence (AI) affect online digital content. This paper draws an association between the scarcity of digital corpora and the improvement of LLMs, and discusses the impact this scarcity will have on the field. We anticipate more powerful LLMs, in particular an increase in releases of LLMs trained with Reinforcement Learning from Human Feedback (RLHF). More precise RLHF-based LLMs will continue to be developed and released in alternative versions.




Published

2023-12-12

How to Cite

Idrees A. Zahid, & Shahad Sabbar Joudar. (2023). Does Lack of Knowledge and Hardship of Information Access Signify Powerful AI? A Large Language Model Perspective. Applied Data Science and Analysis, 2023, 150–154. https://doi.org/10.58496/ADSA/2023/014

Section

Articles