E-ISSN:2250-0758
P-ISSN:2394-6962

Research Article

Retrieval-Augmented Generation

International Journal of Engineering and Management Research

2025 Volume 15 Number 5 October
Publisher: www.vandanapublications.com

CV Summary and Professional Recommendations Using RAG and NLP

Sarker U1*, Biswas A2, Saurabh3, Vaishnav L4, Rathod MV5
DOI:10.5281/zenodo.17645956

1* Utsha Sarker, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India.

2 Archy Biswas, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India.

3 Saurabh, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India.

4 Lalit Vaishnav, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India.

5 Myla Vizwal Rathod, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India.

Job searching can be a tedious affair, as candidates must tailor their resumes to fit every job posting. This article presents an AI-driven approach that reduces the effort of drafting resumes, choosing keywords, and matching them precisely with job postings through Retrieval-Augmented Generation (RAG) and NLP. The system merges a transformer-based LLM with semantic search and vector embeddings to quickly identify the roles, qualifications, experience, and skills a user highlights in their documents. Keyword extraction is also aligned with job market trends to increase application success rates. The job matching module uses FAISS-based semantic search, ranking opportunities by relevance and skill match. Large-scale experimentation with diverse sets of resume and job posting data confirms the effectiveness of the system, which reaches 92% accuracy in job matching and skill extraction. By bridging the gap between recruiters and job candidates, the system streamlines candidate profiling, making the hiring process more accurate, precise, and data-driven.

Keywords: Retrieval-Augmented Generation (RAG), Natural Language Processing (NLP), Keyword Extraction, Job Matching, Semantic Search, Transformer-Based LLM, FAISS

Corresponding Author: Utsha Sarker, Department of AIT-CSE, Apex Institute of Technology, Chandigarh University, Punjab, India. Email:
How to Cite this Article: Sarker U, Biswas A, Saurabh, Vaishnav L, Rathod MV. CV Summary and Professional Recommendations Using RAG and NLP. Int J Engg Mgmt Res. 2025;15(5):125-132.
Available From: https://ijemr.vandanapublications.com/index.php/j/article/view/1815

Manuscript Received: 2025-09-02 | Review Round 1: 2025-09-18 | Review Round 2: 2025-10-04
Conflict of Interest: None | Funding: Nil | Ethical Approval: Yes | Plagiarism X-checker: 5.32

© 2025 by Sarker U, Biswas A, Saurabh, Vaishnav L, Rathod MV and Published by Vandana Publications. This is an Open Access article licensed under a Creative Commons Attribution 4.0 International License https://creativecommons.org/licenses/by/4.0/ unported [CC BY 4.0].


1. Introduction

Job searching today is more competitive than ever, requiring candidates to optimize resumes to match both recruiter expectations and hiring automation. Recruitment now relies heavily on Applicant Tracking Systems (ATS), which filter resumes based on the relevance of keywords, competencies, and experience. Even highly qualified candidates can fail to reach the hiring stage because their resumes lack the exact keywords these systems prioritize [1]. Manually tailoring resumes to different jobs is not only labor-intensive but also ineffective, making it difficult for candidates to apply to multiple positions. To address this need, we present an AI-driven platform that leverages Retrieval-Augmented Generation (RAG) and Natural Language Processing (NLP) to enhance resume summarization, keyword extraction, and job matching. The tool uses a transformer-based LLM to analyze resumes and extract skills, keywords, and experience. Keyword extraction helps candidates align their resumes with industry terminology and the current job market, improving their chances of passing ATS filters [2].

Beyond enhancing resumes, job matching is another important part of the recruitment process. Traditional keyword-based search mechanisms often misjudge the actual relevance between job postings and resumes, leaving candidates with recommendations that mismatch their skills and experience. Our solution offers an intelligent and scalable recruitment pipeline in which resume creation becomes more effective through information retrieval and NLP. Resumes and cover letters created with the help of NLP and semantic analysis open the doors to the right opportunities for candidates. As a result, recommendations become more accurate, efficient, and data-driven, increasing transparency and effectiveness throughout the recruitment process [3].

A. Retrieval-Augmented Generation (RAG)

RAG is a technique developed to enhance the capability of language models by incorporating information retrieval mechanisms into text generation. Based on the query entered, the system retrieves relevant data from comprehensive knowledge sources such as databases, documents, and websites.

This retrieval step locates the pieces of information most helpful within the context of the question. The retrieved content then feeds into a language model, which combines the user's question with it to create a complete, meaningful, and accurate response.

This approach guarantees that the results are not only well-phrased but also factually accurate, given they are based on real and up-to-date sources instead of relying completely on pre-existing training data. The entire design of RAG allows it to minimize inaccuracies and outdated information, making it very useful in domains requiring high precision and domain-specific knowledge, such as academic research tools, customer support systems, medical advisory, and legal text review. By blending live data retrieval with generative capabilities, RAG effectively closes the gap between static model memory and evolving real-world information, resulting in more dependable, informed, and context-aware outputs [4].
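The retrieve-then-generate flow described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: word-overlap scoring stands in for real embedding-based retrieval, and a prompt template stands in for the language model call; all function and variable names are ours.

```python
def retrieve(query, documents, k=2):
    """Score each document by word overlap with the query (a stand-in
    for embedding-based retrieval) and return the top-k matches."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(d.lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

def build_prompt(query, retrieved):
    """Ground the generator in the retrieved passages: the LLM answers
    from this augmented prompt rather than from memory alone."""
    context = "\n".join(f"- {d}" for d in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Python developer with 5 years of backend experience.",
    "Graphic designer skilled in Adobe Illustrator.",
    "Data engineer experienced in Python and SQL pipelines.",
]
prompt = build_prompt("Python backend experience",
                      retrieve("Python backend experience", docs))
```

In a production pipeline the retrieved context would come from a vector store and the prompt would be sent to the LLM, but the grounding mechanism is the same.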

Figure 1: Understanding RAG

B. Natural Language Processing

NLP is the branch of AI that enables computers to understand and interact with human language. It brings together knowledge from linguistics and machine learning/deep learning to interpret and produce text or speech. NLP helps machines classify text, detect sentiment, translate between languages, and extract key details from content. Tokenization, named entity recognition, part-of-speech tagging, and word embeddings are some of the techniques that help models grasp not only the structure but also the deeper meaning of words.

Recent advances in transformer-based models such as BERT and GPT have substantially elevated NLP applications, making them more precise and context-aware. These models power chatbots, search engines, recommendation systems, and other AI applications that require a deep analysis of human language.


2. Literature Survey

Öykü Berfin Mercan et al. [5] present text summarization techniques for resumes using state-of-the-art NLP transformers and LSTM networks. The study analyzes the results of pre-trained models such as T5, Pegasus, BART, and BART-Large on corpora including XSum, CNN/Daily Mail, Amazon Fine Food Review, and a proprietary corpus of resumes. The proprietary resume dataset, consisting of 75 resumes, contains vital candidate information such as language, education, experience, personal details, and skills. The comparison of summarization techniques finds that BART-Large, fine-tuned on the custom resume dataset, is the most effective. This research illustrates the potential of deep learning models for automating resume summarization to make job applications easier and more organized.

Manohar M et al. [6] propose an abstractive text summarization model, Decoder Attention with Pointer Network (DA-PN), to improve the accuracy of summaries.

The paper identifies the rising trend of unstructured text data on the web and social media and the need for effective summarization techniques. The study uses a Kaggle dataset of 1,735 resumes, using semantic data transformations and deep learning techniques to increase the quality of the summarization. The DA-PN model introduces a coverage mechanism to prevent repetition of words and error propagation in the summaries. Performance evaluation with the use of ROUGE scores shows that DA-PN achieves a score of 26.28, outperforming existing techniques. The study demonstrates the effectiveness of the application of deep learning techniques to structured text summarization, opening the way to the improvement of the automation of information extraction and resume analysis.

Evrard Stency Larys Tsoumou et al. [7] provide an extractive multi-document summarization technique based on the Fuzzy Logic methodology to increase the effectiveness of text summarization. The study identifies the increasing need for automatic summarization with the massive increase in web-based textual information. The technique uses a Fuzzy ontology extraction technique to aid multi-document summarization with a special focus on Gabonese documents.

To validate its efficacy, the study compares the approach with other summarization techniques using Timestamp as a benchmark. The results show that the Fuzzy-based technique is efficient at generating summaries with better information retention, establishing its potential for effective text summarization across languages and domain-specific applications.

Rakhi Dumne et al. [8] present automatic text summarization based on the TextRank algorithm, an extractive approach that aims to highlight the most valuable sentences of a document. It covers single-document and multi-document summarization and can be extended to full-document or web page summarization. TextRank works similarly to Google's PageRank, scoring sentences by their relevance; the highest-ranked sentences are then selected to provide a compact and coherent summary that stays true to the original meaning. On the whole, this research shows how graph-based ranking algorithms like TextRank can make large volumes of text useful by rapidly identifying and extracting information relevant to the subject.

Nikhil Zade et al. [9] describe an NLP-driven system that merges text summarization and machine translation. It uses extractive summarization strategies to condense large chunks of text while keeping the necessary information intact. The aim is to provide short, important summaries and accurate translations, making it much easier to process and understand long content without losing its core message. The paper presents a Graphical User Interface (GUI)-based application, implemented in Python with the Tkinter library, for document summarization. The proposed system also incorporates image processing for user-verification security features using OpenCV and PIL.
The proposed system performs text summarization and language translation using powerful NLP algorithms, with performance in the range of 91% to 95%. This work identifies AI-based text summarization combined with secure verification mechanisms as a scalable and dynamic solution for current document handling and information processing needs.

Ashutosh Ray et al. [10] present a comparative overview of extractive text summarization (ETS) techniques using NLP, comparing the effectiveness of ETS with abstractive text summarization (ATS).


The study indicates that unstructured text data has grown complicated over time and automated summarization has become an indispensable aid for handling these large volumes of information. Unlike the complex generative models of ATS, ETS operates by choosing important sentences or tokens of a document, hence being a computationally more tractable technique. The study also presents new results and state-of-the-art methods, providing insight into future developments in automated text summarization.

A. Research Gaps

Despite significant advances in abstractive and extractive text summarization with the aid of deep learning and NLP, various research gaps remain. Existing research focuses on abstractive or extractive summarization, but few studies explore hybrid methods that combine the strengths of both to attain accuracy and coherence. While models like BART-Large, DA-PN, and TextRank have worked effectively in summarization, the use of such models with domain-specific data, such as resumes, is limited.

The lack of large, structured collections of resumes for training constrains the performance of existing models, leading to potential bias and loss of generalizability. While the ROUGE score is commonly used for evaluation, it cannot effectively capture the semantic quality of generated summaries, so more context-aware evaluation metrics must be introduced. Another significant gap is semantic job matching: conventional summarization methods cannot match candidate profiles with job descriptions effectively because they rely on surface-level keyword matching rather than a deep understanding of context. Bridging these gaps involves integrating Retrieval-Augmented Generation (RAG), semantic search, and transformer-based models to attain better generalization, contextual relevance, and real-world usability in resume summarization and job recommendation.

B. Research Objectives

The core of this research is to build an AI-enabled system for the summarization of curriculum vitae, using Retrieval-Augmented Generation and Natural Language Processing. It focuses on how to efficiently isolate crucial qualifications, professional competencies, and relevant work history.

The present study further develops existing approaches to keyword extraction by embracing semantic search and transformer-based LLMs, ensuring that the CV is precisely tuned to resonate with the modern dynamics of the job market and the specifications of applicant tracking systems. The study also aims to enhance the accuracy of job matching by introducing FAISS-based semantic retrieval, ranking job postings by the relevance of required skills and industry-specific requirements rather than by simple keyword matches.

To address the limitations of existing summarization models, the study fine-tunes domain-specific LLMs on structured and unstructured resume data to achieve better generalizability and reduce bias in job recommendation. The proposed system is evaluated on real-world resume and job posting data, supplemented with context-aware evaluation metrics beyond the conventional ROUGE score. The eventual aim of the study is to build a scalable, automated system that bridges the gap between employment specialists and job candidates, streamlining the recruiting process to make it more accurate, precise, and data-driven.

3. Methodology

In this study, we present a Retrieval-Augmented Generation (RAG) LLM-based system designed to optimize the job search process by automating resume summarization, keyword extraction, and job matching. Our approach increases the relevance and accuracy of job recommendations by combining effective retrieval with a state-of-the-art language model. It works as a structured pipeline, starting with the assembly and preprocessing of resume data, followed by embedding-based retrieval and AI-powered response generation. We store resume and job-related data in a vector database to support fast and accurate searches. When a user submits a resume or a job description, semantic search using FAISS retrieves the most contextually relevant data concerning skills and qualifications. The retrieved content is fed into a transformer-based language model, which produces concise resume summaries, optimized keyword suggestions, and tailored job recommendations. This improves application outcomes and matches jobs with suitable candidates, streamlining the entire recruitment pipeline.


A. Data Collection

Our AI-based resume-job matching system is built on a foundation of high-quality, real-time resume and job posting data. For robustness, data is obtained from a variety of sources, such as online job portals (LinkedIn, Indeed, Glassdoor, and Monster), employer career pages, and hiring databases. Web scraping powered by Beautiful Soup and Scrapy is used to extract structured information such as candidate experience, competencies, job postings, and employer requirements. Real-time job platform APIs facilitate live updates of job postings and resumes. Optical character recognition (OCR) is also used to read resumes in various formats (PDF, DOCX, TXT), supporting diverse document structures.
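The paper names Beautiful Soup and Scrapy for scraping; as a minimal stdlib-only sketch of the same idea, the parser below pulls a job title and skill list out of HTML tagged with hypothetical CSS classes (`job-title`, `skill`). Real job portals use their own markup, and their terms of service govern what may be scraped.

```python
from html.parser import HTMLParser

class JobPostingParser(HTMLParser):
    """Collect text from elements with illustrative class names
    ('job-title', 'skill'); actual portal markup will differ."""
    def __init__(self):
        super().__init__()
        self._current = None   # which field the next text chunk belongs to
        self.title = None
        self.skills = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "job-title":
            self._current = "title"
        elif cls == "skill":
            self._current = "skill"

    def handle_data(self, data):
        if self._current == "title":
            self.title = data.strip()
        elif self._current == "skill":
            self.skills.append(data.strip())
        self._current = None   # consume the pending field

html = """
<div><h2 class="job-title">Backend Engineer</h2>
<ul><li class="skill">Python</li><li class="skill">SQL</li></ul></div>
"""
parser = JobPostingParser()
parser.feed(html)
```

Beautiful Soup replaces this boilerplate with `soup.find(class_="job-title")`-style selectors, but the extraction logic is the same.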

Figure 2: AI-Powered Resume-Job Matching System Architecture

B. Data Preprocessing

Preprocessing of the data ensures uniformity and enhances retrieval. Text is normalized by converting it to lowercase, removing special characters, and stripping extra spaces. Tokenization and stop-word removal eliminate unwanted terms, and lemmatization reduces words to their root forms. Named Entity Recognition (NER) is used to identify key information such as skills, job positions, and companies. Resumes and job descriptions are then broken down into smaller chunks to facilitate embedding-based retrieval, improving the accuracy of job-resume matching.
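A minimal pure-Python sketch of these steps (normalization, tokenization, stop-word removal, chunking) might look as follows. Lemmatization and NER would in practice come from a library such as spaCy, and the stop-word list here is a tiny illustrative subset.

```python
import re

# Illustrative subset only; real pipelines use a full stop-word list.
STOP_WORDS = {"a", "an", "the", "in", "of", "and", "with", "at", "for"}

def normalize(text):
    """Lowercase, strip special characters, collapse whitespace."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def tokenize(text):
    """Split normalized text and drop stop words."""
    return [t for t in normalize(text).split() if t not in STOP_WORDS]

def chunk(tokens, size=50, overlap=10):
    """Fixed-size overlapping chunks, ready for embedding."""
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]

tokens = tokenize("Senior Engineer with 7+ years of experience in Python & SQL.")
```

Each chunk, rather than the whole document, is later embedded, which keeps long resumes retrievable at fine granularity.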

C. Embedding-Based Retrieval

Embedding-based retrieval projects resumes and job descriptions into a high-dimensional vector space to support efficient similarity search. Resumes and job postings are converted into dense vector representations by transformer-based embeddings such as Sentence-BERT.

Formally, given a document $d_i$, its corresponding embedding vector is represented as:

$$e_{d_i} = f(d_i)$$

where $f(\cdot)$ is the embedding function that converts text into a numerical vector.

Given a job query $q$, it is embedded into a vector $e_q$. The similarity between the job query and each resume in the database is then computed as the cosine similarity:

$$\mathrm{sim}(q, d_i) = \cos(e_q, e_{d_i}) = \frac{e_q \cdot e_{d_i}}{\|e_q\|\,\|e_{d_i}\|}$$
The top k resumes with the highest similarity scores are fetched, and relevant matching between candidates and jobs is returned.

Contextual meaning beyond keyword matching can thus be captured by the system, thereby improving both job recommendation accuracy and retrieval quality.
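The cosine-similarity retrieval described by the formulas above reduces to a few lines of Python. The hand-written vectors here are toy stand-ins for the Sentence-BERT embeddings the real system would produce.

```python
import math

def cosine_similarity(u, v):
    """sim(q, d) = (e_q · e_d) / (||e_q|| ||e_d||), as in the formula above."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=2):
    """Return the indices of the k most similar document vectors."""
    scores = [(cosine_similarity(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for s, i in sorted(scores, reverse=True)[:k]]
```

For example, `top_k([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], k=2)` returns index 1 first (identical direction), then index 2 (45° away).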


D. Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation enhances the accuracy and contextual appropriateness of responses by pairing a language model with a retrieval system. When matching resumes to jobs, RAG retrieves resumes that align with a given job query and enriches the generated recommendation or summary with the retrieved information.

Formally, let D = {d1, d2, . . . , dn} be the set of all resumes, where each document di is embedded into a vector space as:

$$e_{d_i} = f(d_i)$$
Given a job query q, its embedding is computed as:

$$e_q = f(q)$$
The retrieval stage selects the top-$k$ resumes with the highest cosine similarity:

$$R(q) = \operatorname{top-}k\,\{\, d_i \in D : \mathrm{sim}(e_q, e_{d_i}) \geq \delta \,\}$$
where δ is a pre-defined similarity threshold.

In the generation step, a language model G takes the retrieved documents and the job query as input to generate a context-aware result:

$$r = G(q, R(q))$$
where r is the generated proposal or summary.

By incorporating relevant resume information during text generation, RAG ensures that responses are more factually accurate, coherent, and personalized for job matching.
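Under these definitions, the retrieval step $R(q)$ with threshold $\delta$ and the generation step $G$ can be sketched as follows. The template function is a stand-in for the actual LLM call, and the two-dimensional vectors are toy embeddings.

```python
import math

def _cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    n = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / n if n else 0.0

def retrieve_above_threshold(query_vec, resumes, delta=0.5, k=3):
    """R(q): top-k resumes whose similarity to the query meets delta."""
    scored = sorted(((_cos(query_vec, v), t) for t, v in resumes), reverse=True)
    return [t for s, t in scored[:k] if s >= delta]

def generate_stub(query_desc, retrieved):
    """Stand-in for the language model G(q, R(q))."""
    return f"For '{query_desc}', consider: " + "; ".join(retrieved)

resumes = [
    ("Python backend developer", [0.9, 0.1]),
    ("Graphic designer", [0.1, 0.9]),
    ("Data engineer, Python/SQL", [0.8, 0.3]),
]
answer = generate_stub("Python role",
                       retrieve_above_threshold([1.0, 0.0], resumes, delta=0.6))
```

The threshold keeps clearly irrelevant resumes (here, the designer) out of the generator's context, which is what grounds the final recommendation.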

E. Semantic Search

Semantic search identifies the meaning of the queries, rather than matching them exactly with keywords, to improve job-resume matching. In contrast to the traditional lexical search approach, which relies merely on surface-level text similarity, semantic search adopts an embedding-based retrieval approach for evaluating contextual relevance. Formally, given a job description query q, we compute its vector representation using a transformer-based encoder:

$$e_q = f(q)$$
Likewise, each resume di from the dataset D is embedded as:

$$e_{d_i} = f(d_i)$$
Cosine similarity is then used to compare the query with each resume:

$$\mathrm{sim}(q, d_i) = \frac{e_q \cdot e_{d_i}}{\|e_q\|\,\|e_{d_i}\|}$$
Resumes with the highest similarity scores are retrieved as potential matches:

$$M(q) = \operatorname{top-}k_{\,d_i \in D}\; \mathrm{sim}(q, d_i)$$
Context-aware matching: deep learning-based semantic search identifies candidates with appropriate skills rather than relying on literal keyword matching.
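As an end-to-end illustration, the sketch below runs a brute-force inner-product search over L2-normalized bag-of-words vectors. On normalized vectors the inner product equals cosine similarity, matching what FAISS's `IndexFlatIP` computes; the real system would use Sentence-BERT embeddings and a FAISS index for scale, so treat the toy embedding as a stand-in.

```python
import math
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words embedding, L2-normalized so that the inner
    product of two vectors equals their cosine similarity."""
    counts = Counter(text.lower().split())
    vec = [float(counts[w]) for w in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def search(query, corpus, vocab, k=2):
    """Exact inner-product search, the brute-force analogue of
    querying a FAISS IndexFlatIP over normalized vectors."""
    q = embed(query, vocab)
    scores = [(sum(a * b for a, b in zip(q, embed(doc, vocab))), doc)
              for doc in corpus]
    return [doc for s, doc in sorted(scores, reverse=True)[:k]]

vocab = ["python", "sql", "design", "backend", "illustrator"]
corpus = [
    "python backend developer",
    "illustrator design portfolio",
    "python sql data pipelines",
]
results = search("backend python role", corpus, vocab, k=2)
```

Swapping `embed` for a sentence-transformer model and `search` for a FAISS index changes the scale, not the logic.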

F. Response Generation and Evaluation

It takes more than technical know-how to craft a great resume and personalized job summary. Each job seeker requires genuine insight into their next opportunity, tailored to their skills, goals, and potential. When traditional search methods are not enough, data-driven systems bring clarity and purpose to information, helping to organize and interpret it for clear insights. In this way, every job recommendation and candidate profile highlights individual strengths and matches them with specific market demand. The resulting assessments underline key qualifications and competencies, assessing how well the person fits current industry needs and enabling quicker and more accurate hiring decisions.

To ensure quality, each response is checked against the judgment of an experienced professional for accuracy, relevance, and completeness. Continuous user and recruiter feedback drives constant improvement, increasing reliability, responsiveness, and efficiency across the entire recruitment process.

4. Result and Discussion

The Femerix system was tested extensively across a wide range of resumes and job listings, returning strong and consistent results.


It achieved ROUGE and BLEU scores of 85% and 78%, respectively, further confirming its high degree of similarity in recommendations with those crafted by human professionals. Sensitivity reached 90%, and precision was 88%, highlighting the accuracy in identifying and linking appropriate job positions. Feedback from employers and job seekers showed a satisfaction level of 92%, showing the realistic effectiveness and focus on the needs of the users.
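For context on the reported scores, ROUGE-1 is the unigram-overlap variant of the ROUGE family; a simplified F1 computation looks like this. Library implementations add stemming, further n-gram variants, and the longest-common-subsequence score, and BLEU instead combines n-gram precisions with a brevity penalty.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap ROUGE-1 F1: a simplified version of the metric
    used in the evaluation above."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("python developer with backend skills",
                  "experienced python backend developer")
```

Here three of five candidate unigrams match three of four reference unigrams, giving precision 0.6, recall 0.75, and F1 of 2/3.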

Success for Femerix means bringing deeper meaning to both resumes and job postings via comprehensive data analysis, so that its recommendations can be more personal, meaningful, and thoughtful. Since it is designed to be self-correcting, Femerix improves its performance with continuous use and evaluation. Adaptive in nature, it promotes collaboration, transparency, and genuine connections by design, meaning that recruitment may become even more efficient and people-oriented.

Figure 3: Evaluation Metrics for Resume-Job Matching

This success stems from RAG's capability to understand resumes and job postings at a deeper semantic level by comparing and analyzing them against conventional data sources. This intelligent framework opens new avenues for collaboration, transparency, and networking within organizations, making the modern recruitment process more humane and effective.

5. Conclusion

This paper proposes an intelligent and efficient AI-empowered system that ensures precise and meaningful matching between resumes and job postings. Meaning-based search and transformer-based language models increase the accuracy of CV structuring, keyword extraction, and suitable job identification.

Moreover, the model's deep learning and embedding-based techniques overcome the limitations of traditional keyword-based search, providing better results through a true semantic understanding of resumes and job descriptions.

The performance of the system was measured with various metrics including ROUGE, BLEU, accuracy, search metrics, and user satisfaction, and most results were encouraging. This shows that the model generates proper job recommendations along with clear, relevant, person-specific resume summaries. Continuous learning through feedback will keep the system improving as it adapts to changing market needs and trends.

Our research shows that this AI-powered technology can transform the recruitment process, combining human sensitivity with machine accuracy.

References

[1] Ahmed Zeyad, & A. Biradar. (2024). Advancements in the efficacy of flan-T5 for abstractive text summarization: A multi-dataset evaluation using ROUGE and BERT score. International Conference on Advancements in Power, Communication and Intelligent Systems (APCI), pp. 1-5. DOI: 10.1109/APCI61480.2024.10616418.

[2] D. Singh. (2024). Legal documents text analysis using natural language processing (NLP). 2nd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS), pp. 1302-1307. DOI: 10.1109/ICSSAS64001.2024.10760929.

[3] Jugran, A. Kumar, B. S. Tyagi, & V. Anand. (2021). Extractive automatic text summarization using SpaCy in Python NLP. International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), pp. 582-585. DOI: 10.1109/ICACITE51222.2021.9404712.

[4] S. Bhardwaj, & J. Ghayathri. (2024). Domain specific text summarization using latent semantic analysis (LSA) and performing visualization using PylDAVIS. 1st International Conference on Sustainable Computing and Integrated Communication in Changing Landscape of AI (ICSCAI), pp. 1-9. DOI: 10.1109/ICSCAI61790.2024.10866589.


[5] O. B. Mercan, S. N. Cavsak, A. Deliahmetoglu, & S. Tanberk. (2023). Abstractive text summarization for resumes with cutting edge NLP transformers and LSTM. Innovations in Intelligent Systems and Applications Conference (ASYU), pp. 1-6. DOI: 10.1109/ASYU58738.2023.10296563.

[6] M. M, G. Sunil, K. AL-Attabi, I. H. Mohammed, & P. R. Buvaneswari. (2023). An abstractive text summarization using decoder attention with pointer network. International Conference on Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE), pp. 1-6. DOI: 10.1109/AIKIIE60097.2023.10389942.

[7] E. S. L. Tsoumou, L. Lai, S. Yang, & M. L. Varus. (2016). An extractive multi-document summarization technique based on fuzzy logic approach. International Conference on Network and Information Systems for Computers (ICNISC), pp. 346-351. DOI: 10.1109/ICNISC.2016.081.

[8] R. Dumne, N. L. Gavankar, M. M. Bokare, & V. N. Waghmare. (2024). Automatic text summarization using text rank algorithm. 3rd International Conference for Advancement in Technology (ICONAT), pp. 1-6. DOI: 10.1109/ICONAT61936.2024.10775241.

[9] N. Zade, G. Mate, K. Kishor, N. Rane, & M. Jete. (2024). NLP based automated text summarization and translation: A comprehensive analysis. 2nd International Conference on Sustainable Computing and Smart Systems (ICSCSS), pp. 528-531. DOI: 10.1109/ICSCSS60660.2024.10624907.

[10] Ray, A. Mishra, B. Sahoo, & H. K. Tripathy. (2024). Comparative analysis of extractive summarization using NLP. International Conference on Intelligent Computing and Emerging Communication Technologies (ICEC), pp. 1-5. DOI: 10.1109/ICEC59683.2024.10837027.

[11] S. Sharma, & M. L. Saini. (2022). Analyzing the need for video summarization for online classes conducted during covid-19 lockdown. Lect. Notes Electr. Eng., 907, pp. 333–342. DOI: 10.1007/978-981-19-4687-5_25.

[12] S. Kulshrestha, & M. L. Saini. (2020). Study for the prediction of e-commerce business market growth using machine learning algorithm. 5th IEEE Int. Conf. Recent Adv. Innov. Eng. ICRAIE 2020 – Proceeding. DOI: 10.1109/ICRAIE51050.2020.9358275.

[13] Y. Singh, M. Saini, & Savita. (2023). Impact and performance analysis of various activation functions for classification problems. Proc. IEEE InC4 2023 - 2023 IEEE Int. Conf. Contemp. Comput. Commun. DOI: 10.1109/InC457730.2023.10263129.

[14] K. Lal, & M. L. Saini. (2023). A study on deep fake identification techniques using deep learning. AIP Conf. Proc., 2782. DOI: 10.1063/5.0154828

[15] J. Sarmah, M. L. Saini, A. Kumar, & V. Chasta. (2024). Performance analysis of deep CNN, YOLO, and LeNet for handwritten digit classification. Lect. Notes Networks Syst., 844, pp. 215–227. DOI: 10.1007/978-981-99-8479-4_16.

[16] M. L. Saini, R. S. Telikicharla, Mahadev, & D. C. Sati. (2024). Handwritten english script recognition system using CNN and LSTM. Proc. InC4 2024 - 2024 IEEE Int. Conf. Contemp. Comput. Commun. DOI: 10.1109/InC460750.2024.10649099.

[17] B. Mulakala, M. L. Saini, A. Singh, V. Bhukya, & A. Mukhopadhyay. (2024). Adaptive multi-fidelity hyperparameter optimization in large language models. In: 8th IEEE International Conference on Computational System and Information Technology for Sustainable Solutions. DOI: 10.1109/CSITSS64042.2024.10816794.

[18] C. Sasidhar, M. L. Saini, M. Charan, A. V. Shivanand, & V. M. Shrimal. (2024). Image caption generator using LSTM. Proc. - 4th Int. Conf. Technol. Adv. Comput. Sci., pp. 1781–1786. DOI: 10.1109/ICTACS62700.2024.10841294.

[19] D. Gupta, M. L. Saini, S. P. K. Mygapula, S. Maji, & V. Prabhas. (2024). Generating realistic images through GAN. In: Proceedings - 4th International Conference on Technological Advancements in Computational Sciences, pp. 1378–1382. DOI: 10.1109/ICTACS62700.2024.10841324.

Disclaimer / Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of Journals and/or the editor(s). Journals and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.