1 Introduction

Machine reading comprehension (MRC) is a technique in natural language processing (NLP) that teaches machines to understand text and answer questions about it [1]. The main aim of MRC is to read a text and grasp its exact meaning; it is chiefly used to answer questions generated by users of NLP systems, scanning documents and files and extracting the important, meaningful content from text [2]. NLP, the field of artificial intelligence (AI) concerned with how computers and humans communicate through natural language, counts MRC among its most important applications: it entails training computers to read and comprehend text passages and answer questions about them as humans do. MRC is difficult for computers because it requires an understanding of the context and meaning of words, sentences, and passages. NLP approaches to MRC draw on many methods, including text pre-processing, linguistic modeling, feature extraction, and machine learning. Large datasets of text passages paired with questions and answers are used to train computers to read like humans: using supervised learning algorithms, the computer learns to map passages and questions to their answers. A common approach is to use neural network models built to capture the intricate interconnections between words and sentences; in such models, multiple layers of artificial neurons process the incoming text and generate a predicted answer. The overarching goal of NLP techniques for MRC is to have computers read, comprehend, and answer questions about text in the same way that humans do. As NLP continues to develop, more accurate and efficient MRC systems will answer increasingly complex and nuanced questions.
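The passage-question-answer mapping described above can be made concrete with a deliberately simplified sketch: each sentence of a passage is scored by its word overlap with the question, and the best-matching sentence is returned as the "answer". This overlap score is only an illustrative stand-in for the learned neural models discussed in the text; the example passage and all names are assumptions for demonstration.

```python
# Toy illustration of extractive reading comprehension: score each sentence
# of the passage by word overlap with the question and return the best match.
# Real MRC systems replace this overlap score with a learned neural model.

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def answer(passage, question):
    q_words = set(tokenize(question))
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    # Pick the sentence sharing the most words with the question.
    return max(sentences, key=lambda s: len(q_words & set(tokenize(s))))

passage = ("The capital of France is Paris. "
           "Paris is known for the Eiffel Tower. "
           "Berlin is the capital of Germany")
print(answer(passage, "What is the capital of France?"))
```

For the sample passage, the question about France selects the first sentence, since it shares the most question words.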
MRC is a cutting-edge NLP technology that emulates human intelligence to understand the exact meaning of a text presented in files [3]. MRC predicts questions based on reviews and then answers those questions via NLP systems. A read-then-verify method is used in MRC that reads queries and then understands their content [4]. Read-then-verify reduces the delay time in verification, which enhances the efficiency of NLP. Under this method, the entire query is broken up into smaller pieces and the connections between them are found, the goal being to grant the query greater specificity. The keyword-based query reconstruction strategy employs NLP algorithms to recognize keywords in a query and then reconstruct the query based on those keywords; the algorithm accounts for possible variations of a query by looking for synonyms and antonyms. MRC detects the definitions, structure, and differences of content datasets and constructs an architecture to perform further tasks in NLP. A classification method is used in MRC to classify tasks based on specific characteristics and functions [5]. The MRC system has been developed to comprehend and respond to questions posed in natural language text. To decipher questions and grasp their meaning, it employs a read-then-verify strategy: first reading the text and grasping its meaning, then checking whether the answer to the question can be found in the text. This makes MRC a strong option for queries that attempt to predict future outcomes by analyzing current information and historical trends. Using machine learning algorithms, MRC can sift through mountains of data in search of patterns and trends, which it then uses to predict what will happen in the future.
Answers to these questions about the future can be gleaned from a wide range of resources, including news articles, social media, and financial reports, using natural language processing (NLP) systems. Natural language processing (NLP) can analyze the tone and sentiment of a piece of writing to help determine how likely an event is to occur. Together, MRC and NLP systems can analyze massive amounts of data and extract the key insights needed to answer intricate questions, making them a potent tool for future event and trend prediction.

Query reconstruction (QR) is a process that determines the exact meaning of questions and produces optimal information for machine reading comprehension (MRC). The query reconstruction method is mainly used in MRC to improve the efficiency of systems [6]. Joint query reconstruction is widely used in MRC to find features in questions. Reconstruction divides queries into various segments, which are analyzed and identified via the segmentation process [7]. The keyword-based query reconstruction method is commonly used in natural language processing (NLP). To function, QR parses a user’s query, determines what information was left out, and then reformulates the query to include it. This can make the search more precise and ensure that only relevant results are returned. In addition, the extra information supplied by QR helps the computer better understand the user’s intent, which in turn produces more precise results. The query reconstruction network (QRN) is widely used in MRC to understand the relationships among queries. QRN uses object segmentation to identify the essential parts of language queries [8]. Iterative segmentation correction (ISC) is used in QRN to correct segments and provide proper ones. QRN maximizes segmentation in MRC, improving the performance and accuracy of query reconstruction systems [9]. QR is mainly used to identify patterns based on specific code changes in MRC. Getting to the bottom of what the text actually means helps the system produce better, more in-depth answers. MRC can also be used to find co-occurrences of words and phrases in a text.
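The keyword-based reconstruction described above can be sketched as follows, under the assumption of a small hand-made stop-word list and synonym table (a stand-in for a real lexical resource such as WordNet); both tables are illustrative, not part of the original method.

```python
# Minimal sketch of keyword-based query reconstruction: strip stop words to
# isolate keywords, expand each keyword with known variants, and rebuild the
# query from the expanded terms. STOP_WORDS and SYNONYMS are toy assumptions.

STOP_WORDS = {"the", "a", "an", "of", "in", "is", "what", "how", "to"}
SYNONYMS = {"fast": ["quick", "rapid"], "car": ["automobile"]}

def extract_keywords(query):
    return [w.lower() for w in query.split() if w.lower() not in STOP_WORDS]

def reconstruct(query):
    terms = []
    for kw in extract_keywords(query):
        terms.append(kw)
        terms.extend(SYNONYMS.get(kw, []))  # add known lexical variants
    return " ".join(terms)

print(reconstruct("What is a fast car"))  # prints "fast quick rapid car automobile"
```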

QR is one of the complex tasks in MRC and requires accurate data for reconstruction. QR produces the optimal queries that MRC needs to answer via natural language processing (NLP) [10]. In the field of machine learning, deep learning refers to the use of neural networks to glean insights from massive datasets. It is predicated on the idea that computers can be taught to recognize patterns and act autonomously once those patterns have been learned. Deep learning algorithms can generate more accurate models because they use multiple layers of neurons. Object recognition in images, language comprehension, and data-driven forecasting are all possible with the help of these algorithms.

The human-language reading system is used to solve the cloze-style problem in the text reading process. The MRC technique analyzes the texts sent to it, and the understood texts are then fed into reading systems as inputs. In this paradigm, the cloze-style issue is the delivery of multiple outputs from the user to the reading system. The model examines the usage of keywords and code switching to further narrow the alternatives. Fixing the cloze-style problem in MRC is expected, under this paradigm, to enhance the machines’ reading and comprehension. Input texts are used to build queries based on keywords, which improves readability and mitigates the cloze problem. The cloze-style problem is recognized by checking for the presence of comparable keywords in input texts, and the procedure is repeated until the maximum text output from the computer is achieved. To reduce the cloze-style problem, the computer organizes the queries according to their keywords, reconstructs the questions using the probable combinations, and executes them. The cloze-style issue is mitigated thanks to the machine’s ability to categorize queries based on keywords and execute query reconstructions using all feasible permutations. By allowing for NLP testing, the computer is able to solve the cloze issue more effectively.
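The permutation step above, in which queries are reconstructed from all feasible keyword orderings and the best one is kept, can be sketched as a toy search over keyword permutations, scored by how well each ordering agrees with the word order of the input text. The scoring rule and example are illustrative assumptions.

```python
# Generate every permutation of the extracted keywords as a candidate query
# ordering, then keep the ordering that best matches the input text's own
# word order. Assumes each keyword actually occurs in the input text.
import itertools

def best_ordering(keywords, input_text):
    words = input_text.lower().split()
    pos = {w: words.index(w) for w in keywords}  # first occurrence position
    def score(order):
        # Count adjacent keyword pairs that follow the text's word order.
        return sum(1 for a, b in zip(order, order[1:]) if pos[a] < pos[b])
    return max(itertools.permutations(keywords), key=score)

print(best_ordering(["france", "capital"], "the capital of france is paris"))
# prints ('capital', 'france')
```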

A wide variety of data types can be detected and predicted by applications utilizing persistent comprehensive model (PCM) techniques based on query reconstruction (QR). Detecting and preventing fraudulent activity and cyber-attacks, recognizing objects in images, and predicting customer preferences are a few examples of applications that make use of PCM-QR. Using PCM-QR, applications can discover hidden patterns and outliers in large data sets and uncover subtle clues and correlations between data points. PCM-QR optimizes query segmentation across the board to boost the precision and performance of query reconstruction systems. Using PCM-QR algorithms, the most important words in a query are isolated and their best presentation order is determined. Improved search performance results from this increased relevance to the user’s query. PCM-QR models can also be used to correct spelling errors and other typos in queries, leading to even higher levels of accuracy.
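The spell-correction capability mentioned for PCM-QR can be illustrated with a classic edit-distance search, replacing each query word with the closest vocabulary word within a small edit budget. The vocabulary and the two-edit threshold are illustrative assumptions, since the text describes PCM-QR only at this abstract level.

```python
# Correct typos in a query by snapping each word to the nearest vocabulary
# word (Levenshtein distance), when it lies within max_edits of that word.

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance over one dp row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,       # deletion
                                     dp[j - 1] + 1,   # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def correct_query(query, vocabulary, max_edits=2):
    out = []
    for word in query.split():
        best = min(vocabulary, key=lambda v: edit_distance(word, v))
        out.append(best if edit_distance(word, best) <= max_edits else word)
    return " ".join(out)

vocab = ["machine", "reading", "comprehension", "query"]
print(correct_query("machne readng comprehension", vocab))
# prints "machine reading comprehension"
```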

Machine learning (ML) techniques are widely used in various applications for detection and prediction processes. ML techniques are also used in query reconstruction systems to improve accuracy in segmentation and optimization [11]. Natural language processing, semantic analysis, and machine learning are among the most popular methods. Features such as question type, keywords, and other relevant information can be extracted using these methods; with this knowledge, the system can better interpret the user’s intent and deliver more precise outcomes. The support vector machine (SVM) algorithm is mainly used for query reconstruction to identify the data points presented in queries [12]. SVM supplies data points and patterns to query reconstruction, reducing the delay time in pattern identification. A deep neural network (DNN) algorithm is also used for query reconstruction. The DNN reduces computation costs in reconstruction, enhancing the efficiency of natural language processing (NLP) systems [13]. It accomplishes this by producing data points that stand for particular patterns in the input data. Since the DNN’s output data points are simpler to query, using one drastically reduces the time needed to find patterns. In addition, the DNN can quickly spot patterns that more conventional NLP methods might miss. This speeds up the data reconstruction process by allowing the system to quickly recognize and extract relevant patterns from the input data, which in turn facilitates more rapid and precise analysis. DNNs also aid the reconstruction process by accurately classifying and labeling data. In sum, support vector machines and deep neural networks are both powerful tools for enhancing the efficacy of data reconstruction: the SVM aids pattern creation in the data and shortens reconstruction times, while DNNs provide precise classification and labeling.
During the reconstruction process, both technologies help provide accurate and quicker results. The symmetric gradient-domain machine learning (sGDML) model is used to reconstruct queries based on specific functions and characteristics. sGDML improves identification accuracy, providing valuable data for query reconstruction [14]. A cross-modal interaction network (CMIN) is used to leverage the syntactic structure of queries, producing optimal data for further reconstruction. CMIN reduces the complexity level in NLP [15].
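The classification and labeling role played by the SVM and DNN above can be illustrated with a much simpler stand-in: a nearest-centroid classifier over bag-of-words vectors that labels queries by type. The training queries and labels are invented for illustration; a real system would use an actual SVM or neural network over learned features.

```python
# Label a query by type using per-class word-count centroids: each class
# centroid is the sum of its training queries' bag-of-words vectors, and a
# new query is assigned to the class with the highest dot-product similarity.
from collections import Counter

def vectorize(query):
    return Counter(query.lower().split())

def similarity(a, b):
    return sum(a[w] * b[w] for w in a)  # unnormalized dot product

def train(examples):
    centroids = {}
    for query, label in examples:
        centroids.setdefault(label, Counter()).update(vectorize(query))
    return centroids

def classify(centroids, query):
    v = vectorize(query)
    return max(centroids, key=lambda label: similarity(v, centroids[label]))

examples = [
    ("who wrote the book", "person"),
    ("who is the president", "person"),
    ("where is the museum", "location"),
    ("where was it built", "location"),
]
model = train(examples)
print(classify(model, "who painted this"))  # prints "person"
```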

Symmetric gradient-domain machine learning (sGDML) is one classic approach for reconstructing questions according to their functions and properties; it enhances identification accuracy to improve query reconstruction. In the proposed model, MRC relies heavily on the query reconstruction technique to boost system performance. The suggested keyword-based strategy for query reconstruction is common in natural language processing (NLP). QRN uses object segmentation to extract relevant information from search engine results. In QRN, iterative segmentation correction (ISC) is applied to refine and improve segmentation. Maximizing segmentation in MRC with QRN improves the efficiency and precision of query reconstruction systems.

2 Related works

Liu et al. [16] designed a cross-domain slot-filling approach for machine reading comprehension (MRC). The main aim of the proposed approach is to detect MRC problems. The cross-domain approach transforms slot names into queries that provide information for detection. MRC datasets are used here for training, producing optimal data for slot detection. The proposed approach maximizes the performance and feasibility of MRC.

Zhu et al. [17] introduced a dual multi-head co-attention (DUMA) model for MRC. DUMA solves multi-choice MRC problems that occur in training. DUMA identifies relationships among triplets and slots that provide data for MRC problem-solving. A pre-trained language model (PrLM) is used to provide appropriate answers to the queries available in MRC. The proposed DUMA model improves performance levels in MRC.

Yang et al. [18] developed a new machine reading comprehension model for endless questions (Cq-MRC). Key entities presented in uncompleted queries are first identified. A completion-then-prediction approach is then used to predict the uncompleted queries. The key entities provide appropriate data for endless questions, reducing the delay time in computation. Cq-MRC improves the effectiveness of MRC, which maximizes the reliability of an application.

Ma et al. [19] introduced a distant supervision-based machine reading comprehension model (DSMRC-S) for summarization. Summarization tasks are transformed into MRC problems using DSMRC-S. Key points are fetched based on a predefined question, providing optimal data for summarization. Compared with other methods, the proposed DSMRC-S model improves accuracy in MRC problem detection, increasing efficiency.

Yang et al. [20] designed an MRC model for natural language processing (NLP) systems. The TriviaQA dataset is used here, providing valuable data to MRC. A reasoning process is also used to train on the dataset and produce optimal data for other functions. TriviaQA reduces the delay time and energy consumption of computation, improving the quality of service in NLP systems. The proposed model identifies irrelevant content presented in MRC, which improves the performance of an application.

Zhou et al. [21] developed a knowledge distillation (KD)-based multi-domain MRC model. A domain interference mitigation approach is also used here to train the proposed model. The main aim of KD is to identify gradient issues that occur in MRC. The proposed model exploits data from multiple domains, reducing the time delay in data training. The proposed KD-based MRC model improves overall performance and reliability compared with other models.

Baradaran et al. [22] proposed an ensemble learning-based approach for MRC systems. Important factors and characteristics are identified by ensemble learning, which provides data to MRC in natural language processing (NLP). The proposed approach improves generalization capability, enhancing the robustness of MRC. The learning model reduces cost and time consumption in data distribution.

Guo et al. [23] introduced a frame-based neural network (FNN) for the MRC method. The FNN is used here to understand a question and provide the data necessary to answer it. Lexical units (LU) and frame-to-frame relations are used in the MRC method to answer questions in natural language processing (NLP) systems. The proposed FNN-MRC method achieves high accuracy in answering questions in NLP.

Jia et al. [24] developed a new keyword-aware dynamic graph neural network (KA-DGN) for reading comprehension (RC). A graph neural network is used here to predict the facts required for RC. Keywords are identified that provide feasible data for RC processes. A dynamic reasoning graph technique is also used to reduce the time and energy consumption of computations. The proposed KA-DGN improves the overall performance and efficiency of RC.

Li et al. [25] introduced a multi-task joint training model for MRC in natural language processing (NLP) systems. A feature extraction method is used here to extract text features from a question, providing keywords to answer that question. MRC requires various text features to answer questions correctly via the joint training model. Experimental results show that the proposed model improves the efficiency and performance of MRC.

Ren et al. [26] introduced the molecular knowledge reasoning model for MRC. A neural network (NN) algorithm is used in the reasoning model for explicit reasoning steps. Reasoning modules identify essential aspects that are available in the database. The knowledge reasoning model predicts complicated content that is presented in MRC. The proposed model provides a high generalization capability that enhances the effectiveness level of natural language processing (NLP) systems.

Yan et al. [27] designed a multi-stage maximization attention (MMA) network for multi-choice reading comprehension. MMA captures important relations presented in the passage and produces optimal data for other functions. MMA selects salient sentences, reducing the delay time in computation. MMA maximizes the performance and efficiency of NLP systems, and the proposed method improves accuracy in predicting content in MRC.

Liao et al. [28] proposed a heterogeneous graph learning model for multi-hop MRC in natural language processing (NLP). Selected sentences are leveraged here to provide optimal data to MRC. A heterogeneous graph identifies and extracts the features necessary to perform MRC in NLP systems. Graph learning also identifies missing entities presented in questions. The proposed model improves the efficiency and feasibility of MRC.

Feng et al. [29] developed contrastive learning in context representation space (CLCRS) for MRC. A contrastive learning algorithm is used here to train context information for MRC. CLCRS identifies the context information available in the passage, distinguishing misleading sentences from the correct answers to questions. The proposed CLCRS improves the robustness and effectiveness of MRC.

The models surveyed above were developed to enhance the effectiveness, efficiency, and dependability of MRC in NLP systems. Slot filling across domains, multi-head co-attention, MRC for endless questions, knowledge distillation, frame-based neural networks, and molecular knowledge reasoning are just a few of the available models. To recognize essential entities, extract features, and predict content in MRC problems, these models employ methods including pre-trained language models, reasoning modules, heterogeneous graph learning, and contrastive learning; the proposed persistent comprehensive model with query reconstruction (PCM-QR) builds on such techniques. Together, these models enable NLP systems to answer questions with greater accuracy, faster response times, and lower energy consumption.

3 Persistent comprehensive model (PCM) using query reconstruction (QR)

The design goal of MRC based on QR technology and a deep learning process is to augment the machine’s efficiency in reading and responding to input texts. The input texts are observed by MRC through the readability and understandability of machines, which can analyze human languages in a real-time environment. NLP aids in testing the machine’s efficiency and relies on appropriate queries for processing human languages through the deep learning process. In the text reading process, the cloze-style issue is addressed within the human-language reading system: input text analysis is processed through the MRC method, and the grasped input texts serve as the inputs to reading systems. To further refine the options, the model analyzes the use of keywords and code switching. The model also makes use of query expansion to find ancillary terms that may be relevant to the initial query, which helps it further refine the options and provide the most accurate response. Finally, a scoring system is used by the model to arrive at the best possible answer to the query.
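The query-expansion and scoring steps just described can be sketched together: the query's terms are expanded with related words from a small hand-made table (an illustrative assumption, standing in for a learned expansion model), and candidate answers are then scored by term overlap, with the highest-scoring one returned.

```python
# Expand the query with ancillary terms, then score each candidate answer by
# how many expanded terms it contains; the best-scoring candidate wins.
# EXPANSIONS and the candidate texts are toy assumptions for illustration.

EXPANSIONS = {"author": ["writer", "wrote"], "cost": ["price", "expensive"]}

def expand(query_words):
    out = list(query_words)
    for w in query_words:
        out.extend(EXPANSIONS.get(w, []))  # add ancillary terms
    return out

def best_answer(query, candidates):
    terms = set(expand(query.lower().split()))
    return max(candidates, key=lambda c: len(terms & set(c.lower().split())))

candidates = ["The book was written by its writer in 1999",
              "The book has 300 pages"]
print(best_answer("book author", candidates))
```

Here the expansion of "author" to "writer" is what lets the first candidate outscore the second, even though neither contains the literal word "author".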

In this proposed model, addressing the cloze-style issue in MRC is considered to improve the readability and understandability of the machines. The user asks queries of the reading system based on input texts, and the reading system reads each query and gives an accurate answer. The sophisticated NLP reads and responds to the input texts to address multiple answers that are irrelevant to the various input queries from users. The proposed PCM-QR is depicted in Fig. 1. Through this design, the machine may understand the full meaning of the user’s inquiry and tailor its response to provide better, more relevant results. In addition, the reading system is better able to identify the intended target of the query and provide relevant information when the user employs query terms that are more specific and relevant to the query. The number of irrelevant search results can also be decreased by using precise query terms specific to the user’s inquiry. This method can further improve search efficiency in a reading system by making queries easier to recognize and process, which speeds up the search.

Fig. 1
figure 1

Proposed PCM-QR

The reading of and response to input texts by machines, for testing and augmenting the understandability of grasped input texts when processing queries and answers, have become sequential analyses owing to the advanced technology used in MRC. Multiple output delivery results in cloze-style issues when reading text. The challenge for MRC using QR is to address the cloze-style issue in the required human-language processing. The reading system must satisfy reading and response requirements while reducing the multiple answers that arise from irrelevant input queries from users. The appropriate queries rely on keywords, combinations, and the swapping of words in text reading; identifying the cloze-style issue through possible combinations is often used to reduce multiple answers. Due to the segregation of keywords in this model, the observed input texts rely on a reading system serving different input queries, and the forming and reconstruction of queries based on the keywords available in text reading also differ. Constructing queries from the keywords observed in input texts improves understandability and reduces the cloze-style issue in the reading system. However, the possible combination, swapping, substitution, and training processes of query reconstruction do not by themselves replace queries so as to achieve the maximum number of individual combinations. Therefore, the maximum possible query keywords are replaced using a combination process for testing and augmenting the machine’s efficiency. This article mainly identifies the maximum individual combinations through deep learning by reading and responding to the input texts of human languages processed by machines. This learning paradigm is used to swap the combinations for the substitution process and keyword training. The cloze-style issue is identified by verifying the similar keywords available in input texts, and the process continues until the maximum text output from the machine is reached.
The observed input texts are validated against previously understood words through MRC. The deep learning process is used to identify the possible query keywords in text reading through similarity analysis in the swapping process. The possible combination and substitution of query keywords in the reading system therefore improve the machines’ accuracy and efficiency. The proposed model focuses on obtaining the maximum text output from the machines through keyword training and the substitution process for query reconstruction. In this proposal, readability and understandability are administrable in text reading while addressing the cloze-style issue. The machine then determines the best combination of keywords for delivering the various outputs. The phenomenon of simultaneous output delivery is recognized by the reading machines. To lessen the cloze-style issue, the computer sorts the queries according to their keywords, reconstructs the queries using the potential combinations, and processes them; it then determines which set of keywords produces the most useful results.
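The swap-and-substitute procedure described above can be sketched as follows: adjacent keywords are swapped and each keyword is replaced with known alternatives, producing the candidate query set that the machine then validates against the input text. The substitution table is an illustrative assumption.

```python
# Build candidate query reconstructions by (1) swapping adjacent keywords and
# (2) substituting each keyword with known alternatives, de-duplicating the
# result. SUBSTITUTES is a toy table standing in for trained substitutions.

SUBSTITUTES = {"movie": ["film"], "rating": ["score"]}

def swaps(words):
    out = []
    for i in range(len(words) - 1):
        w = list(words)
        w[i], w[i + 1] = w[i + 1], w[i]  # swap one adjacent pair
        out.append(w)
    return out

def substitutions(words):
    out = []
    for i, w in enumerate(words):
        for alt in SUBSTITUTES.get(w, []):
            out.append(words[:i] + [alt] + words[i + 1:])
    return out

def candidates(query):
    words = query.split()
    seen, result = set(), []
    for cand in [words] + swaps(words) + substitutions(words):
        key = tuple(cand)
        if key not in seen:
            seen.add(key)
            result.append(" ".join(cand))
    return result

print(candidates("movie rating"))
# prints ['movie rating', 'rating movie', 'film rating', 'movie score']
```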

PCM-QR employs deep learning algorithms to scrutinize texts for useful phrases and words. Models that can interpret text in context and pull relevant data from it have been developed with the help of deep learning algorithms. The algorithms learn from a large body of text and scan it for patterns that can be taken as clues as to the user’s motivation. The system then applies these patterns to the text in order to extract useful phrases and words to display to the user. The system can make the text more understandable to humans by picking up on context and extracting key details.
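As a rough stand-in for the deep-learning scorer described above, the following sketch ranks words by frequency after removing stop words, a common classical baseline for extracting useful terms from text. The stop-word list and sample text are illustrative assumptions.

```python
# Extract the k most frequent non-stop-word terms from a text as its "key
# phrases" -- a classical frequency baseline, not the learned model itself.
from collections import Counter

STOP = {"the", "a", "is", "of", "and", "to", "in"}

def key_terms(text, k=3):
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOP)
    return [w for w, _ in counts.most_common(k)]

text = ("Query reconstruction improves reading comprehension. "
        "Reconstruction of the query uses keywords, and keywords guide reading.")
print(key_terms(text))
```

A learned model would score terms by contextual relevance rather than raw counts, but the interface (text in, ranked terms out) is the same.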

The main objective of this machine reading comprehension model is to improve output accuracy by reading and responding to the input texts through appropriate queries. The proposed model is designed to improve the understandability of the machine through a deep learning process. The reading system combines software and hardware components to grasp and process the input texts based on user queries and keywords. The processing of input texts, observed from human languages, is output by the machines in text reading, reducing errors under a controlled combination time and improving the readability and understandability of the machines, respectively. Then

$$\max \mathop \prod \limits_{i = m} \mathop \prod \limits_{j = p} \rho \left( {{\text{Input}}_{{{\text{text}}}} ,R_{S} } \right)^{Q}$$
(1)

Such that

$$\mathop \sum \limits_{i = m} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{r} = R_{S}$$
(2)

Instead,

$$\mathop \sum \limits_{i = m} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{m} = \mathop \sum \limits_{i = m} \mathop \sum \limits_{j = p} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{ij} - 1 - \left[ {\frac{{\left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} }}{{\sum \left( {R_{S} + T_{R} } \right)_{m} }}} \right]$$
(3)

In Eq. (1), the variable \(\rho \left( {{\text{Input}}_{{{\text{text}}}} ,R_{S} } \right)\) denotes the probability of the input text reading system of human language processing \(p\) by the machines \(m\). The maximum probability of observing a query from the user, \(Q = 1\), achieves a high response \(r\) from the reading system for addressing the cloze-style problem. Otherwise, the variables \(Q\) and \(r\) are not stable, because \(Q \in \left[ {0,1} \right]\) reflects the observation of different input queries by the machines \(p\). Therefore, \(Q = 1\) is processed at any time and errors are output at a controlled combination time. The above cloze-style problem refers in this model to multiple output delivery (answers) from the user to a reading system. These multiple answers serve as irrelevant for different input queries in text reading \(T_{R}\). Therefore, query reconstruction is achieved through deep learning. The assisting query reconstruction technology and deep learning processes are jointly applied in this proposed model to augment the machines’ efficiency in reading and responding to the available input texts. Multiple output delivery is used by query reconstruction technology to better comprehend a query. Machines can read and understand text by using this technology, paving the way for smarter machines that can interpret user intent and deliver tailored responses. Voice recognition and search engine optimization (SEO) are just two of its many uses in natural language processing (NLP) and machine learning. Access to multiple outputs also aids machines in detecting typos, misspellings, and other textual errors. The technology can be implemented to improve the precision of machine learning programs such as NLP systems; by having machines read the same input multiple times, it can be used to enhance the precision of text-to-speech services, for instance, improving the machines’ ability to spot trends and make appropriate adjustments.
The technology can also enhance SEO, as machines can reread a query to gain a deeper understanding of the user’s intent and return more relevant results. When it comes to text analysis and processing by machines, multiple output delivery is a powerful tool: natural language processing, search engine optimization, and other applications all benefit from its enhanced precision. Having machines read text multiple times allows them to better spot patterns and make adjustments as needed, leading to more precise results.

4 Query reconstruction technology and deep learning processes in MRC

In QR technology and deep learning processes, the reading system is centrally connected so that the machine’s efficiency is maintained and satisfied for reading and responding to input texts through queries. Machines identify the multiple output delivery that occurs in text reading. The machine segregates the appropriate query based on keywords, and the query reconstruction is processed using the possible combinations, reducing the cloze-style problem. The possible query keywords are replaced to identify the maximum individual combination occurrence in text reading under a controlled combination time, as in Eq. (1). The probability of possible query keywords at different time intervals without addressing the cloze-style issue, i.e., \(\rho \left( {Q_{k} } \right)\), is computed as

$$\rho \left( {Q_{k} } \right) = \frac{{\mathop \sum \nolimits_{i \in m} q_{r} }}{{\mathop \sum \nolimits_{j \in p} C}}MA^{{ - \frac{{{\text{Input}}_{{{\text{text}}}} }}{m}*S^{W} *R_{S} }}$$
(4)

where the variables \(q_{r}\) and \(C\) represent the query reconstruction using the individual combination and the possible query keywords that are replaced, respectively, and the multiple answers \({\text{MA}}\) are used for swapping at any instance. In this article, the possible combination condition \(\left( { 1 - \left[ {\frac{{\left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} }}{{\sum \left( {R_{S} + T_{R} } \right)_{m} }}} \right]} \right)\) is computed for performing the swapping process \(S^{W}\). The first process is for improving \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) with \(Q = 1\) for query reconstruction; the replacement of possible query keywords relies on the readability and understandability of human languages and is estimated through the individual \(C\), \(m \in p\). The \(q_{r}\) using \({\text{MA}}\) is presented in Fig. 2.

Fig. 2
figure 2

\(q_{r}\) using \({\text{ MA}}\)

The input texts are utilized for generating the maximum number of queries using \(C\) such that \(\rho \left( {Q_{k} } \right)\) is addressed. In addressing this problem, the \(S^{W}\) process is required in the unmatched cases. Relying on \(C\), the response is provided through \(Q = 1\) detection \(\forall R_{S}\) such that \(r\) is achieved. The \(q_{r}\) phase is initialized through \(S^{W}\) such that \(C \forall m \in p\) achieves a smaller \(\rho \left( {Q_{k} } \right)\) (Fig. 2). The deep learning process, through keyword training and substitution processes based on \(Q\) and \({\text{MA}}\), is used for analyzing the understandability of the machine. The possible combination assists in reducing the cloze-style issue for the different input queries processing \({\text{Input}}_{{{\text{text}}}}\). The possible combination and swapping of keywords are used to perform substitutions, which is computed as

$$C \forall m \in p = \left[ {\left( {1 - Q.r} \right)\frac{{{\text{Input}}_{{{\text{text}}}} }}{C} - Q.\frac{{{\text{Input}}_{{{\text{text}}}} }}{{{\text{MA}}}} - \left( {q_{r} - S^{W} } \right)} \right], j \in m$$
(5)

In Eq. (5), the possible combinations and swapping intervals based on the appropriate query keywords are replaced at different intervals. If the condition \(C \forall m \in p\) exceeds the text reading, keyword training and substitution are performed until the maximum text output is obtained from the machine. Keyword segregation increases \({\text{MA}}\) and discards the query keywords that are irrelevant for the different input queries \(\left\{ {Q, q_{r} , S^{W} , C,\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)} \right\}\) after the combinations are swapped in all instances. The machines then achieve the output for the understandability of \({\text{Input}}_{{{\text{text}}}}\) and \(C \forall m \in p\), and the training intensity is tuned over successive iterations. The cloze-style issue is thereby addressed in a reading system for human languages that reads and responds to input texts. The query keywords are recurrently analyzed in the substitution process to identify similar words in the input texts; this is validated against previously understood words based on \(C\) and \(Q\), where \(\left( {{\text{Input}}_{{{\text{text}}}} } \right)_{r} = R_{S}\), \(q_{r}\), and \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) form the maximum text output condition as per Eq. (1). Let \(Su_{b}\) and \(k_{T}\) represent the substitution based on the possible query keywords that identify multiple output delivery for the condition in Eq. (3); they capture the readability and understandability analysis through keyword training for different input queries. Therefore, the maximum text output \({\text{Max}}_{{t_{O} }}\) is computed as

$${\text{Max}}_{{t_{O} }} = Su_{b} + k_{T}$$
(6)

Such that

$$Su_{b} = \mathop \sum \limits_{i \in m} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} = Q \times \mathop \sum \limits_{i \in m} \frac{{\left( {R_{S} } \right)_{k} }}{{q_{r} }} = Q \times \mathop \sum \limits_{j \in p} C$$
(7)

where

$$k_{T} = \mathop \sum \limits_{i \in m} \mathop \sum \limits_{j \in p} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{{{\text{ij}}}} - \left( {1 - Q_{k} } \right) = \mathop \sum \limits_{i \in m} \left( {C - S^{W} } \right) \left( {T_{R} } \right)_{Q}$$
(8)

As per Eqs. (6), (7), and (8), the maximum text output is validated for appropriate queries relying on \({\text{Input}}_{{{\text{text}}}}\) and \(Q\), with keyword training performed as long as the process persists. The substitution process therefore depends on \(S^{W}\) and \(C\), whereas keyword training depends on \(Q_{k}\) and \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\). Under this condition, the text output is either 1 or 0, which is used to decide whether the maximum understandability of the machine has been achieved. In this article, if \({\text{Max}}_{{t_{O} }} = 1\), query forming is performed for the possible query keywords, and if \({\text{Max}}_{{t_{O} }} = 0\), query reconstruction is obtained from the machine. The MRC analysis thus improves the reading and understanding of human languages as in Eq. (1). If the condition \({\text{Max}}_{{t_{O} }} = k_{T}\) holds with \(Q = 1\) and \(C \forall i \in m = S^{W} \forall j \in p\), training is performed to achieve the maximum text output at any time. Figure 3 portrays the \(Su_{b}\) for training.

Fig. 3

\({\text{Su}}_{b}\) for training
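A minimal numeric sketch of the \({\text{Max}}_{{t_{O} }}\) computation in Eqs. (6)–(8), using their reduced right-hand forms (\(Su_{b} = Q \sum C\) and \(k_{T} = \sum \left( {C - S^{W} } \right)T_{R}\)); all values are hypothetical, and the binary decision rule follows the description above, where \({\text{Max}}_{{t_{O} }} = 1\) triggers query forming and \(0\) triggers query reconstruction.

```python
def max_text_output(Q, C_terms, S_W, T_R):
    Su_b = Q * sum(C_terms)                          # Eq. (7), reduced form
    k_T = sum((C - S_W) * T_R for C in C_terms)      # Eq. (8), reduced form
    return Su_b + k_T                                # Eq. (6)

# Hypothetical scalars: Q = 1 with two combinations summing to 1.
out = max_text_output(Q=1.0, C_terms=[0.5, 0.5], S_W=0.5, T_R=0.0)
action = "query forming" if out == 1 else "query reconstruction"
```

With these illustrative inputs the keyword-training term vanishes, so the output reduces to the substitution term alone and query forming is selected.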

The learning process demands the maximum \(C\) as input \(\forall p\) combinations. Considering the hidden layer, the \(k_{T}\) (or) \(S^{W}\) processes are required for all \({\text{MA}}\) such that \(Q = 1\) is achieved. Depending on the available \({\text{MA}}\), either \(Q = 1\) (or) \(Q < 1\) is possible \(\forall r\). Therefore, if \(Q = 1\), a response is directly provided and \(k_{T}\) is high; \({\text{Su}}_{b}\) is required for \(Q < 1\). When \(Q < 1\), \(\left( {1 - Q_{k} } \right)\) is the analyzable training set required for a new set of keywords; hence, the training is initiated from \(\left( {1 - Q_{k} } \right)\forall m \in p\) (refer to Fig. 3). Deep learning evaluates \( {\text{Max}}_{{t_{O} }} = {\text{Su}}_{b} \left( {i - \frac{C}{{q_{r} }}} \right) + k_{T} \left( {{\text{Su}}_{b} + S^{W} } \right)\forall i \in m \) and then computes the keyword query results for \( j \in p\), respectively. The machines process the input texts depending on the query and answer for the condition \(\left( {i - \frac{C}{{q_{r} }}} \right)\). Query reconstruction is handled by observing the different input queries; it requires the machine to provide multiple answers and outputs errors under controlled combination-time instances where \(C \ne S^{W}\). The maximum individual-combination instance of \(R_{S} , \rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) is computed through \(k_{T} \left( {{\text{Su}}_{b} + S^{W} } \right)\) and relies on the combination time of \(Q\). This combination time is computed using the factor \(C\) as

$${\text{Max}}_{{t_{O} }} = Q \mathop \sum \limits_{j \in p} S^{W} + \mathop \sum \limits_{i \in m} \left( {C - T_{R} } \right)\left( {q_{r} } \right)_{{{\text{MA}}}}$$
(9)

This requires

$$= Q*\left[ {\frac{{\left( {\frac{{\min \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} }}{{\max \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} }}} \right)}}{{C + S^{W} }}} \right]^{{k_{T} }} \left( {i - \frac{C}{{q_{r} }}} \right) + \frac{{\max \left( {{\text{MA}}} \right)_{m} - \min \left( {{\text{MA}}} \right)_{m} }}{{\max \left( {{\text{MA}}} \right)_{m} }}$$
(10)

From Eqs. (9) and (10), if the first multiple-answer identification in the reading system outputs one, then \(Q = 1\), \(\min \left( {{\text{MA}}} \right)_{m} = \max \left( {{\text{MA}}} \right)_{m}\), \(T_{R} = 0\), and \(R_{S} = \max \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) under combination-time processing. Hence, the combinations are swapped as \({\text{Max}}_{{t_{O} }} = \mathop \sum \limits_{i \in m} \left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q}\) or \(\max \left( {{\text{MA}}} \right)_{m}\) until the maximum response or answer \(\left[ {1 < i - \frac{C}{{q_{r} }} < Q } \right]\) is achieved in MRC. The possible combinations in the reading system instead reduce the above errors and issues caused by machines \(\left[ {i - \frac{C}{{q_{r} }},Q } \right]\). The probability of analyzing the output based on the understandability of the machine is discussed in detail below.
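Eq. (10) can likewise be checked numerically. The sketch below implements it term by term with hypothetical scalars; note that when \(\min \left( {{\text{MA}}} \right)_{m} = \max \left( {{\text{MA}}} \right)_{m}\), as in the single-answer case just discussed, the second (spread) term vanishes.

```python
def combination_time(Q, inp_min, inp_max, C, S_W, k_T, i, q_r, MA_values):
    """Eq. (10) as written; every argument is a hypothetical scalar."""
    first = Q * ((inp_min / inp_max) / (C + S_W)) ** k_T * (i - C / q_r)
    spread = (max(MA_values) - min(MA_values)) / max(MA_values)
    return first + spread

# Single-answer case: identical MA values make the spread term zero.
t = combination_time(Q=1.0, inp_min=1.0, inp_max=2.0, C=0.5, S_W=0.5,
                     k_T=1.0, i=1.0, q_r=1.0, MA_values=[4, 4])  # -> 0.25
```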

5 Understandability analysis

The previous understanding of words relies on \(Q\) and \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) for achieving the machine’s efficiency in reading and answering the input texts, reducing the machine’s cloze-style issue through deep learning. The contrary part, as in Eq. (3), is instead analyzed in MRC to maximize output accuracy. The probability of \(q_{r}\) in text reading through deep learning is computed as

$$\rho \left( {q_{r} } \right) = \frac{{\rho \left( {{\text{MA}} \cap q_{r} } \right)}}{{\rho \left( {{\text{MA}}} \right)}}$$
(11)

where

$$\rho \left( {{\text{Max}}_{{t_{O} }} } \right) = \frac{{\max \left( {q_{r} } \right) - \min \left( {q_{r} } \right)}}{{{\text{max}}\left( {q_{r} } \right)}} . \rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)$$
(12)

From Eqs. (11) and (12), the computation of \(\rho \left( {{\text{Max}}_{{t_{O} }} } \right)\) relies on the contrary part as in Eq. (1), based on the query reconstruction in Eqs. (9) and (10). For any instance of \(q_{r}\), if \({\text{MA}} < q_{r}\), an error is identified under combination time, which again results in the maximum text output. The understandability analysis is shown in Fig. 4.

Fig. 4

Understandability analysis
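Eq. (11) has the shape of a conditional probability (a joint probability divided by a marginal), and Eq. (12) is a min–max spread of \(q_{r}\) scaled by \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\). A hedged sketch of both, with hypothetical probability values:

```python
def rho_conditional(p_joint, p_marginal):
    """Conditional-probability form underlying Eq. (11):
    rho(MA intersect q_r) divided by a marginal probability."""
    return p_joint / p_marginal

def rho_max_tO(qr_values, rho_input_text):
    """Eq. (12): min-max spread of q_r scaled by rho(Input_text)."""
    spread = (max(qr_values) - min(qr_values)) / max(qr_values)
    return spread * rho_input_text

p = rho_conditional(p_joint=0.3, p_marginal=0.6)          # -> 0.5
r = rho_max_tO(qr_values=[0.2, 0.8], rho_input_text=0.9)  # -> 0.675
```

When all \(q_{r}\) values coincide, the spread term is zero and \(\rho \left( {{\text{Max}}_{{t_{O} }} } \right)\) vanishes, mirroring the single-answer case of Eqs. (9) and (10).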

The understandability analysis relies on \(\rho \left( {{\text{Max}}_{{t_{0} }} } \right)\) (Fig. 4) for detecting \(r\) while surpassing the cloze-style issue. In this process, the training \(\forall {\text{Max}}_{{t_{o} }}\) [as in Eq. (6)] is provided \(\forall \rho \left( {q_{r} } \right)\) before classification; following the classification, \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\) is required for high/low \(r\forall m \in p\). The end of this process is either a training recommendation or a classified output. The training requires new \({\text{MA}}\) and \(C\) instigating \({\text{Max}}_{{t_{o} }}\) [Eq. (6)]. Contrarily, the final probability verification \(\forall Q = 1\) output relies on \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\) [Eq. (12)], which includes the \({\text{MA}}\) and \(C\) combinations. The input texts are observed from human languages by the machines and trained over successive iterations. Therefore, the maximum text output relies on the keyword training and substitution processes, and \(Q\) helps reduce the cloze-style issue and improve the understandability of the machine for the condition \(0 < Q < 1\) in any \(m\) with different input queries.

Training is performed such that the maximum text output can be achieved at any moment if the condition \({\text{Max}}_{{t_{O} }} = k_{T}\) holds with \(Q = 1\) and \(C \forall i \in m = S^{W} \forall j \in p\). By analyzing the largest amount of text produced under the conditions \(Q = 1\) and \(Q \in \left[ {0,1} \right]\), the cloze problem is solved to achieve the maximum text output, improving the accuracy of machine-based human-language analysis through query reconstruction and deep learning.

The various query keywords are interchanged and reconstructed using a single combination and multiple replies. To carry out the swapping process \(S^{W}\) over multiple answers, this article calculates the feasible combination condition. Replacement of potential query keywords is based on the readability and understandability of human languages, which is evaluated by the individual \(C\), \(m \in p\) scores in the first improvement step \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) with \(Q = 1\) for query reconstruction. The output-improvement probability and cloze-issue probability are validated beforehand to enhance query detection in either the direct or the reconstructed format. The deep learning process is aided by training more new keywords and integrating them for classifying errors and detecting crisp inputs. Therefore, the proposed output is improved using improved alternatives for leveraging the intensity.
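The keyword interchange described above can be illustrated with a minimal, assumption-laden sketch: the string-level replacement and the tiny synonym table below are hypothetical stand-ins for the swapping process \(S^{W}\), shown only to make the reconstruct-by-replacement idea concrete; a real system would derive replacement candidates from the trained keyword model.

```python
# Hypothetical synonym table -- not the paper's trained keyword model.
SYNONYMS = {"escaped": ["fled", "got away from"], "tower": ["keep"]}

def reconstruct(query):
    """Produce reconstructed query variants by swapping each known
    keyword for each of its possible replacements."""
    variants = []
    for word, substitutes in SYNONYMS.items():
        if word in query:
            for sub in substitutes:
                variants.append(query.replace(word, sub))
    return variants

variants = reconstruct("who escaped the tower")
# Three variants: two swaps for "escaped", one for "tower".
```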

6 Discussion

The MRC dataset [30] is used to validate the proposed model on the cloze-style issue. The dataset contains two classifications, for questions and answers independently. Nearly 60 text paragraphs (stories) are used in this dataset, containing \(60 \times 4\) combinations of answers. A maximum of 400 \({\text{MA}}\) are used for training using 4 combinations. First, the cloze-style issue is described using Fig. 5.

Fig. 5

Cloze style issue

The cloze-style issue is portrayed in Fig. 5 for the query “who escaped the tower?” from the input dataset. The answers are the \({\text{MA}}\), i.e., “Mother,” “Princess,” “Man,” and “John.” The keyword “who” matches all 4 answers, resulting in a cloze-style issue. The \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\) achieves a high value for the obtained query by reconstructing it. The reconstruction requires \(S^{W} \forall {\text{MA}}\) and the \(C\) associated with the keywords. The learning process performs the maximum training by filtering the keywords in \(Q\). Based on the filtering process, the answers are mapped to the actual \({\text{Input}}_{{{\text{text}}}}\). The combination matching both the input text and the answer is used for training. The highest values, i.e., \(Q = 1\) and \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\), are used for identifying the precise answer. The training for the maximum \(C\) for the above query is illustrated in Fig. 6.

Fig. 6

Training for maximum \(C\)
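The cloze-style issue of Fig. 5 can be mimicked with a toy scorer. The passage and the scoring rule below are illustrative only (the actual dataset story is not reproduced here); the point is that all four candidates satisfy the bare “who” keyword, so disambiguation must come from matching a candidate against the input text itself.

```python
candidates = ["Mother", "Princess", "Man", "John"]
passage = "the princess escaped the tower at night"   # stand-in passage

def score(answer, text, query_keywords=("escaped", "tower")):
    """1 if the candidate appears in the passage and the passage covers
    the query keywords, else 0 -- a deliberately crude filter."""
    t = text.lower()
    appears = answer.lower() in t
    covered = all(k in t for k in query_keywords)
    return int(appears and covered)

scores = {a: score(a, passage) for a in candidates}
best = max(scores, key=scores.get)   # -> "Princess"
```

Only the candidate that co-occurs with the query keywords in the passage survives, which is the mapping-to-\({\text{Input}}_{{{\text{text}}}}\) step the training process automates.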

In the training process, the new \(CA\) from the \({\text{Input}}_{{{\text{text}}}}\) are used for identifying the matches. Therefore, the maximum combination converging \(\forall Q = 1\) with \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\) is identified as the precise response \(\left( r \right)\). From the input dataset, two self-assessments of the above discussion are made. In the first, the \(q_{r}\) and cloze-style issue factors are analyzed (Fig. 7).

Fig. 7

\(q_{r}\) and cloze style analysis

The available input texts perform two possible operations for \({\text{MA}}\) and \(C\) over \(q_{r}\) and the cloze-style issue. The \(q_{r}\) is lower for \({\text{MA}}\) than for \(C\), as the keywords (existing and new) are numerous for the varying inputs. Considering the accuracy of the convergence in \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\), the \(q_{r}\) is confined. As the reconstruction is confined, the available keywords (no new \(C\)) alone are utilized for further \(r\). Therefore, the cloze-style issue is smaller for \({\text{MA}}\) than for \(C\). The \(\rho \left( {Q_{k} } \right)\) and \({\text{Su}}_{b}\) are concurrently utilized for improving the training across multiple inputs (Fig. 7). The cloze-style issue for \(S^{W}\) and \({\text{Su}}_{b}\) over the varying \({\text{Input}}_{{{\text{text}}}}\) is presented in Fig. 8.

Fig. 8

\(S^{W}\) and \({\text{Su}}_{b}\) for input/ Q

In the above representation, \(S^{W}\) is varied for 10 queries and 60 input texts, and the required \({\text{Su}}_{b}\) is estimated. As the input varies, \(S^{W}\) increases whereas \({\text{Su}}_{b}\) fluctuates strongly. Conversely, as \(Q\) varies, \(S^{W}\) fluctuates whereas \({\text{Su}}_{b}\) increases steadily. This is due to the inclusion of \(C\) and the \(Q\) condition for \(\rho \left( {{\text{Max}}_{{t_{o} }} } \right)\). The learning process classifies the \(Q < 1 \forall m \in p\) cases such that new \({\text{MA}}\) are required. Therefore, a steady increase in both factors is observed across multiple \(Q\) and input instances. The fluctuating instances require a precise learning iteration to achieve high precision (Fig. 8).

7 Comparison description

The metrics keyword accuracy, cloze error, combination time, substitutions, and query detection are analyzed in the comparison study, with the queries and reading inputs varied accordingly. The comparison is performed with the LWEBA [22], DUMA [17], and CLCRS [29] methods. DUMA (dual multi-head co-attention) uses a dual multi-head attention mechanism to capture both query and context information for answering questions; CLCRS (contrastive learning in context representation space) uses contrastive learning to acquire more discriminative context representations; and LWEBA is a lexical weighted ensemble bagging algorithm that combines multiple models under various lexical weighting schemes to achieve better results. The persistent comprehensive model with query reconstruction (PCM-QR) is a different approach that aims to boost performance by reconstructing the query during modeling; specifically, it exploits high-quality query representations through a query reconstruction technique. The proposed method and the existing algorithms share the use of deep learning to model text data, but their approaches to enhancing performance differ widely: the proposed method focuses on enhancing query representations to better inform the modeling process, whereas the existing algorithms emphasize either improving context representations or leveraging multiple models.

8 Keyword accuracy

In Fig. 9, the MRC is introduced for augmenting the readability and understandability of input texts, increasing the machine’s efficiency; the NLP testing serves as the input for the reading system. Addressing multiple output delivery, referred to as the cloze-style issue, leads to query reconstruction technology relying on text reading at different instances. The machines grasp the input texts based on previously understood words and possible query keywords over successive iterations for analyzing the understandability, whereas the possible combination causes errors in the reading system. This cloze error is addressed by the machine by analyzing the maximum text output for the conditions \(Q = 1\) and \(Q \in \left[ {0,1} \right]\) over the successive iterations. In this article, query reconstruction and understandability are the critical factors, preventing errors under combination time. Hence, the input-text errors in the reading system are reduced using query reconstruction, achieving high keyword accuracy due to the maximum individual combinations.

Fig. 9

Keyword accuracy

9 Cloze errors

The PCM using QR for identifying the multiple answers or responses in MRC relies on human-language observation through a deep learning process under combination time, as illustrated in Fig. 10. This proposed model achieves a low cloze-style error by validating the continuous input-text observations, segregating keywords at different time instances, and scrutinizing the output accuracy. In this MRC, the possible query keywords are analyzed at the different time instances of \(1 - \left[ {\frac{{\left( {{\text{Input}}_{{{\text{text}}}} } \right)_{Q} }}{{\sum \left( {R_{S} + T_{R} } \right)_{m} }}} \right]\) that are irrelevant for the different input queries, and the probability of \(Q\) and \(r\) is computed. The NLP is aided for testing and increases the machine’s efficiency in addressing the cloze-style issue. Error detection is based on query reconstruction technology and deep learning, processed until the maximum text output is obtained. The observation of the different input queries proceeds using the estimations in Eqs. (3), (4), (5), (6), (7), and (8). In this proposed model, similar words are identified using the swapping process, which relies on the possible combinations in text reading under two conditions for further validation of the similarity analysis. Therefore, the cloze error is lower compared to the other methods in MRC.

Fig. 10

Cloze errors

10 Combination time

Figure 11 shows the maximum-text-output computation for reading and responding to the input texts through query reconstruction technology for cloze-style issue detection in MRC. The readability and understandability of NLP augment the machines’ efficiency at any instance using a possible combination. The observed input texts rely on the keyword training and substitution processes together with the previously understood words observed by the machines. The available text-output analysis and the cloze-style issue are considered for improving the understandability of the machine for the instance \(\left( {{\text{Input}}_{{{\text{text}}}} } \right)_{r} = R_{S}\), \(q_{r}\), and \(\rho \left( {{\text{Input}}_{{{\text{text}}}} } \right)\) in a consecutive manner of processing MRC. For the different input queries, the keywords used in the possible combination for reconstruction are replaced. The top individual combinations are identified using the replaced query keywords from the machines and are analyzed through deep learning for better understandability and further computation, preventing cloze errors. The available input texts from the machine are used for reading queries and responses, for which the training intensity yields a low combination time.

Fig. 11

Combination time analysis

11 Substitutions

The multiple output delivery and cloze-style issue detection rely on input-text analysis for an appropriate query in MRC processing, as represented in Fig. 12. This proposed model achieves a high substitution rate by computing the query and response from the user. In this manner, the possible query-keyword-based combination analysis at any time limits the combination time for identifying multiple answers. The human-language analysis by machines through query reconstruction and deep learning is validated until the maximum text output is reached. The observed input texts depend on the different input queries over the successive iterations, with keyword substitution and training via deep learning reconstructing queries by replacing the possible query keywords. Reducing the cloze errors in the reading system to improve reading and responding to the input texts using the top combinations in text reading helps identify multiple answers under combination time. The multiple output delivery is based on input-text retention with the \({\text{Max}}_{{t_{O} }}\) estimation. Therefore, query reconstruction is computed to maximize the machine’s efficiency through similarity analysis using deep learning, yielding high substitutions.

Fig. 12

Substitutions

12 Query detection

This proposed model achieves high query detection in input-text reading and relies on addressing the multiple-output-delivery issue in MRC. Query reconstruction is aided by improving the understandability of the machine, in which the training intensity is tuned over successive iterations, as depicted in Fig. 13. The cloze errors and combination time are mitigated relying on \(Su_{b}\) and \(k_{T}\) for input-text processing through deep learning, with irrelevant keywords discarded for the different input queries. Query reconstruction is used to address the cloze error, and the maximum individual combinations reduce the multiple responses simultaneously. The individual-combination analysis for the maximum text output relies on the instances of \({\text{Max}}_{{t_{O} }} = {\text{Su}}_{b} \left( {i - \frac{C}{{q_{r} }}} \right) + k_{T} \left( {{\text{Su}}_{b} + S^{W} } \right)\forall i \in m\) and \(j \in p\), respectively. The observation of the input-text analysis through query reconstruction and deep learning for the substitution and swapping processes compares the available keywords with the previously understood words in a consecutive manner. The understandability of the machine is computed for augmenting the query reconstruction, and the output accuracy for performing the substitution and training relies on the other factors; therefore, the query detection is high, and the understandability also increases. The above study is tabulated in Tables 1 and 2 for the varying queries and reading inputs.

Fig. 13

Query detection

Table 1 Comparison of queries
Table 2 Comparison of reading inputs

For the varying queries (Table 1), the proposed model improves keyword accuracy, substitutions, and query detection by 14.74%, 15.83%, and 10.31%, respectively, and reduces cloze error and combination time by 17.39% and 11.96%.

For the varying reading inputs (Table 2), the proposed model improves keyword accuracy, substitutions, and query detection by 13.39%, 15.87%, and 9.76%, respectively, and reduces cloze error and combination time by 19.42% and 11.9%.

13 Conclusion

This article introduced a persistent comprehensive model using query reconstruction to improve machine reading comprehension detection performance. The proposed model addresses the cloze-style issue regardless of the queries by improving the understandability feature. In this model, deep learning is employed for keyword swapping and word substitutions. The process relies on the maximum possible combinations observed in classifying multiple answers/keywords. Once the query reconstruction is performed, conditional filtering converges the understandability across the various queries. The output-maximization probability and cloze-issue probability are validated beforehand to maximize query detection in either the direct or the reconstructed format. The learning process is aided by training multiple new keywords and combinations for classifying errors and detecting precise inputs. Therefore, the proposed output is improved using improved substitutions for leveraging the intensity; considering the success of the training iterations, the intensity is varied to prevent additional combination time. The proposed model improves keyword accuracy, substitutions, and query detection by 13.39%, 15.87%, and 9.76%, and reduces cloze error and combination time by 19.42% and 11.9%, respectively, for the varying inputs.