1 Introduction

Discourse coherence refers to the degree to which the various components of a discourse are logically interconnected and contribute to a clear and meaningful message [1]. Analyzing coherence can greatly benefit numerous natural language processing tasks, such as text generation [2], summarization [3] and essay scoring [4, 5].

In essay scoring tasks, a student's language proficiency is measured along many dimensions, such as lexical sophistication, grammatical errors, content coverage and discourse coherence [6]. Since coherence is a key property of a well-written essay, coherence assessment plays an essential role in the task.

In this work, we argue that two key aspects should be considered when evaluating the coherence of an essay. The first aspect is the logical coherence between sentences. The content of the essay should demonstrate a clear progression of ideas, with sentences and paragraphs closely connected and unfolding in logical order. Factors that may negatively impact the logical coherence between sentences include the improper use of discourse connectives and a lack of logical relationships between contexts. The second aspect is the appropriateness of punctuation. Proper punctuation is essential for clarifying the structure and organization of the essay. It can help establish logical connections between sentences, making the text easier to understand. Inappropriate punctuation can lead to confusion and disrupt the smooth flow of the text.

In this work, we propose a feature-based coherence-scoring framework. We employ two feature extractors to address the two essential aspects of coherence. Specifically, the first feature extractor is a local discriminative model [7], while the second is a punctuation correction model [8]. The local discriminative model takes two or three consecutive sentences as input and estimates the probability that the sequence is locally coherent. We split the essay into successive sentence sequences and feed each one to the model; after inference, we compute the ratio of coherent sequences to the total number of sequences. The punctuation correction model examines the essay's punctuation usage, focusing explicitly on identifying redundant, missing, and misused commas and periods.

Following the feature extractors, we employ a regression model to map the features onto a final global coherence score. Linear regression is a simple yet transparent way to combine features. However, when the data exhibit non-linear relationships, models such as random forest regression, gradient-boosted regression trees (GBRT), and neural networks outperform linear regression. A non-linear model, though, may overfit the data and undermine the validity of automated scores. To address this issue, we impose constraints on the input features to maintain linguistically-informed monotonicity, thereby enhancing scoring transparency and improving the model's generalization ability.

Consequently, we present a scoring model that utilizes GBRT with monotonic constraints on the input features. We assume that the ratio of locally coherent sequences to the total number of sequences in the essay correlates positively with global coherence, and we therefore apply an increasing constraint to this feature. We further assume that the counts of redundant, missing, and misused commas and periods correlate negatively with global coherence, and we therefore impose a decreasing constraint on these features.

In summary, our contributions are as follows:

  • We propose a novel coherence scoring model consisting of a scorer with two feature extractors, i.e., a local discriminative model and a punctuation correction model. We show that a local discriminative model with a longer contextual input performs better on the subsequent scoring tasks than one trained on consecutive sentence pairs alone.

  • We implement linguistically-informed monotonicity constraints on the input features to enhance the generalization ability in scoring essay coherence.

  • Experiments on the LEssay dataset demonstrate the effectiveness of our proposed methods, and we achieved third place on Track 1 of NLPCC 2023 Shared Task 7.

At the end of this paper, we briefly overview our solutions to the remaining tracks of NLPCC 2023 Shared Task 7. The code is available at

https://github.com/chernzheng/nlpcc2023_shared_task7_ouchnai_solutions.

2 Related Work

Coherence Modeling. The early development of models for coherence analysis was influenced by lexical cohesion [9], i.e., the sharing of identical or semantically related words in nearby sentences. Reference [10] introduced the concept of lexical chains and demonstrated that the number and density of lexical chains correlate with the topic structure. Reference [11] introduced the TextTiling algorithm, showing that sentences or paragraphs within a subtopic exhibit higher cosine similarity than those in neighbouring subtopics. Reference [12]'s LSA Coherence method pioneered the use of embeddings in studying coherence between sentences.

Modern neural representation-learning coherence models [7, 13, 14] incorporate insights from early unsupervised coherence models for learning sentence representations and assessing their transformations between adjacent sentences. These models are designed to differentiate between natural and unnatural discourses based on deep neural networks.

Automated Chinese Essay Scoring. Reference [15] implemented LDA to score Chinese essays. Reference [16] enhanced the accuracy of Chinese AES by recognizing beautiful sentences and incorporating them as literary features. Reference [17] assessed the organizational score of high school argumentative essays. Reference [18] investigated cross-prompt holistic scoring on four distinct essay sets, with articles in each dataset responding to a distinct prompt. Reference [19] proposed a multi-task learning framework for the Chinese AES and an inter-sequence attention mechanism to enhance information interaction between the different trait tasks.

3 Method

The architecture of our coherence scoring model is presented in Fig. 1. The model consists of three components: a local discriminative model, a punctuation correction model, and a scorer. The local discriminative model evaluates the local coherence of consecutive sentences in the essay. The punctuation correction model identifies inappropriate punctuation usage. The scorer maps the features extracted by the two models into a final coherence score for the essay.

Fig. 1. The architecture of our coherence scoring model. The punctuation correction model outputs six features: num_del_comma, num_ins_comma, num_rep_comma, num_del_period, num_ins_period, and num_rep_period, on which decreasing constraints are enforced in the subsequent scoring process. The local discriminative model outputs one feature, num_coh_norm, on which an increasing constraint is enforced.

3.1 Local Discriminative Model

Our local discriminative model is similar to that of Ref. [7], but we employ BERT as the encoder and treat the problem as a text classification task. Reference [7] proposed a scoring model to differentiate between consecutive sentence pairs in the training corpus, which are assumed to be coherent, and constructed incoherent ones. We extend the input sequence to three consecutive sentences rather than two and compare the effect of context length on the performance of the subsequent scoring tasks.

For the case of sentence pairs, the input sequence is represented as [CLS] + Sentence A + [SEP] + Sentence B, where segment embeddings distinguish between the two sentences. For an essay with n sentences, let \(s_{i}\) denote the i-th sentence. We construct negative training samples by replacing one of the sentences, \(s_{i}\) or \(s_{i+1}\), with another sentence, \(s_{j}\) (\(j \ne i, i+1\)), from the same essay. The trained model is denoted as LD-Bisent.

For the case of three sentences, the input sequence is set as [CLS] + Sentence A + Sentence B + Sentence C, without using the special token [SEP] to separate them. We randomly substitute one sentence, \(s_{i}\), \(s_{i+1}\), or \(s_{i+2}\), with \(s_{j}\) (\(j \ne i, i+1, i+2\)) from the same essay as the negative training sample. The trained model is denoted as LD-Trisent.
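
For concreteness, here is a minimal sketch of this negative-sample construction; the function and variable names are our own, and the paper specifies only the replacement rule:

```python
import random

def make_negative(sentences, start, width):
    """Corrupt a window of `width` consecutive sentences starting at `start`
    by replacing one of them with another sentence from the same essay."""
    window = list(sentences[start:start + width])
    pos = random.randrange(width)  # which position in the window to corrupt
    candidates = [j for j in range(len(sentences))
                  if j < start or j >= start + width]  # j != i, i+1, (i+2)
    window[pos] = sentences[random.choice(candidates)]
    return window  # an incoherent sequence, used as a negative sample

# width=2 yields LD-Bisent training samples; width=3 yields LD-Trisent ones.
essay = ["sent 0", "sent 1", "sent 2", "sent 3", "sent 4"]
negative_pair = make_negative(essay, start=1, width=2)
```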

The model uses the final hidden vector \(C \in R^{H}\) (in our case, Chinese-RoBERTa-wwm-ext-large [20], H=1024) corresponding to the first input token [CLS] as the aggregate representation. The classification layer has weights \(W \in R^{K \times H}\), where K is the number of labels; in our case, \(K=2\) for coherent versus incoherent sequences. We compute a standard classification loss as \(\log(\text{softmax}(CW^{T}))\).
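
The loss above is the standard classification objective; a minimal PyTorch sketch follows, with a random vector standing in for the encoder output and the label convention assumed:

```python
import torch
import torch.nn.functional as F

H, K = 1024, 2                    # hidden size and number of labels (Sect. 3.1)
C = torch.randn(1, H)             # final hidden vector of the [CLS] token
W = torch.randn(K, H, requires_grad=True)   # classification layer weights

logits = C @ W.T                  # shape (1, K)
label = torch.tensor([1])         # assumed convention: 1 = coherent
loss = F.cross_entropy(logits, label)  # = -log softmax(C W^T)[label]
```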

3.2 Punctuation Correction Model

Our punctuation correction model is composed of two components. The first component, called the punctuation restoration model, accepts punctuation-free input text and predicts a label for each token, indicating the punctuation that should follow it: a comma, a period, or no punctuation. The second component is a misused-case classifier, which compares the punctuation-restored text with its original counterpart and determines the type of error the author has made. For instance, consider a sentence written by the author (given here in translation):

(I ran late for school one day and recklessly charged through the red light.)

To begin with, we remove the punctuation from the sentence. Next, we input the punctuation-free sentence into the punctuation restoration model. The model predicts that one token should be followed by a comma, another should be followed by a period, and that no punctuation follows the remaining tokens, which yields the punctuation-restored sentence. Subsequently, the misused-case classifier aligns the punctuation-restored sentence with its original counterpart and identifies that a comma has been erroneously used after one of the tokens.

The punctuation restoration model is built upon a token classification model. We remove all punctuation marks from the original text and then pass it through a BERT encoder to obtain the final hidden vector for each input token, \(T_i \in R^H\). The probability of token i belonging to one of the labels {0, 1, 2} is computed as softmax\((S \cdot T_i)\), where \(S \in R^{K \times H}\) is the weight matrix of the final layer to be learned. Here, label 0 signifies that the token is not followed by punctuation, label 1 indicates that it is followed by a comma, label 2 indicates that it is followed by a period, and \(K = 3\) is the number of labels.
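
A minimal sketch of inference with this token classifier using Hugging Face transformers; the checkpoint id is the public release of Chinese-RoBERTa-wwm-ext-large, and fine-tuning on the punctuation-removed corpus is omitted:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "hfl/chinese-roberta-wwm-ext-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=3)

text = "..."  # a punctuation-free input sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (1, seq_len, 3)
labels = logits.argmax(dim=-1)[0].tolist()     # 0 = none, 1 = comma, 2 = period
```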

The misused-case classifier uses a sequence-matching algorithm to align the punctuation-restored texts with their original counterparts. We then count the instances of redundant, missing, and misused punctuation in the essay. For simplicity, all colons within the dataset are converted into commas; semicolons, question marks, and exclamation marks are replaced with periods; and other punctuation marks are disregarded.
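
The paper does not name the sequence-matching algorithm; the sketch below uses Python's difflib.SequenceMatcher as one plausible choice, assuming the restored text differs from the original only in punctuation:

```python
from difflib import SequenceMatcher

def count_punct_errors(original, restored, marks="，。"):
    """Align the original and punctuation-restored texts and count
    redundant ('delete'), missing ('insert'), and misused ('replace')
    commas and periods."""
    counts = {"redundant": 0, "missing": 0, "misused": 0}
    matcher = SequenceMatcher(None, original, restored)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "delete":    # in original only: redundant punctuation
            counts["redundant"] += sum(c in marks for c in original[i1:i2])
        elif op == "insert":  # in restored only: missing punctuation
            counts["missing"] += sum(c in marks for c in restored[j1:j2])
        elif op == "replace": # marks differ: misused punctuation
            counts["misused"] += sum(c in marks for c in original[i1:i2])
    return counts
```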

3.3 Scorer

The scorer takes the features extracted by the above two models as input. One feature is the ratio of coherent sequences to the total number of sequences in the essay (num_coh_norm). The remaining features are the numbers of redundant, missing, and misused commas (num_del_comma, num_ins_comma, and num_rep_comma) and periods (num_del_period, num_ins_period, and num_rep_period).

We employ the aforementioned features as input and utilize a GBRT scorer with monotonic constraints to map these features into a final global coherence score. We impose a decreasing constraint on all features extracted from the punctuation correction model because these features characterize the inappropriateness of punctuation. For each feature \(x_i \in \{\)num_del_comma, num_ins_comma, num_rep_comma, num_del_period, num_ins_period, num_rep_period\(\}\), the model satisfies

$$\begin{aligned} \text{GBRT}(x_1, \ldots , x_i, \ldots , x_n) \ge \text{GBRT}(x_1, \ldots , x'_i, \ldots , x_n) \end{aligned}$$
(1)

whenever \(x_i \le x'_i\). We impose an increasing constraint on the feature \(x_j =\) num_coh_norm because this feature captures the local coherence between adjacent sentences. The model satisfies

$$\begin{aligned} \text{GBRT}(x_1, \ldots , x_j, \ldots , x_n) \le \text{GBRT}(x_1, \ldots , x'_j, \ldots , x_n) \end{aligned}$$
(2)

whenever \(x_j \le x'_j\).
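
To make the monotonic constraints concrete, here is a minimal sketch with xgboost; the paper does not name a GBRT implementation, the hyperparameters follow Sect. 4.2, and the feature order, label encoding, and training data are illustrative:

```python
import numpy as np
import xgboost as xgb

FEATURES = ["num_coh_norm", "num_del_comma", "num_ins_comma", "num_rep_comma",
            "num_del_period", "num_ins_period", "num_rep_period"]

# +1 = increasing constraint (local coherence ratio),
# -1 = decreasing constraint (punctuation-error counts).
monotone = (1, -1, -1, -1, -1, -1, -1)

model = xgb.XGBRegressor(
    n_estimators=30,      # number of boosted trees (Sect. 4.2)
    max_depth=4,          # maximum depth of base learners (Sect. 4.2)
    learning_rate=1.0,    # as in Sect. 4.2
    monotone_constraints=monotone,
)

# X: one row of the seven features per essay; y: gold coherence level
# (e.g. poor=0, moderate=1, excellent=2). Both are placeholders here.
X = np.random.rand(50, len(FEATURES))
y = np.random.randint(0, 3, size=50)
model.fit(X, y)
score = model.predict(X[:1])  # continuous score, later mapped to a level
```

Under these constraints, no tree split can make the predicted score decrease as num_coh_norm grows, or increase as any error count grows.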

We compare our proposed scoring model against two regression baselines: a linear model and a random forest model. We also compare the performance of our model under different configurations, i.e., the scorer with or without monotonic constraints and the local discriminative model with different context lengths.

4 Experiments

4.1 Datasets

LEssay Dataset. The LEssay dataset consists of four sub-datasets corresponding to four tasks, all related to the coherence evaluation of Chinese student essays. The first sub-dataset is dedicated to the global coherence evaluation task. It includes a training set of 50 essays, a validation set of 10 essays, and a test set of 5,000 essays. All of these essays are written in Chinese by middle school students and rated for coherence on three levels: excellent, moderate, and poor. The remaining three sub-datasets are allocated to the topic sentence extraction, paragraph logical relation recognition, and sentence logical relation recognition tasks, respectively.

These four tasks are interconnected, and a model trained on one sub-dataset can potentially contribute to another task. However, in this study, the global coherence scoring model is trained only on the first sub-dataset and two external datasets. The external datasets, namely the Chinese essay dataset for pre-training [18] and the IWSLT2012-Zh dataset for punctuation restoration [21], are used to train the feature extractors for the scoring model, while the global coherence scores of the first sub-dataset are used to train the scorer.

Chinese Essay Dataset for Pre-training. The dataset comprises 93,002 essays authored by Chinese students in grades 7 to 12, covering various topics and genres, such as narrative, argumentative, and expository essays.

We utilized this dataset to train the local discriminative model. In practice, we excluded essays with the lowest rating (rating 1) due to poor writing quality. We divided each remaining essay into consecutive sentence pairs or sentence triples, assuming their coherence, and constructed incoherent counterparts as described in Sect. 3.1. We generated 4.3 million positive and an equal number of negative training samples for LD-Bisent, and 3.1 million positive and an equal number of negative training samples for LD-Trisent.

IWSLT2012-Zh Dataset. The dataset consists of 150k Chinese sentences from TED talk transcripts. We only predict commas and periods; question marks are converted to periods for simplicity.

4.2 Experimental Settings

We use the pre-trained Chinese-RoBERTa-wwm-ext-large model to fine-tune the local discriminative and punctuation correction models. For the random forest scorer, we set the number of trees in the forest to 30 and kept the other parameters at their default values. For the GBRT scorers, we set the number of boosted trees to 30, with a maximum tree depth of 4 for base learners. The learning rate is set to 1, and all other parameters are left at their default values.

We use precision, recall, and macro F1-score to evaluate the effectiveness of coherence identification. Precision is calculated by dividing the number of correctly identified coherence levels (excellent, moderate, and poor) by the total number of identified coherence levels. Recall is calculated by dividing the number of correctly identified coherence levels by the total number of labelled coherence levels.
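
For reference, these macro-averaged metrics can be computed with scikit-learn; the gold and predicted labels below are purely illustrative:

```python
from sklearn.metrics import precision_recall_fscore_support

gold = ["excellent", "moderate", "poor", "moderate", "poor"]
pred = ["excellent", "poor", "poor", "moderate", "moderate"]

# Macro averaging treats the three coherence levels equally.
precision, recall, f1, _ = precision_recall_fscore_support(
    gold, pred, average="macro", zero_division=0
)
print(f"P={precision:.3f} R={recall:.3f} macro-F1={f1:.3f}")
```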

Table 1. Comparison of regression models

4.3 Results

Table 1 presents the results of each regression model. In the experiment, we used the LD-Trisent feature extractor in linear and random forest regressions.

Our findings suggest that the GBRT model with monotonic constraint using LD-Trisent (GBRT w/ MC (Tri-sent)) performs better in terms of precision and recall compared to the same model without enforcing monotonic constraint (GBRT (Tri-sent)). Furthermore, this model demonstrates improvements in precision, recall, and macro F1 score compared to the same model using LD-Bisent (GBRT w/ MC (Bi-sent)) and LD-Bisent without enforcing monotonic constraint (GBRT (Bi-sent)). Additionally, this model exhibits superior performance in macro F1 score compared to both linear and random forest regressions.

Our results show that training local coherence models to predict longer contexts than just consecutive pairs of sentences can result in better performance on subsequent scoring tasks, which agrees with the previous study on discourse representation [22].

5 Our Solutions to the Remaining Tracks of NLPCC 2023 Shared Task 7

5.1 Text Topic Extraction (Track 2)

This task aims to identify the topic sentence for each paragraph and one overall topic sentence for a given middle school student essay.

In our approach, we employ two token classification models to identify both paragraph-level and overall topic sentences. The first model accepts the essay title concatenated with a paragraph as input. For each token, it outputs a label indicating whether the token belongs to the topic sentence of the paragraph (designated as a key token). The topic sentence of each paragraph is determined by the ratio of key tokens to the total number of tokens within each sentence: we select the sentence with the highest ratio as the topic sentence of that paragraph. The model is fine-tuned from Chinese-RoBERTa-wwm-ext-large.

The second model is similar to the first, but its input sequentially concatenates the essay title with the topic sentences of all paragraphs. We assume that the overall topic sentence is one of the paragraph topic sentences and determine it by calculating the ratio of key tokens to the total number of tokens within each paragraph topic sentence, selecting the sentence with the highest ratio as the overall topic sentence (a sketch of this ratio-based selection follows). The second model is fine-tuned from the first.
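
A minimal sketch of the key-token-ratio selection used by both models; the data layout (token lists with parallel 0/1 key-token flags from the classifier) is an assumption:

```python
def select_topic_sentence(sentences, key_token_flags):
    """sentences: list of token lists; key_token_flags: parallel lists of
    0/1 predictions from the token classification model."""
    best, best_ratio = None, -1.0
    for sent, flags in zip(sentences, key_token_flags):
        ratio = sum(flags) / max(len(sent), 1)  # key tokens / total tokens
        if ratio > best_ratio:
            best, best_ratio = sent, ratio
    return best  # the (paragraph or overall) topic sentence
```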

The evaluation results are shown in Table 2. Our approach achieved second place in Track 2.

Table 2. The result of text topic extraction.

5.2 Paragraph Logical Relation Recognition (Track 3)

The task aims to determine the logical relationship between two consecutive paragraphs of an essay. The logical relationships include co-occurrence, inversion, explanatory, and superior-subordinate relationships.

Our approach regards the paragraph-level logical relation recognition task as a sequence classification problem. Specifically, we process a pair of paragraphs as input, and the model determines the logical relationship between these paragraphs. Considering the similarity between this task and sentence-level logical relation recognition, we chose to fine-tune the model trained for track 4.

The evaluation results for track 3 are shown in Table 3. Our approach achieved first place in the track.

Table 3. The results of paragraph-level logical relation recognition.

5.3 Sentence Logical Relation Recognition (Track 4)

The task is comparable to the previous one; however, the logical relationships are defined between sentences and comprise 12 different types.

We employ a two-stage training approach for our classification model. In the first stage, we utilize an external dataset, TED-CDB [23], to pre-train the model based on Chinese-RoBERTa-wwm-ext-large. In the subsequent stage, we fine-tune the pre-trained model on the current dataset to enhance its performance for the given task.

The evaluation results for track 4 are shown in Table 4. Our approach achieved first place in the track.

Table 4. The results of sentence-level logical relation recognition.

6 Conclusion and Future Work

In this study, we present a scoring model to assess the global coherence of Chinese student essays. The scoring model incorporates two feature extractors: a local coherence discriminative model and a punctuation correction model. Furthermore, we employ a GBRT model with linguistically-informed monotonicity constraints to convert the features into a final global coherence score.

Our findings suggest that the monotonicity constraints on the features improved the model's generalization capability, and that a local discriminative model with a context extending beyond consecutive sentence pairs achieves better performance in scoring tasks.

For future research, we will incorporate paragraph-level coherence features into the scoring model. The current model captures sentence-level coherence through the local discriminative model, but the global coherence characterized by logical relationships between paragraphs is equally important for coherence evaluation. Incorporating paragraph-level coherence features can further enhance the performance of the scoring model and provide a more accurate assessment.